From akekane at redhat.com Fri Oct 1 05:42:14 2021
From: akekane at redhat.com (Abhishek Kekane)
Date: Fri, 1 Oct 2021 11:12:14 +0530
Subject: [glance] No meeting on 07th October
Message-ID:

Hi All,

We decided to cancel our next (October 7th) weekly meeting. According to
the schedule, we will meet again on October 14th. In case of any queries,
reach us on the #openstack-glance IRC channel.

Thank you,

Abhishek

From mrunge at matthias-runge.de Fri Oct 1 07:58:57 2021
From: mrunge at matthias-runge.de (Matthias Runge)
Date: Fri, 1 Oct 2021 09:58:57 +0200
Subject: [oslo] oslo.metrics package for Fedora
In-Reply-To: <663593806.87658.1632279717177@mail.yahoo.com>
References: <1578604251.1736431.1632195525621.ref@mail.yahoo.com>
 <1578604251.1736431.1632195525621@mail.yahoo.com>
 <20210921055711.coiwgigvp22imrhd@p1.localdomain>
 <663593806.87658.1632279717177@mail.yahoo.com>
Message-ID:

On Wed, Sep 22, 2021 at 03:01:57AM +0000, Hirotaka Wakabayashi wrote:
> Hello Slawek,
>
> Thank you for your kind reply. I will use the RDO's spec file to make the
> Fedora package. :)
>
> My application packaged for Fedora is a simple notification listener using
> oslo.messaging that requires oslo.metrics. As Artem says, any packages in
> Fedora must resolve the dependencies without using RDO packages.
>
> Best Regards,
> Hirotaka Wakabayashi
>
> On Tuesday, September 21, 2021, 02:57:21 PM GMT+9, Slawek Kaplonski wrote:
>
> Hi,
>
> On Tue, Sep 21, 2021 at 03:38:45AM +0000, Hirotaka Wakabayashi wrote:
> > Hello Oslo Team,
> >
> > I am a Fedora packager. I want to package oslo.metrics for Fedora because
> > my package uses oslo.messaging, which requires oslo.metrics, as you know.
> > The oslo.messaging package repository already exists in Fedora. I will take
> > over it from the former package maintainer. The oslo.metrics repository
> > doesn't exist, so I need to make it.
> >
> > If there are any concerns with it, please reply. I can update the version
> > as soon as a new version is released, by using Fedora's release monitoring
> > system.

Sorry, I'm late in the game here. Your package IS in Fedora and uses both
oslo.metrics and also oslo.messaging?

Looking at the dist-git[1], it seems oslo.messaging has been removed from
Fedora. In order to get it back, you'll need to go through a package
review. I would suspect this could go in quickly, since there is already
a spec.

Matthias

[1] https://src.fedoraproject.org/rpms/python-oslo-messaging/tree/rawhide

--
Matthias Runge

From hberaud at redhat.com Fri Oct 1 14:25:50 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Fri, 1 Oct 2021 16:25:50 +0200
Subject: [release] Release countdown for week R-0, Oct 4 - Oct 8
Message-ID:

Development Focus
-----------------

We will be releasing the coordinated OpenStack Xena release next week, on
October 6, 2021. Thanks to everyone involved in the Xena cycle!

We are now in pre-release freeze, so no new deliverable will be created
until final release, unless a release-critical regression is spotted.

Otherwise, teams attending the virtual PTG should start to plan what they
will be discussing there!

General Information
-------------------

On release day, the release team will produce final versions of
deliverables following the cycle-with-rc release model, by re-tagging the
commit used for the last RC. A patch doing just that will be proposed soon.

PTLs and release liaisons should watch for that final release patch from
the release team.
While not required, we would appreciate having an ack from each team
before we approve it on the 16th, so that their approval is included in
the metadata that goes onto the signed tag.

Upcoming Deadlines & Dates
--------------------------

Final Xena release: October 6
Yoga PTG: October 18-22

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud

From anyrude10 at gmail.com Fri Oct 1 14:54:56 2021
From: anyrude10 at gmail.com (Anirudh Gupta)
Date: Fri, 1 Oct 2021 20:24:56 +0530
Subject: [TripleO] Issue in running Pre-Introspection
In-Reply-To:
References:
Message-ID:

Hi Team,

Upon further debugging, I found that pre-introspection internally calls
the ansible playbooks located at the path
/usr/share/ansible/validation-playbooks

The file "dhcp-introspection.yaml" has hosts mentioned as undercloud:

- hosts: *undercloud*
  become: true
  vars:
  ...
  ...

But the artifacts created for dhcp-introspection at the location
/home/stack/validations/artifacts/_dhcp-introspection.yaml_2021-10-01T11
have a file *hosts* present which has *localhost* written into it; as a
result, when the command gets executed it gives the error
*"Could not match supplied host pattern, ignoring: undercloud:"*

Can someone suggest how these artifacts are written in TripleO and how we
can change the hosts file entry to undercloud so that it works?

The case is similar with other tasks like undercloud-tokenflush,
ctlplane-ip-range etc.

Regards
Anirudh Gupta

On Wed, Sep 29, 2021 at 4:47 PM Anirudh Gupta wrote:

> Hi Team,
>
> I tried installing Undercloud using the below link:
>
> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud
>
> I am getting the following error:
>
> (undercloud) [stack at undercloud ~]$ openstack tripleo validator run
> --group pre-introspection
> Selected log directory '/home/stack/validations' does not exist.
> Attempting to create it.
> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
> | UUID                                 | Validations                     | Status | Host_Group | Status_by_Host  | Unreachable_Hosts | Duration    |
> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
> | 7029c1f6-5ab4-465d-82d7-3f29058012ce | check-cpu                       | PASSED | localhost  | localhost       |                   | 0:00:02.531 |
> | db059017-30f1-4b97-925e-3f55b586d492 | check-disk-space                | PASSED | localhost  | localhost       |                   | 0:00:04.432 |
> | e23dd9a1-90d3-4797-ae0a-b43e55ab6179 | check-ram                       | PASSED | localhost  | localhost       |                   | 0:00:01.324 |
> | 598ca02d-258a-44ad-b78d-3877321cdfe6 | check-selinux-mode              | PASSED | localhost  | localhost       |                   | 0:00:01.591 |
> | c4435b4c-b432-4a1e-8a99-00638034a884 | check-network-gateway           | FAILED | undercloud | No host matched |                   |             |
> | cb1eed23-ef2f-4acd-a43a-86fb09bf0372 | undercloud-disk-space           | FAILED | undercloud | No host matched |                   |             |
> | abde5329-9289-4b24-bf16-c4d82b03e67a | undercloud-neutron-sanity-check | FAILED | undercloud | No host matched |                   |             |
> | d0e5fdca-ece6-4a37-b759-ed1fac31a10f | ctlplane-ip-range               | FAILED | undercloud | No host matched |                   |             |
> | 91511807-225c-4852-bb52-6d0003c51d49 | dhcp-introspection              | FAILED | undercloud | No host matched |                   |             |
> | e96f7704-d2fb-465d-972b-47e2f057449c | undercloud-tokenflush           | FAILED | undercloud | No host matched |                   |             |
> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
>
> As per the validation link,
> https://docs.openstack.org/tripleo-validations/wallaby/validations-pre-introspection-details.html
>
> check-network-gateway
>
> If gateway in undercloud.conf is different from local_ip, verify that the
> gateway exists and is reachable.
>
> Observation - In my case the IPs specified in local_ip and gateway are
> both pingable, but still this error is being observed.
>
> ctlplane-ip-range
>
> Check the number of IP addresses available for the overcloud nodes.
> Verify that the number of IP addresses defined in the dhcp_start and
> dhcp_end fields in undercloud.conf is not too low.
>
> - ctlplane_iprange_min_size: 20
>
> Observation - In my case I have defined more than 20 IPs.
>
> Similarly, for the disk related issue, I have dedicated 100 GB of space
> in /var and /
>
> Filesystem           Size  Used Avail Use% Mounted on
> devtmpfs              12G     0   12G   0% /dev
> tmpfs                 12G   84K   12G   1% /dev/shm
> tmpfs                 12G  8.7M   12G   1% /run
> tmpfs                 12G     0   12G   0% /sys/fs/cgroup
> /dev/mapper/cl-root  100G  2.5G   98G   3% /
> /dev/mapper/cl-home   47G  365M   47G   1% /home
> /dev/mapper/cl-var   103G  1.1G  102G   2% /var
> /dev/vda1            947M  200M  747M  22% /boot
> tmpfs                2.4G     0  2.4G   0% /run/user/0
> tmpfs                2.4G     0  2.4G   0% /run/user/1000
>
> Despite setting all the parameters, still I am not able to pass
> pre-introspection checks. *"No Host Matched"* is found in the table.
>
> Regards
>
> Anirudh Gupta
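For illustration, the manual workaround described above - rewriting the
generated artifact's inventory so that an "undercloud" group exists -
could look like this (the artifact path is taken from the mail above;
the group layout and the local connection setting are assumptions, not
verified TripleO behavior):

    $ cd /home/stack/validations/artifacts/_dhcp-introspection.yaml_2021-10-01T11
    $ cat hosts
    localhost
    $ cat > hosts <<'EOF'
    [undercloud]
    undercloud ansible_connection=local
    EOF

With an inventory shaped like this, the "hosts: undercloud" pattern in
dhcp-introspection.yaml should match; why the artifact is generated with
localhost in the first place remains a question for the TripleO
validations framework.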
From elod.illes at est.tech Fri Oct 1 15:52:51 2021
From: elod.illes at est.tech (Előd Illés)
Date: Fri, 1 Oct 2021 17:52:51 +0200
Subject: [tc][docs] missing documentation
Message-ID: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech>

Hi,

With this mail I want to raise multiple topics towards TC, related to
Documentation (SIG):

* This week I had the task in the Release Management Team to notify the
Documentation (Technical Writing) SIG to apply their processes to create
the new release series landing pages for docs.openstack.org. Currently
the SIG is chaired by Stephen Finucane, but he won't be around in the
next cycle so the Technical Writing SIG will remain without a chair and
active members.

* Another point that came up is that a lot of projects are missing
documentation in Victoria and Wallaby releases as they don't even have a
single patch merged on their stable/victoria or stable/wallaby branches,
not even the auto-generated patches (showing the lack of stable
maintainers of the given projects). For example compare the Ussuri [1]
and Wallaby [2] projects pages.
    - one proposed solution for this is to auto-merge the
auto-generated patches (but on the other hand this does not solve the
issue of lacking active maintainers)

Thanks,

Előd

[1] https://docs.openstack.org/ussuri/projects.html
[2] https://docs.openstack.org/wallaby/projects.html

From fungi at yuggoth.org Fri Oct 1 16:01:44 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 1 Oct 2021 16:01:44 +0000
Subject: [tc][docs] missing documentation
In-Reply-To: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech>
References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech>
Message-ID: <20211001160143.5n2e5dsm6qikopuf@yuggoth.org>

On 2021-10-01 17:52:51 +0200 (+0200), Előd Illés wrote:
[...]
> the lack of stable maintainers of the given projects
[...]

I believe that's what https://review.opendev.org/810721 is
attempting to solve, but could use more reviews.
--
Jeremy Stanley

From gmann at ghanshyammann.com Fri Oct 1 16:16:54 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 01 Oct 2021 11:16:54 -0500
Subject: [tc][docs] missing documentation
In-Reply-To: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech>
References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech>
Message-ID: <17c3ca4eed7.c6239dbe366092.5303458889627498004@ghanshyammann.com>

---- On Fri, 01 Oct 2021 10:52:51 -0500 Előd Illés wrote ----
> Hi,
>
> With this mail I want to raise multiple topics towards TC, related to
> Documentation (SIG):
>
> * This week I had the task in the Release Management Team to notify the
> Documentation (Technical Writing) SIG to apply their processes to create
> the new release series landing pages for docs.openstack.org. Currently
> the SIG is chaired by Stephen Finucane, but he won't be around in the
> next cycle so the Technical Writing SIG will remain without a chair and
> active members.
>
> * Another point that came up is that a lot of projects are missing
> documentation in Victoria and Wallaby releases as they don't even have a
> single patch merged on their stable/victoria or stable/wallaby branches,
> not even the auto-generated patches (showing the lack of stable
> maintainers of the given projects). For example compare the Ussuri [1]
> and Wallaby [2] projects pages.
>     - one proposed solution for this is to auto-merge the
> auto-generated patches (but on the other hand this does not solve the
> issue of lacking active maintainers)

Thanks, Elod, for raising the issue. This is very helpful for TC to
analyze the project status.

To solve it now, I agree with your proposal to auto-merge the
auto-generated patches and have their documentation fixed for stable
branches.

And to solve the stable branch maintainer problem, we are in the process
of changing the stable branch team structure[1]. The current proposal
keeps the global stable maintainer team as an advisory body and allows
each project team to have/manage their stable branch team as they do for
the master branch, and that team can handle/manage their stable branch
activities/members. I will try to get more attention from TC on this and
merge it soon.

On the Documentation SIG chair, we appreciate Stephen's work taking care
of it. I am adding it to the next meeting agenda, and we will also
discuss the plan at the PTG.

[1] https://review.opendev.org/c/openstack/governance/+/810721/

-gmann

> Thanks,
>
> Előd
>
> [1] https://docs.openstack.org/ussuri/projects.html
> [2] https://docs.openstack.org/wallaby/projects.html

From gmann at ghanshyammann.com Fri Oct 1 17:09:07 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 01 Oct 2021 12:09:07 -0500
Subject: [all][tc] What's happening in Technical Committee: summary 1st Oct, 21: Reading: 5 min
Message-ID: <17c3cd4bdff.e3b20c14368201.4658562949417284171@ghanshyammann.com>

Hello Everyone,

Here is this week's summary of the Technical Committee activities.

1. TC Meetings:
============
* The TC IRC meeting this week was held on Thursday, Sept 30th.
* Most of the meeting discussions are summarized in the sections below
(Completed or in-progress activities). To know more details, you can check
the complete logs @
https://meetings.opendev.org/meetings/tc/2021/tc.2021-09-30-15.00.log.html
* We will have next week's video call meeting on Oct 7th, Thursday 15:00
UTC; feel free to add topics to the agenda [1] by Oct 6th.

2. What we completed this week:
=========================
* Listed the places for projects to spread the word[2]
* Removed the stable release 'Unmaintained' phase[3]

3. Activities In progress:
==================
TC Tracker for Xena cycle
------------------------------
* TC is using the etherpad[4] for Xena cycle working items. We will be
checking and updating the status biweekly on the same etherpad.
* Current status is: 8 completed, 5 in-progress

Open Reviews
-----------------
* Five open reviews for ongoing activities[5].

Place to maintain the external hosted ELK, E-R, O-H services
-------------------------------------------------------------------------
* We discussed the technical possibility and the place to add the ELK
services maintenance[6].
* As no infra project other than OpenStack has shown interest in using and
maintaining it, we are discussing where we can fit this in OpenStack. The
TACT SIG is one place we are leaning towards. We will continue to discuss
it in the next TC meeting and prepare a draft plan.

Add project health check tool
-----------------------------------
* No updates on this since the previous week.
* We are reviewing Rico's proposal on a stats collection tool[7], with a
TODO of documenting the usage and interpretation of those stats.

Stable Core team process change
---------------------------------------
* The draft proposal resolution is still under review[8]. Feel free to
provide early feedback if you have any.
* Elod raised more issues today[9]: a few projects' stable changes (even
the auto-generated patches) are not merged, and so their stable branch doc
sites are not up. For now, we are fine with auto/single-core approval of
those auto-generated patches so we can proceed to bring the stable branch
doc sites up.

Call for 'Technical Writing' SIG Chair/Maintainers
----------------------------------------------------------
* The technical writing SIG[10] provides documentation guidance,
assistance, tooling, and style guides for OpenStack project teams.
* As you might have read in the email from Elod[9], Stephen, who is the
current chair of this SIG, is not planning to continue as chair. Please
let us know if you are interested in helping with this documentation work.

TC tags analysis
-------------------
* As discussed in the last PTG, TC is working on an analysis of the
usefulness of the Tags framework[11] and which tags can be cleaned up.
* We are still waiting for operators' responses to the email on the
openstack-discuss ML[12]. If you are an operator, please respond to the
email; based on the feedback we will continue the discussion at the PTG.

Project updates
-------------------
* Add the cinder-netapp charm to Openstack charms[13]
* Retiring js-openstack-lib [14]
* Retire puppet-freezer[15]

Yoga release community-wide goal
-----------------------------------------
* Please add the possible candidates in this etherpad [16].
* Current status: "Secure RBAC" is selected for the Yoga cycle[17].

PTG planning
----------------
* We are collecting the PTG topics in the etherpad[18]; feel free to add
any topic you would like to discuss.
* We discussed live streaming one of the TC PTG sessions like we did last
time. Once we have more topics in the etherpad, we can select the
appropriate one.

Test support for TLS default:
----------------------------------
* Rico has started a separate email thread about testing with tls-proxy
enabled[19]; we encourage projects to participate in that testing and help
to enable tls-proxy in gate testing.

4. How to contact the TC:
====================
If you would like to discuss or give feedback to TC, you can reach out to
us in multiple ways:

1. Email: you can send the email with tag [tc] on openstack-discuss
ML[20].
2. Weekly meeting: The Technical Committee conducts a weekly meeting every
Thursday at 15:00 UTC [21]
3. Office hours: The Technical Committee offers a weekly office hour every
Tuesday at 0100 UTC [22]
4. Ping us using the 'tc-members' nickname on the #openstack-tc IRC
channel.
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://docs.openstack.org/project-team-guide/spread-the-word.html
[3] https://review.opendev.org/c/openstack/project-team-guide/+/810499
[4] https://etherpad.opendev.org/p/tc-xena-tracke
[5] https://review.opendev.org/q/projects:openstack/governance+status:open
[6] https://etherpad.opendev.org/p/elk-service-maintenance-plan
[7] https://review.opendev.org/c/openstack/governance/+/810037
[8] https://review.opendev.org/c/openstack/governance/+/810721
[9] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025161.html
[10] https://governance.openstack.org/sigs/
[11] https://governance.openstack.org/tc/reference/tags/index.html
[12] http://lists.openstack.org/pipermail/openstack-discuss/2021-September/024804.html
[13] https://review.opendev.org/c/openstack/governance/+/809011
[14] https://review.opendev.org/c/openstack/governance/+/798540
[15] https://review.opendev.org/c/openstack/governance/+/807163
[16] https://etherpad.opendev.org/p/y-series-goals
[17] https://review.opendev.org/c/openstack/governance/+/803783
[18] https://etherpad.opendev.org/p/tc-yoga-ptg
[19] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html
[20] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[21] http://eavesdrop.openstack.org/#Technical_Committee_Meeting
[22] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours

From elod.illes at est.tech Fri Oct 1 17:41:28 2021
From: elod.illes at est.tech (Előd Illés)
Date: Fri, 1 Oct 2021 19:41:28 +0200
Subject: [tc][docs] missing documentation
In-Reply-To: <20211001160143.5n2e5dsm6qikopuf@yuggoth.org>
References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech>
 <20211001160143.5n2e5dsm6qikopuf@yuggoth.org>
Message-ID:

Előd

On 2021. 10. 01. 18:01, Jeremy Stanley wrote:
> On 2021-10-01 17:52:51 +0200 (+0200), Előd Illés wrote:
> [...]
>> the lack of stable maintainers of the given projects
> [...]
>
> I believe that's what https://review.opendev.org/810721 is
> attempting to solve, but could use more reviews.
Partly, as my experience (or maybe just feeling?) is that those projects
that do not even merge the bot-proposed stable patches usually have
reviewing problems on master branches as well.

From peter.matulis at canonical.com Fri Oct 1 18:00:31 2021
From: peter.matulis at canonical.com (Peter Matulis)
Date: Fri, 1 Oct 2021 14:00:31 -0400
Subject: [tc][docs] missing documentation
In-Reply-To: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech>
References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech>
Message-ID:

How does the Projects page get populated?

On Fri, Oct 1, 2021 at 11:56 AM Előd Illés wrote:

> Hi,
>
> With this mail I want to raise multiple topics towards TC, related to
> Documentation (SIG):
>
> * This week I had the task in the Release Management Team to notify the
> Documentation (Technical Writing) SIG to apply their processes to create
> the new release series landing pages for docs.openstack.org. Currently
> the SIG is chaired by Stephen Finucane, but he won't be around in the
> next cycle so the Technical Writing SIG will remain without a chair and
> active members.
>
> * Another point that came up is that a lot of projects are missing
> documentation in Victoria and Wallaby releases as they don't even have a
> single patch merged on their stable/victoria or stable/wallaby branches,
> not even the auto-generated patches (showing the lack of stable
> maintainers of the given projects).
> For example compare the Ussuri [1]
> and Wallaby [2] projects pages.
>     - one proposed solution for this is to auto-merge the
> auto-generated patches (but on the other hand this does not solve the
> issue of lacking active maintainers)
>
> Thanks,
>
> Előd
>
> [1] https://docs.openstack.org/ussuri/projects.html
> [2] https://docs.openstack.org/wallaby/projects.html

From rlandy at redhat.com Fri Oct 1 20:58:25 2021
From: rlandy at redhat.com (Ronelle Landy)
Date: Fri, 1 Oct 2021 16:58:25 -0400
Subject: [TripleO] Gate blocker - please hold rechecks
In-Reply-To: <20210930192027.tytawpypbirzylyk@yuggoth.org>
References: <20210930192027.tytawpypbirzylyk@yuggoth.org>
Message-ID:

On Thu, Sep 30, 2021 at 4:17 PM Ronelle Landy wrote:

> On Thu, Sep 30, 2021 at 3:25 PM Jeremy Stanley wrote:
>
>> On 2021-09-30 13:54:02 -0400 (-0400), Ronelle Landy wrote:
>> > We have a gate blocker for tripleo at:
>> > https://bugs.launchpad.net/tripleo/+bug/1945682
>> >
>> > This tox error is impacting tox jobs on multiple tripleo-related repos.
>> > A resolution is being worked on by infra.
>> [...]
>>
>> This was due to a regression in a bug fix change[0] which merged to
>> zuul-jobs, and the emergency revert[1] of that fix merged roughly an
>> hour ago (18:17 UTC) so should no longer be causing new failures.
>> I'm working on a regression test to exercise the tox feature TripleO
>> was using and incorporate a solution for that so we can make sure
>> it's not impacted when we re-merge[2] the original fix.
>
> Thanks for the quick resolution here.
> Failed jobs are clearing the gate and will be rechecked if needed.
>
>> [0] https://review.opendev.org/806612
>> [1] https://review.opendev.org/812001
>> [2] https://review.opendev.org/812005
>>
>> --
>> Jeremy Stanley
>
> Note that the OVB issue is still ongoing.

The OVB issue should be resolved now.

> Thanks

From radoslaw.piliszek at gmail.com Sat Oct 2 09:10:55 2021
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Sat, 2 Oct 2021 11:10:55 +0200
Subject: [tc][docs] missing documentation
In-Reply-To:
References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech>
 <20211001160143.5n2e5dsm6qikopuf@yuggoth.org>
Message-ID:

QQ - do you have a listing of missing projects handy? Or better yet: some
script to list those - that could help TC in deriving project health
criteria.

-yoctozepto

On Fri, 1 Oct 2021 at 19:42, Előd Illés wrote:
>
> Előd
>
> On 2021. 10. 01. 18:01, Jeremy Stanley wrote:
> > On 2021-10-01 17:52:51 +0200 (+0200), Előd Illés wrote:
> > [...]
> >> the lack of stable maintainers of the given projects
> > [...]
> >
> > I believe that's what https://review.opendev.org/810721 is
> > attempting to solve, but could use more reviews.
> Partly, as my experience (or maybe just feeling?) is that those projects
> that do not even merge the bot-proposed stable patches usually have
> reviewing problems on master branches as well.

From hojat.gazestani1 at gmail.com Sun Oct 3 09:45:34 2021
From: hojat.gazestani1 at gmail.com (hojat openstack-nsx-VOIP SBC)
Date: Sun, 3 Oct 2021 13:15:34 +0330
Subject: Access denied for user nova
Message-ID:

Hi

I have a problem which is described here; does anyone have an idea how to
resolve this issue?

Regards,
Hojii.
From sorrison at gmail.com Mon Oct 4 05:23:40 2021
From: sorrison at gmail.com (Sam Morrison)
Date: Mon, 4 Oct 2021 16:23:40 +1100
Subject: [kolla] skipping base if already exist
Message-ID: <9217E6F1-22AC-4579-A920-5EFA7DCCAE56@gmail.com>

Hi,

We've started to use kolla to build container images and I'm trying to
figure out if I'm doing it wrong or if it's just not how kolla works.

What I'm trying to do is not rebuild the base and openstack-base images
when we build an image for a project.

Example:

We build a horizon image and it builds and pushes up to our registry the
following:

kolla/ubuntu-source-base
kolla/ubuntu-source-openstack-base
kolla/ubuntu-source-horizon

Now I can rebuild this without having to again build the base images with

--skip-parents

But now I want to build a barbican image and I can't use --skip-parents as
the barbican image also requires barbican-base, which means I need to go
and rebuild the ubuntu base and openstack base images again.

Is there a way to essentially skip parents but only if they don't exist in
the registry already? Or make --skip-parents only mean skip base and
openstack-base?

Thanks in advance,
Sam

From radoslaw.piliszek at gmail.com Mon Oct 4 07:18:53 2021
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Mon, 4 Oct 2021 09:18:53 +0200
Subject: [kolla] skipping base if already exist
In-Reply-To: <9217E6F1-22AC-4579-A920-5EFA7DCCAE56@gmail.com>
References: <9217E6F1-22AC-4579-A920-5EFA7DCCAE56@gmail.com>
Message-ID:

On Mon, 4 Oct 2021 at 07:24, Sam Morrison wrote:
>
> Hi,
>
> We've started to use kolla to build container images and I'm trying to
> figure out if I'm doing it wrong or if it's just not how kolla works.
>
> What I'm trying to do is not rebuild the base and openstack-base images
> when we build an image for a project.
>
> Example:
>
> We build a horizon image and it builds and pushes up to our registry the
> following:
>
> kolla/ubuntu-source-base
> kolla/ubuntu-source-openstack-base
> kolla/ubuntu-source-horizon
>
> Now I can rebuild this without having to again build the base images with
>
> --skip-parents
>
> But now I want to build a barbican image and I can't use --skip-parents
> as the barbican image also requires barbican-base, which means I need to
> go and rebuild the ubuntu base and openstack base images again.
>
> Is there a way to essentially skip parents but only if they don't exist
> in the registry already? Or make --skip-parents only mean skip base and
> openstack-base?

You might then be interested in --skip-existing

-yoctozepto

> Thanks in advance,
> Sam

From eblock at nde.ag Mon Oct 4 07:15:09 2021
From: eblock at nde.ag (Eugen Block)
Date: Mon, 04 Oct 2021 07:15:09 +0000
Subject: Access denied for user nova
In-Reply-To:
Message-ID: <20211004071509.Horde.XyH99zDwEsAOUPdKEkUGH48@webmail.nde.ag>

Hi,

your hostname seems to be controller001 but your config settings refer to
controller01 (as far as I checked). That would explain it, wouldn't it?

The "access denied" message:

2021-10-02 12:52:16 141 [Warning] Access denied for user
'nova'@'controller001' (using password: YES)

and your nova endpoint:

openstack endpoint create --region RegionOne compute public http://controller01:8774/v2.1

Quoting hojat openstack-nsx-VOIP SBC:

> Hi
>
> I have a problem which is described here; does anyone have an idea how to
> resolve this issue?
>
> Regards,
> Hojii.
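If the hostname mismatch is indeed the cause, a sketch of the
database-side check and fix could look like the following (the password is
a placeholder and the exact grant list is illustrative; the
GRANT ... IDENTIFIED BY form shown here is MariaDB syntax):

    $ mysql -u root -p -e "SELECT User, Host FROM mysql.user WHERE User='nova';"
    $ mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller001' IDENTIFIED BY 'NOVA_DBPASS';"
    $ mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller001' IDENTIFIED BY 'NOVA_DBPASS';"

Alternatively, keeping one hostname consistent everywhere (in /etc/hosts,
nova.conf and the keystone endpoints) avoids needing extra grants at all.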
From geguileo at redhat.com Mon Oct 4 10:23:31 2021
From: geguileo at redhat.com (Gorka Eguileor)
Date: Mon, 4 Oct 2021 12:23:31 +0200
Subject: [dev][cinder] Consultation about new cinder-backup features
In-Reply-To:
References: <20210906132813.xsaxbsyyvf4ey4vm@localhost>
Message-ID: <20211004102331.e3otr2k2mjzglg42@localhost>

On 30/09, Daniel de Oliveira Pereira wrote:
> On 06/09/2021 10:28, Gorka Eguileor wrote:
> > [Please note: This e-mail is from an EXTERNAL e-mail address]
> >
> > On 27/08, Daniel de Oliveira Pereira wrote:
> >> Hello everyone,
> >>
> >> We have prototyped some new features on Cinder for our clients, and we
> >> think that they are nice features and good candidates to be part of
> >> upstream Cinder, so we would like to get feedback from the OpenStack
> >> community about these features and whether you would be willing to
> >> accept them in upstream OpenStack.
> >
> > Hi Daniel,
> >
> > Thank you very much for your willingness to give back!!!
> >
> >>
> >> Our team implemented the following features for the cinder-backup
> >> service:
> >>
> >> 1. A multi-backend backup driver, that allows OpenStack users to
> >> choose, via API/CLI/Horizon, which backup driver (Ceph or NFS, in our
> >> prototype) will be used during a backup operation to create a new volume
> >> backup.
> >
> > This is a feature that has been discussed before, and e0ne already did
> > some of the prerequisites for it.
> >
> >> 2. An improved NFS backup driver, that allows OpenStack users to back
> >> up their volumes to private NFS servers, providing the NFS hostpath at
> >> runtime via API/CLI/Horizon, while creating the volume backup.
> >
> > What about the username and password?
>
> Hi Gorka,
>
> thanks for your feedback.
>
> Our prototype doesn't support authentication using username/password,
> since this is a feature that NFS doesn't provide built-in support for.
>
> > Can backups be restored from a remote location as well?
>
> Yes, if the location is the one where the backup was originally saved
> (same NFS hostpath), as the backup location is stored in the Cinder
> backups table during the backup creation. It doesn't support restoring
> the backup from an arbitrary remote NFS server.
>
> > This sounds like a very cool feature, but I'm not too comfortable with
> > having it in Cinder.
> >
> > The idea is that Cinder provides an abstraction and doesn't let users
> > know about implementation details.
> >
> > With that feature as it is, a user could request a backup to an off-site
> > location that could result in congestion in one of the outbound
> > connections.
>
> I think this is a very good point, which we weren't taking into
> consideration in our prototype.
>
> > I can only think of this being acceptable for admin users, and in that
> > case I think it would be best to use the multi-backup destination
> > feature instead.
> >
> > After all, how many times do we have to back up to a different location?
> > Maybe I'm missing a use case.
>
> Our clients have privacy and security concerns with the same NFS server
> being shared by OpenStack tenants to store volume backups, so they
> required cinder-backup to be able to back up volumes to private NFS
> servers.
>
> > If the community thinks this is a desired feature, I would encourage
> > adding it with a policy that disables it by default.
> >
> >
> >> Considering that cinder was configured to use the multi-backend backup
> >> driver, this is how it works:
> >>
> >> During a volume backup operation, the user provides a "location"
> >> parameter to indicate which backend will be used, and the backup
> >> hostpath, if applicable (for NFS driver), to create the volume backup.
> >> For instance:
> >>
> >> - Creating a backup using Ceph backend:
> >> $ openstack volume backup create --name --location ceph
> >>
> >> - Creating a backup using the improved NFS backend:
> >> $ openstack volume backup create --name --location nfs://my.nfs.server:/backups
> >>
> >> If the user chooses the Ceph backend, the Ceph driver will be used to
> >> create the backup. If the user chooses the NFS backend, the improved NFS
> >> driver, previously mentioned, will be used to create the backup.
> >>
> >> The backup location, if provided, is stored in the Cinder database, and
> >> can be seen by fetching the backup details:
> >> $ openstack volume backup show
> >>
> >> Briefly, this is how the features were implemented:
> >>
> >> - Cinder API was updated to add an optional location parameter to the
> >> "create backup" method. Horizon, and the OpenStack and Cinder CLIs were
> >> updated accordingly, to handle the new parameter.
> >> - The Cinder backup controller was updated to handle the backup location
> >> parameter, and a validator for the parameter was implemented using the
> >> oslo config library.
> >> - The Cinder backup object model was updated to add a nullable location
> >> property, so that the backup location could be stored in the cinder
> >> database.
> >> - A new backup driver base class, that extends BackupDriver and
> >> accepts a backup context object, was implemented to handle the backup
> >> configuration provided at runtime by the user. This new backup base
> >> class requires that the concrete drivers implement a method to validate
> >> the backup context (similar to BackupDriver.check_for_setup_error).
> >> - The 2 new backup drivers, previously mentioned, were implemented
> >> using this new backup base class.
> >> - In the BackupManager class, the "service" attribute, that on upstream
> >> OpenStack holds the backup driver class name, was re-implemented as a
> >> factory function that accepts a backup context object and returns an
> >> instance of a backup driver, according to the backup driver configured
> >> in the cinder.conf file and the backup context provided at runtime by
> >> the user.
> >> - All the backup operations continue working as usual.
> >
> > When this feature was discussed upstream we liked the idea of
> > implementing this like we do multi-backends for the volume service,
> > adding backup-types.
>
> I found this approved spec [1] (which, I believe, is a product of the
> work done by e0ne that you mentioned before), but I couldn't find any
> work items in progress related to it.
> Do you know the current status of this spec? Is it ready to be
> implemented or is there more work to be done before that? If we
> decide to work on its implementation, would we be required to review, and
> possibly update, the spec for the current development cycle?
>
> [1]
> https://specs.openstack.org/openstack/cinder-specs/specs/victoria/backup-backends-configuration.html
>

Hi,

I think all that would need to be done regarding the spec is to submit a
patch to move it to the current release directory and fix the formatting
issue of the tables from the "Data model impact" section.
You'll be able to leverage Ivan's work [1] when implementing the
multi-backup feature.

Cheers,
Gorka.

[1]: https://review.opendev.org/c/openstack/cinder/+/630305

> >
> > In the latest code, backup creation operations have been modified to go
> > through the scheduler, so that's a piece that is already implemented.
> >
> >> Could you please let us know your thoughts about these features and if
> >> you would be open to adding them to upstream Cinder? If yes, we would be
> >> willing to submit the specs and work on the upstream implementation, if
> >> they are approved.
> >>
> >> Regards,
> >> Daniel Pereira
> >>
> >
> > I believe you will have the full community's support on the first idea
> > (though probably not on the proposed implementation).
> >
> > I'm not so sure on the second one; it will most likely depend on the
> > use cases. Many times the reason why features are dismissed upstream
> > is that there are no clear use cases that justify the addition of the
> > code.
> >
> > Looking forward to continuing this conversation at the PTG, IRC, in a
> > spec, or through here.
> >
> > Cheers,
> > Gorka.

From smooney at redhat.com Mon Oct 4 12:22:59 2021
From: smooney at redhat.com (Sean Mooney)
Date: Mon, 4 Oct 2021 13:22:59 +0100
Subject: Help with eventlet 0.26.1 and dnspython >= 2
In-Reply-To:
References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org>
 <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com>
 <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org>
Message-ID:

On Wed, Sep 29, 2021 at 9:41 PM Michael Johnson wrote:
>
> I would like to for Designate. Assuming the eventlet issues get resolved.
>
> There is at least one bug in 1.16 that has been resolved on the 2.x
> chain and some of the new features set us up for new security related
> features.

I believe that the latest release of eventlet is now compatible with
dnspython 2.x
https://eventlet.net/doc/changelog.html#id1

so yes, I think we should be moving to eventlet 0.32.0+ and dnspython 2.x

> Michael
>
> On Wed, Sep 29, 2021 at 1:19 PM Corey Bryant wrote:
> >
> > On Fri, Sep 10, 2021 at 2:11 PM Corey Bryant wrote:
> >>
> >> On Wed, Sep 30, 2020 at 11:53 AM Sean Mooney wrote:
> >>
> >>> we do not know if there are other failures
> >>> neutron has a separate issue which was tracked by https://github.com/eventlet/eventlet/issues/619
> >>> and nova hit the ssl issue with websockify and eventlet tracked by https://github.com/eventlet/eventlet/issues/632
> >>>
> >>> so the issue is really that eventlet is not compatible with dnspython 2.0,
> >>> so before openstack can uncap dnspython, eventlet needs to gain support for dnspython 2.0;
> >>> that should hopefully resolve the issues that nova, neutron and other projects are now hitting.
> >>>
> >>> it is unlikely that this is something we can resolve in openstack alone, not unless we are willing to monkeypatch
> >>> eventlet and other dependencies, so really we need to work with eventlet and/or dnspython to resolve the incompatibility
> >>> caused by the dnspython changes in 2.0
> >>
> >> It looks like there's been some progress on eventlet supporting dnspython 2.0: https://github.com/eventlet/eventlet/commit/aeb0390094a1c3f29bb4f25a8dab96587a86b3e8
> >
> > Does anyone know if there are plans to (attempt to) move to dnspython 2.0 in yoga?
> >
> > Thanks,
> > Corey
>

From jean-francois.taltavull at elca.ch Mon Oct 4 13:03:08 2021
From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois)
Date: Mon, 4 Oct 2021 13:03:08 +0000
Subject: [OpenStack-Ansible] LXC containers apt upgrade
Message-ID: <2cce6f95893340dcba81c88e278213b8@elca.ch>

Hi All,

Following the recent Let's Encrypt certificate expiration, I was wondering
what the best policy is to apt upgrade the operating system used by the
LXC containers running on controller nodes.

Has anyone ever defined such a policy? Is there an OSA tool to do this?

Regards,
Jean-François

From amy at demarco.com Mon Oct 4 13:06:18 2021
From: amy at demarco.com (Amy Marrich)
Date: Mon, 4 Oct 2021 08:06:18 -0500
Subject: Diversity and Inclusion Meeting Reminder - Today
Message-ID:

The Diversity & Inclusion WG invites members of all OIF projects to attend
our next meeting Monday October 4th, at 17:00 UTC in the
#openinfra-diversity channel on OFTC. The agenda can be found at
https://etherpad.openstack.org/p/diversity-wg-agenda. Please feel free to
add any topics you wish to discuss at the meeting.

Thanks,

Amy (apotz)

From bence.romsics at gmail.com Mon Oct 4 13:45:55 2021
From: bence.romsics at gmail.com (Bence Romsics)
Date: Mon, 4 Oct 2021 15:45:55 +0200
Subject: [neutron] bug deputy report of week 2021-09-27
Message-ID:

Hi Neutrinos,

Here comes last week's report:

Unassigned:
* https://bugs.launchpad.net/neutron/+bug/1945283
  test_overlapping_sec_grp_rules from neutron_tempest_plugin.scenario is
  failing intermittently
* https://bugs.launchpad.net/neutron/+bug/1945306
  [dvr+l3ha] north-south traffic not working when VM and main router are
  not on the same host

Medium:
* https://bugs.launchpad.net/neutron/+bug/1945512
  [HA] HA router first transition to master should not wait
  fix proposed by ralonsoh: https://review.opendev.org/c/openstack/neutron/+/811751
* https://bugs.launchpad.net/neutron/+bug/1945651
  [ovn] Updating binding profile through CLI doesn't work
  fix proposed by dalvarez and slaweq: https://review.opendev.org/c/openstack/neutron/+/811971

Low:
* https://bugs.launchpad.net/neutron/+bug/1945954
  [os-ken] Missing subclass for SUBTYPE_RIB_*_MULTICAST in mrtlib
  fix proposed by ralonsoh: https://review.opendev.org/c/openstack/os-ken/+/812293

Duplicate:
* https://bugs.launchpad.net/neutron/+bug/1945747
  GET security group rule is missing description attribute
  fixed on master, but not yet backported to ussuri where it was reported

Still being triaged:
* https://bugs.launchpad.net/neutron/+bug/1945560
  Neutron-metering doesn't get "bandwidth" metric

Cheers,
Bence (rubasov)

From zigo at debian.org Mon Oct 4 14:20:19 2021
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 4 Oct 2021 16:20:19 +0200
Subject: Help with eventlet 0.26.1 and dnspython >= 2
In-Reply-To:
References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org>
 <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com>
 <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org>
Message-ID: <266d8064-6b04-d951-4318-96412f7351a8@debian.org>

On 10/4/21 2:22 PM, Sean Mooney wrote:
> On Wed, Sep 29, 2021 at 9:41 PM Michael Johnson wrote:
>>
>> I would like to for Designate. Assuming the eventlet issues get resolved.
>>
>> There is at least one bug in 1.16 that has been resolved on the 2.x
>> chain and some of the new features set us up for new security related
>> features.
> I believe that the latest release of eventlet is now compatible with
> dnspython 2.x
> https://eventlet.net/doc/changelog.html#id1
>
> so yes, I think we should be moving to eventlet 0.32.0+ and dnspython 2.x

FYI, in Debian, we have backported patches to the Eventlet version for
Victoria, Wallaby and Xena. I didn't have much time to test that yet
though.

Cheers,

Thomas Goirand (zigo)

From ralonsoh at redhat.com Mon Oct 4 15:51:08 2021
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Mon, 4 Oct 2021 17:51:08 +0200
Subject: Help with eventlet 0.26.1 and dnspython >= 2
In-Reply-To: <266d8064-6b04-d951-4318-96412f7351a8@debian.org>
References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org>
 <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com>
 <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org>
 <266d8064-6b04-d951-4318-96412f7351a8@debian.org>
Message-ID:

Hello:

We are bumping both libraries in
https://review.opendev.org/c/openstack/requirements/+/811555/6/upper-constraints.txt
(still under review).

Regards

On Mon, Oct 4, 2021 at 4:34 PM Thomas Goirand wrote:

> On 10/4/21 2:22 PM, Sean Mooney wrote:
> > On Wed, Sep 29, 2021 at 9:41 PM Michael Johnson wrote:
> >>
> >> I would like to for Designate. Assuming the eventlet issues get resolved.
> >>
> >> There is at least one bug in 1.16 that has been resolved on the 2.x
> >> chain and some of the new features set us up for new security related
> >> features.
> > I believe that the latest release of eventlet is now compatible with
> > dnspython 2.x
> > https://eventlet.net/doc/changelog.html#id1
> >
> > so yes, I think we should be moving to eventlet 0.32.0+ and dnspython 2.x
>
> FYI, in Debian, we have backported patches to the Eventlet version for
> Victoria, Wallaby and Xena. I didn't have much time to test that yet
> though.
>
> Cheers,
>
> Thomas Goirand (zigo)

From hberaud at redhat.com Mon Oct 4 16:09:24 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Mon, 4 Oct 2021 18:09:24 +0200
Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope
Message-ID:

Hi,

At our last meeting of the oslo team, we discussed the problem of broken
stable branches (rocky and older) in oslo's projects [1].

Indeed, almost all these branches are broken. Előd Illés kindly generated
a list of periodic-stable errors on Oslo's stable branches [2].

Given the lack of active maintainers on Oslo and given the current status
of the CI in those branches, I propose to make them End Of Life.

I will wait until the end of the month for anyone who would like to step
up as maintainer of those branches and who would at least try to fix their
CI.

If no one volunteers for that, I'll EOL those branches for all the
projects under the oslo umbrella.

Let us know your thoughts.

Thank you for your attention.

[1] https://meetings.opendev.org/meetings/oslo/2021/oslo.2021-10-04-15.00.log.txt
[2] http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023939.html

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
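For reference, EOL transitions like the one proposed here go through the
openstack/releases repository; a rough sketch of the flow (the deliverable
file name below is only an example) would be:

    $ git clone https://opendev.org/openstack/releases && cd releases
    # add a queens-eol tag entry to e.g. deliverables/queens/oslo.log.yaml,
    # and repeat for the other oslo deliverables and for stable/rocky
    $ git review

Once such patches merge, the release automation creates the final tags and
the stable branches can then be retired.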
From corey.bryant at canonical.com Mon Oct 4 16:46:01 2021
From: corey.bryant at canonical.com (Corey Bryant)
Date: Mon, 4 Oct 2021 12:46:01 -0400
Subject: Help with eventlet 0.26.1 and dnspython >= 2
In-Reply-To:
References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org>
 <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com>
 <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org>
 <266d8064-6b04-d951-4318-96412f7351a8@debian.org>
Message-ID:

Great to see! Thanks for sharing.

Corey

On Mon, Oct 4, 2021 at 11:51 AM Rodolfo Alonso Hernandez <ralonsoh at redhat.com> wrote:

> Hello:
>
> We are bumping both libraries in
> https://review.opendev.org/c/openstack/requirements/+/811555/6/upper-constraints.txt
> (still under review).
>
> Regards
>
> On Mon, Oct 4, 2021 at 4:34 PM Thomas Goirand wrote:
>
>> On 10/4/21 2:22 PM, Sean Mooney wrote:
>> > On Wed, Sep 29, 2021 at 9:41 PM Michael Johnson wrote:
>> >>
>> >> I would like to for Designate. Assuming the eventlet issues get resolved.
>> >>
>> >> There is at least one bug in 1.16 that has been resolved on the 2.x
>> >> chain and some of the new features set us up for new security related
>> >> features.
>> > I believe that the latest release of eventlet is now compatible with
>> > dnspython 2.x
>> > https://eventlet.net/doc/changelog.html#id1
>> >
>> > so yes, I think we should be moving to eventlet 0.32.0+ and dnspython 2.x
>>
>> FYI, in Debian, we have backported patches to the Eventlet version for
>> Victoria, Wallaby and Xena. I didn't have much time to test that yet
>> though.
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)

From neil at tigera.io Mon Oct 4 17:28:02 2021
From: neil at tigera.io (Neil Jerram)
Date: Mon, 4 Oct 2021 18:28:02 +0100
Subject: [stable][requirements][zuul] unpinned setuptools dependency on stable
In-Reply-To:
References: <6J4UZQ.VOBD0LVDTPUX1@est.tech>
 <827e99c6-99b2-54c8-a627-5153e3b84e6b@est.tech>
Message-ID:

Is anyone helping to progress this? I just checked that stable/ussuri
devstack is still broken.

Best wishes,
Neil

On Tue, Sep 28, 2021 at 9:20 AM Neil Jerram wrote:

> But I don't think that solution works for devstack, does it? Is there a
> way to pin setuptools in a stable/ussuri devstack run, except by changing
> the stable branch of the requirements project?
>
> On Mon, Sep 27, 2021 at 7:50 PM Előd Illés wrote:
>
>> Hi again,
>>
>> as I see there is no objection yet about using gibi's solution [1] (as I
>> already summarized the situation in my previous mail [2]) for a fix for
>> similar cases, so with a general stable core hat on, I *suggest*
>> everyone use that solution to pin the setuptools in tox for every
>> failing case (so as to avoid similar future errors as well).
>>
>> [1] https://review.opendev.org/810461
>> [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-September/025059.html
>>
>> Előd
>>
>> On 2021. 09. 27. 14:47, Balazs Gibizer wrote:
>> >
>> > On Fri, Sep 24 2021 at 10:21:33 PM +0200, Thomas Goirand
>> > wrote:
>> >> Hi Gibi!
>> >>
>> >> Thanks for bringing this up.
>> >>
>> >> As a distro package maintainer, here's my view.
>> >>
>> >> On 9/22/21 2:11 PM, Balazs Gibizer wrote:
>> >>> Option 1: Bump the major version of the decorator dependency on
>> >>> stable.
>> >>
>> >> Decorator 4.0.11 is even in Debian Stretch (currently oldoldstable), for
>> >> which I don't even maintain OpenStack anymore (that's OpenStack
>> >> Newton...).
>> >> So I don't see how switching to decorator 4.0.0 is a
>> >> problem, and I don't understand how OpenStack could be using 3.4.0 which
>> >> is in Jessie (i.e.: a 6-year-old Debian release).
>> >>
>> >> PyPi says Decorator 3.4.0 is from 2012:
>> >> https://pypi.org/project/decorator/#history
>> >>
>> >> Do you have your release numbers correct? If so, then switching to
>> >> Decorator 4.4.2 (available in Debian Bullseye (shipped with Victoria)
>> >> and Ubuntu >=Focal) looks reasonable to me... Sticking with 3.4.0
>> >> feels a bit crazy (and I wasn't aware of it).
>> >
>> > Thanks for the info. So from the Debian perspective it is OK to bump the
>> > decorator version on stable. As others noted in this thread it seems
>> > to be more than just decorator that broke. :/
>> >
>> >>> Option 2: Pin the setuptools version during tox installation
>> >>
>> >> Please don't do this for the master branch, we need OpenStack to stay
>> >> current with setuptools (yeah, even if this means breaking changes...).
>> >
>> > I've no intention to pin it on master. Master needs to work with the
>> > latest and greatest. Also on master it is easier to fix / replace the
>> > dependencies that become broken with new setuptools.
>> >
>> >>
>> >> For already released OpenStack: I don't mind much if this is done (I
>> >> could backport fixes if something breaks).
>> >
>> > ack
>> >
>> >>> Option 3: turn off lower-constraints testing
>> >>
>> >> I already expressed myself about this: this is dangerous as distros rely
>> >> on it for setting lower bounds as low as possible (which is always
>> >> preferred from a distro point of view).
>> >>
>> >>> Option 4: utilize pyproject.toml[6] to specify build-time requirements
>> >>
>> >> I don't know about pyproject.toml.
>> >>
>> >> Just my 2 cents, hoping it's useful,
>> >
>> > Thanks!
>> >
>> > Cheers,
>> > gibi
>> >
>> >> Cheers,
>> >>
>> >> Thomas Goirand (zigo)

From neil at tigera.io Mon Oct 4 18:16:52 2021
From: neil at tigera.io (Neil Jerram)
Date: Mon, 4 Oct 2021 19:16:52 +0100
Subject: [stable][requirements][zuul] unpinned setuptools dependency on stable
In-Reply-To:
References: <6J4UZQ.VOBD0LVDTPUX1@est.tech>
 <827e99c6-99b2-54c8-a627-5153e3b84e6b@est.tech>
Message-ID:

I can now confirm that
https://review.opendev.org/c/openstack/requirements/+/810859 fixes my CI
use case. (By temporarily using a fork of the requirements repo that
includes that change.)

(Fix detail if needed here:
https://github.com/projectcalico/networking-calico/pull/64/commits/cbed6282405957f7d60b6e0790c91fb852afe84c)

Best wishes.
Neil

On Mon, Oct 4, 2021 at 6:28 PM Neil Jerram wrote:

> Is anyone helping to progress this? I just checked that stable/ussuri
> devstack is still broken.
>
> Best wishes,
> Neil
>
> On Tue, Sep 28, 2021 at 9:20 AM Neil Jerram wrote:
>
>> But I don't think that solution works for devstack, does it? Is there a
>> way to pin setuptools in a stable/ussuri devstack run, except by changing
>> the stable branch of the requirements project?
>>
>> On Mon, Sep 27, 2021 at 7:50 PM Előd Illés wrote:
>>
>>> Hi again,
>>>
>>> as I see there is no objection yet about using gibi's solution [1] (as I
>>> already summarized the situation in my previous mail [2]) for a fix for
>>> similar cases, so with a general stable core hat on, I *suggest*
>>> everyone use that solution to pin the setuptools in tox for every
>>> failing case (so as to avoid similar future errors as well).
>>>
>>> [1] https://review.opendev.org/810461
>>> [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-September/025059.html
>>>
>>> Előd
>>>
>>> On 2021. 09. 27. 14:47, Balazs Gibizer wrote:
>>> >
>>> > On Fri, Sep 24 2021 at 10:21:33 PM +0200, Thomas Goirand
>>> > wrote:
>>> >> Hi Gibi!
>>> >>
>>> >> Thanks for bringing this up.
>>> >>
>>> >> As a distro package maintainer, here's my view.
>>> >>
>>> >> On 9/22/21 2:11 PM, Balazs Gibizer wrote:
>>> >>> Option 1: Bump the major version of the decorator dependency on
>>> >>> stable.
>>> >>
>>> >> Decorator 4.0.11 is even in Debian Stretch (currently oldoldstable), for
>>> >> which I don't even maintain OpenStack anymore (that's OpenStack
>>> >> Newton...). So I don't see how switching to decorator 4.0.0 is a
>>> >> problem, and I don't understand how OpenStack could be using 3.4.0 which
>>> >> is in Jessie (i.e.: a 6-year-old Debian release).
>>> >>
>>> >> PyPi says Decorator 3.4.0 is from 2012:
>>> >> https://pypi.org/project/decorator/#history
>>> >>
>>> >> Do you have your release numbers correct? If so, then switching to
>>> >> Decorator 4.4.2 (available in Debian Bullseye (shipped with Victoria)
>>> >> and Ubuntu >=Focal) looks reasonable to me... Sticking with 3.4.0
>>> >> feels a bit crazy (and I wasn't aware of it).
>>> >
>>> > Thanks for the info. So from the Debian perspective it is OK to bump the
>>> > decorator version on stable. As others noted in this thread it seems
>>> > to be more than just decorator that broke. :/
>>> >
>>> >>> Option 2: Pin the setuptools version during tox installation
>>> >>
>>> >> Please don't do this for the master branch, we need OpenStack to stay
>>> >> current with setuptools (yeah, even if this means breaking changes...).
>>> >
>>> > I've no intention to pin it on master. Master needs to work with the
>>> > latest and greatest. Also on master it is easier to fix / replace the
>>> > dependencies that become broken with new setuptools.
>>> >
>>> >> For already released OpenStack: I don't mind much if this is done (I
>>> >> could backport fixes if something breaks).
>>> >
>>> > ack
>>> >
>>> >>> Option 3: turn off lower-constraints testing
>>> >>
>>> >> I already expressed myself about this: this is dangerous as distros rely
>>> >> on it for setting lower bounds as low as possible (which is always
>>> >> preferred from a distro point of view).
>>> >>
>>> >>> Option 4: utilize pyproject.toml[6] to specify build-time requirements
>>> >>
>>> >> I don't know about pyproject.toml.
>>> >>
>>> >> Just my 2 cents, hoping it's useful,
>>> >
>>> > Thanks!
>>> >
>>> > Cheers,
>>> > gibi
>>> >
>>> >> Cheers,
>>> >>
>>> >> Thomas Goirand (zigo)
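The tox-level pin suggested in the quoted mail can also be reproduced ad
hoc; a minimal sketch (the version number is illustrative, and the exact
mechanism used in gibi's patch may differ) is to tell virtualenv which
setuptools to seed into the tox environments:

    $ export VIRTUALENV_SETUPTOOLS=57.5.0
    $ tox -r -e py38

For devstack runs, as noted above, the equivalent pin has to live in the
requirements project instead, which is what
https://review.opendev.org/c/openstack/requirements/+/810859 addresses.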
From feilong at catalyst.net.nz Mon Oct 4 18:46:29 2021
From: feilong at catalyst.net.nz (feilong)
Date: Tue, 5 Oct 2021 07:46:29 +1300
Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope
In-Reply-To:
References:
Message-ID:

Hi Herve,

Please correct me: does that mean we would technically have to also EOL
stable/queens and stable/rocky for most of the other projects, or should
it be OK? Thanks.

On 5/10/21 5:09 am, Herve Beraud wrote:
> Hi,
>
> At our last meeting of the oslo team, we discussed the problem of broken
> stable branches (rocky and older) in oslo's projects [1].
>
> Indeed, almost all these branches are broken. Előd Illés kindly
> generated a list of periodic-stable errors on Oslo's stable branches [2].
>
> Given the lack of active maintainers on Oslo and given the current
> status of the CI in those branches, I propose to make them End Of Life.
>
> I will wait until the end of the month for anyone who would like to step
> up as maintainer of those branches and who would at least try to fix
> their CI.
>
> If no one volunteers for that, I'll EOL those branches for all the
> projects under the oslo umbrella.
>
> Let us know your thoughts.
>
> Thank you for your attention.
>
> [1] https://meetings.opendev.org/meetings/oslo/2021/oslo.2021-10-04-15.00.log.txt
> [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023939.html

--
Cheers & Best regards,
------------------------------------------------------------------------------
Feilong Wang (???) (he/him)
Head of Research & Development
Catalyst Cloud
Aotearoa's own
Mob: +64 21 0832 6348 | www.catalystcloud.nz
Level 6, 150 Willis Street, Wellington 6011, New Zealand

CONFIDENTIALITY NOTICE: This email is intended for the named recipients
only. It may contain privileged, confidential or copyright information. If
you are not the named recipient, any use, reliance upon, disclosure or
copying of this email or its attachments is unauthorised. If you have
received this email in error, please reply via email or call +64 21 0832
6348.
------------------------------------------------------------------------------

From skaplons at redhat.com Mon Oct 4 19:00:18 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Mon, 04 Oct 2021 21:00:18 +0200
Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope
In-Reply-To:
References:
Message-ID: <3055264.zr5fvq113q@p1>

Hi,

On Monday, 4 October 2021 at 20:46:29 CEST, feilong wrote:
> Hi Herve,
>
> Please correct me: does that mean we would technically have to also EOL
> stable/queens and stable/rocky for most of the other projects, or should
> it be OK? Thanks.

I don't think we have to. I think it's not that common that we are using
new versions of oslo libs in those stable branches, so IMHO if all works
fine for some project and it has maintainers, it can still be in the EM
phase. Or is my understanding wrong here?

> On 5/10/21 5:09 am, Herve Beraud wrote:
> > Hi,
> >
> > At our last meeting of the oslo team, we discussed the problem of broken
> > stable branches (rocky and older) in oslo's projects [1].
> >
> > Indeed, almost all these branches are broken. Előd Illés kindly
> > generated a list of periodic-stable errors on Oslo's stable branches [2].
> >
> > Given the lack of active maintainers on Oslo and given the current
> > status of the CI in those branches, I propose to make them End Of Life.
> > > > Given the lack of active maintainers on Oslo and given the current > > status of the CI in those branches, I propose to make them End Of Life. > > > > I will wait until the end of month for anyone who would like to maybe > > step up > > as maintainer of those branches and who would at least try to fix CI > > of them. > > > > If no one will volunteer for that, I'll EOLing those branches for all > > the projects under the oslo umbrella. > > > > Let us know your thoughts. > > > > Thank you for your attention. > > > > [1] > > https://meetings.opendev.org/meetings/oslo/2021/oslo. 2021-10-04-15.00.log.tx > > t > > [2] > > http://lists.openstack.org/pipermail/openstack-discuss/2021-July/ 023939.html -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From openstack at nemebean.com Mon Oct 4 20:59:23 2021 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 4 Oct 2021 15:59:23 -0500 Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope In-Reply-To: <3055264.zr5fvq113q@p1> References: <3055264.zr5fvq113q@p1> Message-ID: <25b21881-bd0b-f763-9bb5-a66340108455@nemebean.com> On 10/4/21 2:00 PM, Slawek Kaplonski wrote: > Hi, > > On poniedzia?ek, 4 pa?dziernika 2021 20:46:29 CEST feilong wrote: >> Hi Herve, >> >> Please correct me, does that mean we have to also EOL stable/queens and >> stable/rocky for most of the other projects technically? Or it should be >> OK? Thanks. > > I don't think we have to. I think it's not that common that we are using new > versions of oslo libs in those stable branches so IMHO if all works fine for > some project and it has maintainers, it still can be in EM phase. > Or is my understanding wrong here? The Oslo libs released for those versions will continue to work, so you're right that it wouldn't be necessary to EOL all of the consumers of Oslo. The danger would be if a critical bug were found in one of those old releases and a fix needed to be released. However, at this point the likelihood of finding such a serious bug seems pretty low, and in some cases it may be possible to use a newer Oslo release with an older service. > >> >> On 5/10/21 5:09 am, Herve Beraud wrote: >>> Hi, >>> >>> On our last meeting of the oslo team we discussed the problem with >>> broken stable >>> branches (rocky and older) in oslo's projects [1]. >>> >>> Indeed, almost all these branches are broken. El?d Ill?s kindly >>> generated a list of periodic-stable errors on Oslo's stable branches [2]. >>> >>> Given the lack of active maintainers on Oslo and given the current >>> status of the CI in those branches, I propose to make them End Of Life. >>> >>> I will wait until the end of month for anyone who would like to maybe >>> step up >>> as maintainer of those branches and who would at least try to fix CI >>> of them. >>> >>> If no one will volunteer for that, I'll EOLing those branches for all >>> the projects under the oslo umbrella. >>> >>> Let us know your thoughts. >>> >>> Thank you for your attention. >>> >>> [1] >>> https://meetings.opendev.org/meetings/oslo/2021/oslo. 
> 2021-10-04-15.00.log.tx >>> t >>> [2] >>> http://lists.openstack.org/pipermail/openstack-discuss/2021-July/ > 023939.html > > From rafaelweingartner at gmail.com Mon Oct 4 21:27:52 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 4 Oct 2021 18:27:52 -0300 Subject: [CloudKitty] Virtual PTG October 2021 Message-ID: Hello everyone, As you probably heard our next PTG will be held virtually in October. I've marked October 18, at 14:00-17:00 UTC [1]. We already have a CloudKitty meeting organized for this day. Furthermore, I opened an Etherpad [2] to organize the topics of the meeting. Suggestions are welcome! [1] https://ethercalc.openstack.org/8tum5yl1bx43 [2] https://etherpad.opendev.org/p/cloudkitty-ptg-yoga -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Mon Oct 4 22:13:52 2021 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 5 Oct 2021 09:13:52 +1100 Subject: [kolla] skipping base if already exist In-Reply-To: References: <9217E6F1-22AC-4579-A920-5EFA7DCCAE56@gmail.com> Message-ID: > On 4 Oct 2021, at 6:18 pm, Rados?aw Piliszek wrote: > > On Mon, 4 Oct 2021 at 07:24, Sam Morrison > wrote: >> >> Hi, >> >> We?ve started to use kolla to build container images and trying to figure out if I?m doing it wrong it it?s just not how kolla works. >> >> What I?m trying to do it not rebuild the base and openstack-base images when we build an image for a project. >> >> Example. >> >> We build a horizon image and it builds and pushes up to our registry the following >> >> kolla/ubuntu-source-base >> kolla/ubuntu-source-openstack-base >> kolla/ubuntu-source-horizon >> >> >> Now I can rebuild this without having to again build the base images with >> >> ?skip parents >> >> >> But now I want to build a barbican image and I can?t use skip-parents as the barbican image also requires barbican-base. Which means I need to go and rebuild the ubuntu base and Openstack base images again. >> >> Is there a way to essentially skip parents but only if they don?t exist in the registry already? Or make skip-parents only mean skip base and Openstack-base? > > You might then be interested in --skip-existing Ha, how did I miss that! Thanks, using that and a combination of pre pulling the base images from the registry before building has got what I wanted. Thanks, Sam > > -yoctozepto > >> >> Thanks in advance, >> Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Oct 4 22:43:37 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 Oct 2021 17:43:37 -0500 Subject: [all][tc] Technical Committee next weekly meeting on Oct 7th at 1500 UTC Message-ID: <17c4d7a0f6d.1124813d4556887.8685917442078267933@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for Oct 7th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, Oct 6th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From stendulker at gmail.com Tue Oct 5 07:22:44 2021 From: stendulker at gmail.com (Shivanand Tendulker) Date: Tue, 5 Oct 2021 12:52:44 +0530 Subject: [ironic][molteniron][qa] Anyone still using MoltenIron? In-Reply-To: References: Message-ID: Hello Julia MoltenIron is used in HPE Ironic 3rd Party CI to reserve the nodes. 
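A sketch of the flow Sam describes — seed the shared parents once, then reuse them for per-project builds. Registry name, base distro, and tag are illustrative:

    # Build the shared parents once and push them to a local registry
    kolla-build --base ubuntu --type source --registry registry.example.com --push base openstack-base

    # Later, when iterating on a single project: pre-pull the parents,
    # then build only what is not already present locally
    docker pull registry.example.com/kolla/ubuntu-source-base:wallaby
    docker pull registry.example.com/kolla/ubuntu-source-openstack-base:wallaby
    kolla-build --base ubuntu --type source --skip-existing barbican
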
From gmann at ghanshyammann.com Mon Oct 4 22:43:37 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 04 Oct 2021 17:43:37 -0500
Subject: [all][tc] Technical Committee next weekly meeting on Oct 7th at 1500 UTC
Message-ID: <17c4d7a0f6d.1124813d4556887.8685917442078267933@ghanshyammann.com>

Hello Everyone,

The Technical Committee's next weekly meeting is scheduled for Oct 7th at 1500 UTC.

If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, Oct 6th, at 2100 UTC.

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

-gmann

From stendulker at gmail.com Tue Oct 5 07:22:44 2021
From: stendulker at gmail.com (Shivanand Tendulker)
Date: Tue, 5 Oct 2021 12:52:44 +0530
Subject: [ironic][molteniron][qa] Anyone still using MoltenIron?
Message-ID:

Hello Julia

MoltenIron is used in the HPE Ironic 3rd Party CI to reserve the nodes.

Thanks and Regards
Shiv

On Fri, Oct 1, 2021 at 12:01 AM Julia Kreger wrote:
> I could have sworn that zuul had support for basic selection/checkout
> of a resource instead of calling out to something else.
>
> Oh well! Good to know. Thanks Eric!
>
> On Thu, Sep 30, 2021 at 11:23 AM Barrera, Eric wrote:
>>
>> Hi Julia,
>>
>> Yea, the Zuul-based Third Party CI I'm building uses MoltenIron to
>> manage bare metal. I believe other Ironic 3rd Party CI projects are
>> also using it.
>>
>> Though, I don't see it as an absolute necessity.
>>
>> Regards,
>> Eric
>>
>> -----Original Message-----
>> From: Julia Kreger
>> Sent: Thursday, September 30, 2021 11:25 AM
>> To: openstack-discuss
>> Subject: [ironic][molteniron][qa] Anyone still using MoltenIron?
>>
>> Out of curiosity, is anyone still using MoltenIron?
>>
>> A little historical context: it was originally tooling that came out
>> of IBM to reserve physical nodes in a CI cluster in order to perform
>> testing. It was never intended to be released, as it was tooling
>> *purely* for CI job usage.
>>
>> The reason I'm asking is that the ironic team is considering going
>> ahead and retiring the repository.
>>
>> Thanks!
>>
>> -Julia

From feilong at catalyst.net.nz Tue Oct 5 08:14:16 2021
From: feilong at catalyst.net.nz (feilong)
Date: Tue, 5 Oct 2021 21:14:16 +1300
Subject: [Magnum] Virtual PTG October 2021
Message-ID: <085b4407-de12-957d-9bee-d5ba686b0194@catalyst.net.nz>

Hello team,

Our Yoga PTG will be held virtually in October. We have booked Oct 18, 22:00-00:00 UTC and Oct 20, 9:00-11:00 UTC [1] for our Yoga PTG. I opened an etherpad [2] to organize the topics of the meetings. Please feel free to add your topics! Thank you.

[1] https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf
[2] https://etherpad.opendev.org/p/magnum-ptg-yoga

--
Cheers & Best regards,
Feilong Wang (he/him)
Head of Research & Development
Catalyst Cloud

From mdemaced at redhat.com Tue Oct 5 10:05:56 2021
From: mdemaced at redhat.com (Maysa De Macedo Souza)
Date: Tue, 5 Oct 2021 12:05:56 +0200
Subject: [kuryr] Virtual PTG October 2021
Message-ID:

Hello,

With the PTG approaching, I would like to remind you that the Kuryr sessions will be held on Oct 19, 7-8 UTC and Oct 22, 13-14 UTC, and that if you're interested in discussing any topic with the Kuryr team, you should include it in the etherpad [1].

[1] https://etherpad.opendev.org/p/kuryr-yoga-ptg

See you at the PTG.

Thanks,
Maysa Macedo.

On Thu, Jul 22, 2021 at 11:36 AM Maysa De Macedo Souza wrote:
> Hello,
>
> I booked the following slots for Kuryr during the Yoga PTG: Oct 19, 7-8
> UTC and Oct 22, 13-14 UTC.
> If you have any topic ideas you would like to discuss, please include
> them in the etherpad [1]; it would also be interesting to include your
> name there if you plan to attend any Kuryr session.
>
> See you at the next PTG.
>
> Cheers,
> Maysa.
>
> [1] https://etherpad.opendev.org/p/kuryr-yoga-ptg

From sbauza at redhat.com Tue Oct 5 13:51:28 2021
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 5 Oct 2021 15:51:28 +0200
Subject: [nova][placement] Asia friendly meeting slot on 7th of Oct
Message-ID:

Hi,

This is a reminder that we will hold our monthly Asia-friendly Nova meeting timeslot next Thursday at 8:00 UTC [1]. Feel free to join us in the #openstack-nova IRC channel on the OFTC server [2] so we can discuss topics such as how to help, synchronously or asynchronously, contributors that are not in the European and American timezones.

If you have problems joining us on IRC, please let me know by replying to this email.

Thanks,
-Sylvain

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=2021-10-07T08:00:00
[2] https://docs.openstack.org/contributors/common/irc.html

From sshnaidm at redhat.com Tue Oct 5 14:45:19 2021
From: sshnaidm at redhat.com (Sagi Shnaidman)
Date: Tue, 5 Oct 2021 17:45:19 +0300
Subject: [tripleo][ansible] Openstack Ansible collections (modules) Yoga PTG
Message-ID:

Hi, all

The OpenStack Ansible collections (modules) project has its Yoga PTG session on Wed 20 Oct, 13:00-14:00 UTC, in the Cactus room. Please add topics for discussion to the etherpad: https://etherpad.opendev.org/p/osac-yoga-ptg

Thanks, and see you at the PTG!

--
Best regards
Sagi Shnaidman

From arne.wiebalck at cern.ch Tue Oct 5 14:50:55 2021
From: arne.wiebalck at cern.ch (Arne Wiebalck)
Date: Tue, 5 Oct 2021 16:50:55 +0200
Subject: [baremetal-sig][ironic] Tue Oct 12, 2021, 2pm & 6pm UTC: Ironic User & Operator Feedback (Session 2)
Message-ID: <74936887-375d-c754-4d1f-70640fb4dd9c@cern.ch>

Dear all,

Due to popular demand, and since we had to cut things short last month, the Bare Metal SIG has scheduled two meetings next week to continue the user/operator/admin feedback:

- Tue Oct 12, 2021, at 2pm UTC (EMEA friendly), and
- Tue Oct 12, 2021, at 6pm UTC (AMER friendly)

So come along, meet other Ironicers, and discuss your Ironic successes, pain points, issues, experiences and ideas with the community — and in particular with the upstream developers!

Everyone, in particular not-yet Ironicers, is welcome to join! All details can be found on:

- https://etherpad.opendev.org/p/bare-metal-sig

Hope to see you there!

Julia & Arne
(for the Bare Metal SIG)

From paspao at gmail.com Tue Oct 5 14:52:16 2021
From: paspao at gmail.com (P. P.)
Date: Tue, 5 Oct 2021 16:52:16 +0200
Subject: [install] Install on OVH dedicated servers
Message-ID:

Hello all,

I know that OVH uses OpenStack to offer their public cloud services.

I would like to know if someone was able to use their dedicated servers to build a private cloud based on OpenStack.

Do you think OVH dedicated server hardware + vRack can provide sufficient requirements for a production environment?

Thank you.
P.

From anyrude10 at gmail.com Tue Oct 5 13:24:35 2021
From: anyrude10 at gmail.com (Anirudh Gupta)
Date: Tue, 5 Oct 2021 18:54:35 +0530
Subject: [TripleO] Timeout while introspecting Overcloud Node
Message-ID:

Hi Team,

We were trying to provision Overcloud nodes using the TripleO Wallaby release. For this, on the Undercloud machine (CentOS 8.4), we downloaded the ironic-python-agent and overcloud images from the following link:

https://images.rdoproject.org/centos8/wallaby/rdo_trunk/current-tripleo/

After untarring, we executed the command

    openstack overcloud image upload

This command placed the images at the path /var/lib/ironic/images successfully.

Then we uploaded our instackenv.json file and executed the command

    openstack overcloud node introspect --all-manageable

On the overcloud node, we get a timeout error while fetching the agent.kernel and agent.ramdisk images:

    http://10.0.1.10/8088/agent.kernel......Connection timed out (http://ipxe.org/4c0a6092)
    http://10.0.1.10/8088/agent.kernel......Connection timed out (http://ipxe.org/4c0a6092)

However, from another test machine, when I tried

    wget http://10.0.1.10/8088/agent.kernel

it successfully worked. A screenshot was attached for reference.

Can someone please help in resolving this issue.

Regards
Anirudh Gupta

From urimeba511 at gmail.com Tue Oct 5 15:12:39 2021
From: urimeba511 at gmail.com (Uriel Medina)
Date: Tue, 5 Oct 2021 10:12:39 -0500
Subject: [kolla] Neutron-metering is not creating the bandwidth metric
Message-ID:

Hello everyone.

I'm having issues with the neutron-metering component inside Kolla-Ansible, and I was hoping that you could help me :)

The problem is that Neutron Metering doesn't create/get the bandwidth metric. I created a report for this in the Neutron Launchpad, thinking that maybe the metering component had problems:

https://bugs.launchpad.net/neutron/+bug/1945560

With the help of Bence, we discovered that the messages for the creation of metering labels and metering rules were OK inside RabbitMQ. After that, I deployed a new DevStack environment and, with the right configuration, Neutron Metering works as it should — but only inside the DevStack environment. That made me think that maybe Kolla-Ansible has a flag to prevent the modification of iptables, and I found the flag "docker_disable_default_iptables_rules", which I set to "no", as I'm using the Wallaby version. Setting this flag didn't do the trick, so I'm wondering whether there is another flag or component of Kolla-Ansible that prevents the modification of iptables, apart from "docker_disable_default_iptables_rules".

Thanks in advance. Greetings!
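Uriel's iptables suspicion can be checked directly on the network node. A hedged sketch — the chain names assume the neutron-meter-* convention used by the iptables metering driver, and the router UUID is a placeholder:

    # Confirm the kolla-ansible override actually made it into globals.yml
    grep docker_disable_default_iptables_rules /etc/kolla/globals.yml

    # Look for the metering label counters inside the router's namespace;
    # if no neutron-meter-* chains exist, the agent never programmed iptables
    ip netns exec qrouter-<router-uuid> iptables -L -n -v -x | grep neutron-meter
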
From feilong at catalyst.net.nz Tue Oct 5 19:16:48 2021
From: feilong at catalyst.net.nz (feilong)
Date: Wed, 6 Oct 2021 08:16:48 +1300
Subject: [Magnum] Virtual PTG October 2021
References: <085b4407-de12-957d-9bee-d5ba686b0194@catalyst.net.nz>
Message-ID:

Updated links to the correct locations. Sorry for the confusion.

[1] https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf
[2] https://etherpad.opendev.org/p/magnum-ptg-yoga

On 5/10/21 9:14 pm, feilong wrote:
> Hello team,
>
> Our Yoga PTG will be held virtually in October. We have booked Oct 18,
> 22:00-00:00 UTC and Oct 20, 9:00-11:00 UTC [1] for our Yoga PTG. I
> opened an etherpad [2] to organize the topics of the meetings. Please
> feel free to add your topics! Thank you.

--
Cheers & Best regards,
Feilong Wang (he/him)
Head of Research & Development
Catalyst Cloud

From ashlee at openstack.org Tue Oct 5 20:35:44 2021
From: ashlee at openstack.org (Ashlee Ferguson)
Date: Tue, 5 Oct 2021 15:35:44 -0500
Subject: [all][ptg] October 2021 Registration & Schedule
Message-ID:

Hi everyone!

The October 2021 Project Teams Gathering is right around the corner, and the official schedule is live! You can download it here [0], or find it on the PTG website [1].

The PTGbot should be up to date by the end of the week [2] to reflect what is in the ethercalc, which is now locked! The PTGbot is the during-event website for keeping track of what's being discussed and any last-minute schedule changes. It is driven from the discussion in the #openinfra-events IRC channel, where the PTGbot listens. Friendly reminder that the IRC network has changed from Freenode to OFTC.

Also, please don't forget to register [3], because that's how you'll receive event details, passwords, and other relevant information about the PTG.

Please let us know if you have any questions!

Thanks!

Ashlee & Kendall (diablo_rojo)

[0] Schedule: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf
[1] PTG Website: www.openstack.org/ptg
[2] PTGbot: https://ptgbot.opendev.org/
[3] PTG Registration: https://openinfra-ptg.eventbrite.com

From arnaud.morin at gmail.com Tue Oct 5 21:06:56 2021
From: arnaud.morin at gmail.com (Arnaud)
Date: Tue, 05 Oct 2021 23:06:56 +0200
Subject: [install] Install on OVH dedicated servers
Message-ID: <6597E673-0547-4D57-9CCD-4BAF632F7246@gmail.com>

Hello,

That's a hard question. The answer mostly depends on what hardware you will use, how many instances and computes you plan to have, etc.

But there is no reason that prevents you from successfully running an OpenStack infrastructure on OVH servers.

As you said, the OVH public cloud offer is based on OpenStack, and it works. And even if the hardware used for this offer is different from what you will find in the public catalog, there is no major difference in how they manage the servers (a server is a server ;)).

Regards,
Arnaud (from OVH / public cloud team)

On 5 October 2021 at 16:52, "P. P." wrote:
> [...]
> Do you think OVH dedicated server hardware + vRack can provide
> sufficient requirements for a production environment?

From akanevsk at redhat.com Tue Oct 5 21:15:22 2021
From: akanevsk at redhat.com (Arkady Kanevsky)
Date: Tue, 5 Oct 2021 16:15:22 -0500
Subject: [Swift][Interop] PTG time
References: <20210902092144.1a44225f@suzdal.zaitcev.lan>
Message-ID:

Swift team,
Can we confirm the date and time for a joint meeting between Swift and the Interop WG? Will Monday 16:00 or 16:30 UTC work for you?
Thanks,
Arkady

On Thu, Sep 2, 2021 at 10:13 AM Arkady Kanevsky wrote:
> Thanks Pete.
>
> On Thu, Sep 2, 2021 at 9:21 AM Pete Zaitcev wrote:
>> On Fri, 6 Aug 2021 11:32:22 -0500 Arkady Kanevsky wrote:
>>
>>> The Interop team would like time at the Yoga PTG, Monday or Tuesday
>>> between 21-22 UTC, to discuss Interop guideline coverage for Swift.
>>
>> I suspect this fell through the cracks; it's not on the Swift PTG
>> etherpad. I'll poke our PTL. The slots you're proposing aren't in
>> conflict with the existing PTG schedule, so this should work.
>>
>> -- Pete

--
Arkady Kanevsky, Ph.D.
Phone: 972 707-6456
Corporate Phone: 919 729-5744 ext. 8176456

From akekane at redhat.com Wed Oct 6 06:37:12 2021
From: akekane at redhat.com (Abhishek Kekane)
Date: Wed, 6 Oct 2021 12:07:12 +0530
Subject: [glance] Yoga PTG schedule
Message-ID:

Hello All,

Greetings!!!

The Yoga PTG is around the corner, and if you haven't already registered, please do so as soon as possible [1].

I have created an etherpad [2] and added day-wise topics along with the timings at which we are going to discuss them. Kindly let me know if you have any concerns with the allotted time slots.

We also have some slots open on Tuesday, Wednesday and Thursday for unplanned discussions, so please feel free to add your topics if you haven't added them yet.

As a reminder, these are the time slots for our discussion:

Tuesday 19 October 2021, 1400 UTC to 1700 UTC
Wednesday 20 October 2021, 1400 UTC to 1700 UTC
Thursday 21 October 2021, 1400 UTC to 1700 UTC
Friday 22 October 2021, 1400 UTC to 1700 UTC

NOTE: At the moment we don't have any sessions scheduled on Friday. If there are any last-moment request(s)/topic(s), we will discuss them on Friday; otherwise we will conclude our PTG on Thursday 21st October.

We will be using Bluejeans for our discussion; kindly try to use it once before the actual discussion. The meeting URL is mentioned in the etherpad [2] and will be the same throughout the PTG.

[1] https://www.eventbrite.com/e/project-teams-gathering-october-2021-tickets-161235669227
[2] https://etherpad.opendev.org/p/yoga-glance-ptg

Thank you,

Abhishek

From zhangbailin at inspur.com Wed Oct 6 08:25:09 2021
From: zhangbailin at inspur.com (Brin Zhang)
Date: Wed, 6 Oct 2021 08:25:09 +0000
Subject: [cyborg] No meeting on 06th October
Message-ID: <83fff7ba09b64b3ea9c9d3f016cb7fee@inspur.com>

Hi All,

All Cyborg core contributors are on holiday, so we will cancel this week's meeting (October 6th). As per the schedule, we will meet directly on October 13th.

Thanks.

brinzhang
Inspur Electronic Information Industry Co., Ltd.

From radoslaw.piliszek at gmail.com Wed Oct 6 09:55:49 2021
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Wed, 6 Oct 2021 11:55:49 +0200
Subject: [masakari] Proposal to cancel weekly meetings
Message-ID:

I received no negative feedback, so I went ahead and proposed a change to cancel this meeting officially. [1]

[1] https://review.opendev.org/c/opendev/irc-meetings/+/812650

-yoctozepto

On Wed, 29 Sept 2021 at 18:06, Radosław Piliszek wrote:
>
> Dears,
>
> Due to low attendance and the current schedule being uncomfortable for
> me, I propose to cancel the weekly meetings and suggest we coordinate
> via this mailing list and do ad-hoc chats on IRC, as I'm simply lurking
> there most of the time and answering the messages.
>
> Kind regards,
>
> -yoctozepto

From paspao at gmail.com Wed Oct 6 10:27:04 2021
From: paspao at gmail.com (P. P.)
Date: Wed, 6 Oct 2021 12:27:04 +0200
Subject: [install] Install on OVH dedicated servers
In-Reply-To: <6597E673-0547-4D57-9CCD-4BAF632F7246@gmail.com>
Message-ID: <08932F82-3726-49BA-BC52-B0478C90CE03@gmail.com>

Hello Arnaud,

Thanks for your reply.

Yes, OVH has pretty large dedicated servers too.

My main concern is which vRack networking speed to choose: they offer 1 Gbps on the mid-range Advance servers up to 25 Gbps on the high-end Scale servers.

And for sure, storage nodes will need higher bandwidth than control nodes.

Any suggestion on the minimal bandwidth requirement per type of node?

Thank you.
P.

> On 5 October 2021 at 23:06, Arnaud wrote:
> [...]

From gael.therond at bitswalk.com Wed Oct 6 12:52:53 2021
From: gael.therond at bitswalk.com (Gaël THEROND)
Date: Wed, 6 Oct 2021 14:52:53 +0200
Subject: [KEYSTONE][POLICIES] - Overrides that don't work?
Message-ID:

Hi team,

I'm seeing weird behavior on my OpenStack platform that makes me think I may have misunderstood some mechanisms in the way policies work, especially policy overriding.

Long story short, I have a few services with custom policies, such as glance, that behave as expected, but Keystone's don't.

All in all, here is my understanding of the mechanism:

This is the keystone policy that I'm looking to override:
https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/

This policy default can be found here:
https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197

Here is the policy that I'm testing:
https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/

I know this policy doesn't take care of the admin role, but that's not the point.

From my understanding, any user with the project-manager role should be able to add any available user to any available group, as long as the project-manager's domain is the same as the target's. However, when I do that, keystone complains that I'm not authorized to do so, because the user token scope is 'PROJECT' where it should be 'SYSTEM' or 'DOMAIN'.

Now, I wouldn't be surprised by that message being thrown with the default policy, as it's stated in the code here:
https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197

So the question is: if the custom policy doesn't override the default scope_types, how am I supposed to make it work?

I hope this was clear enough, but if not, feel free to ask me for more information.

PS: I've tried to assign this role with a domain scope to my user, and I still have the same issue.

Thanks a lot everyone!
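A note on the scope question above: oslo.policy scope_types are declared in keystone's code, so a policy file can replace the check string but never widen the scope_types list. What a deployer can relax is scope *enforcement* itself. A minimal sketch, assuming keystone reads the standard [oslo_policy] options and that relaxed enforcement is acceptable in your deployment:

    # Set enforce_scope = false in the [oslo_policy] section of keystone.conf,
    # so a project-scoped token is evaluated against the custom rule instead
    # of being rejected outright (crudini is just one way to edit the file):
    crudini --set /etc/keystone/keystone.conf oslo_policy enforce_scope false

    # Restart keystone afterwards -- the service name depends on how keystone
    # is deployed (shown here for an Apache mod_wsgi setup):
    systemctl restart apache2
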
> And even if the hardware used for this offer is different from the one you will find in public catalog, there is no major difference in how they manage the servers (a server is a server ;)). > > Regards, > Arnaud (from ovh / public cloud team) > > Le 5 octobre 2021 16:52:16 GMT+02:00, "P. P." a ?crit : > Hello all, > > I know that OVH uses Openstack to offer their public cloud services. > > I would like to know if someone was able to use their dedicated servers to build a private cloud based on Openstack. > > Do you think OVH dedicated server hardware + Vrack can provide sufficient requirements for a production environment? > > Thank you. > P. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at bitswalk.com Wed Oct 6 12:52:53 2021 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Wed, 6 Oct 2021 14:52:53 +0200 Subject: [KEYSTONE][POLICIES] - Overrides that don't work? Message-ID: Hi team, I'm having a weird behavior with my Openstack platform that makes me think I may have misunderstood some mechanisms on the way policies are working and especially the overriding. So, long story short, I've few services that get custom policies such as glance that behave as expected, Keystone's one aren't. All in all, here is what I'm understanding of the mechanism: This is the keystone policy that I'm looking to override: https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/ This policy default can be found in here: https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 Here is the policy that I'm testing: https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/ I know, this policy isn't taking care of the admin role but it's not the point. >From my understanding, any user with the project-manager role should be able to add any available user on any available group as long as the project-manager domain is the same as the target. However, when I'm doing that, keystone complains that I'm not authorized to do so because the user token scope is 'PROJECT' where it should be 'SYSTEM' or 'DOMAIN'. Now, I wouldn't be surprised of that message being thrown out with the default policy as it's stated on the code with the following: https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 So the question is, if the custom policy doesn't override the default scope_types how am I supposed to make it work? I hope it was clear enough, but if not, feel free to ask me for more information. PS: I've tried to assign this role with a domain scope to my user and I've still the same issue. Thanks a lot everyone! -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Oct 6 13:09:19 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 6 Oct 2021 15:09:19 +0200 Subject: [monasca][Release-job-failures] Release of openstack/monasca-agent for ref refs/tags/6.0.0 failed In-Reply-To: References: Message-ID: Hello Monasca team, Please have a look at the job error below. Indeed your publish-monasca-agent-docker-images fail with a cryptography error [1]. Indeed cryptography has been recently upgraded from version 3.4.8 to the version 35.0.0 [2]. This could explain the reason why this job fails to build rust. We think that your used docker image needs some updating to solve this issue. For more details about the experienced issue please have a look at the jobs links below (the forwarded email). 
You should also note that the same problem appears with monasca-notification. Thank you for reading. [1] ``` 2021-10-06 11:22:30.035239 | ubuntu-focal | writing manifest file 'src/cryptography.egg-info/SOURCES.txt' 2021-10-06 11:22:30.035260 | ubuntu-focal | copying src/cryptography/py.typed -> build/lib.linux-x86_64-3.6/cryptography 2021-10-06 11:22:30.035282 | ubuntu-focal | creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035303 | ubuntu-focal | copying src/cryptography/hazmat/bindings/_rust/__init__.pyi -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035325 | ubuntu-focal | copying src/cryptography/hazmat/bindings/_rust/asn1.pyi -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035358 | ubuntu-focal | copying src/cryptography/hazmat/bindings/_rust/ocsp.pyi -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035381 | ubuntu-focal | copying src/cryptography/hazmat/bindings/_rust/x509.pyi -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035422 | ubuntu-focal | running build_ext 2021-10-06 11:22:30.035446 | ubuntu-focal | generating cffi module 'build/temp.linux-x86_64-3.6/_openssl.c' 2021-10-06 11:22:30.035467 | ubuntu-focal | creating build/temp.linux-x86_64-3.6 2021-10-06 11:22:30.035489 | ubuntu-focal | running build_rust 2021-10-06 11:22:30.035510 | ubuntu-focal | 2021-10-06 11:22:30.035532 | ubuntu-focal | =============================DEBUG ASSISTANCE============================= 2021-10-06 11:22:30.035553 | ubuntu-focal | If you are seeing a compilation error please try the following steps to 2021-10-06 11:22:30.035575 | ubuntu-focal | successfully install cryptography: 2021-10-06 11:22:30.035596 | ubuntu-focal | 1) Upgrade to the latest pip and try again. This will fix errors for most 2021-10-06 11:22:30.035618 | ubuntu-focal | users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip 2021-10-06 11:22:30.035639 | ubuntu-focal | 2) Read https://cryptography.io/en/latest/installation/ for specific 2021-10-06 11:22:30.035660 | ubuntu-focal | instructions for your platform. 2021-10-06 11:22:30.035682 | ubuntu-focal | 3) Check our frequently asked questions for more information: 2021-10-06 11:22:30.035703 | ubuntu-focal | https://cryptography.io/en/latest/faq/ 2021-10-06 11:22:30.035724 | ubuntu-focal | 4) Ensure you have a recent Rust toolchain installed: 2021-10-06 11:22:30.035763 | ubuntu-focal | https://cryptography.io/en/latest/installation/#rust 2021-10-06 11:22:30.035786 | ubuntu-focal | 2021-10-06 11:22:30.035807 | ubuntu-focal | Python: 3.6.8 2021-10-06 11:22:30.035828 | ubuntu-focal | platform: Linux-5.4.0-88-generic-x86_64-with 2021-10-06 11:22:30.035850 | ubuntu-focal | pip: n/a 2021-10-06 11:22:30.035871 | ubuntu-focal | setuptools: 58.2.0 2021-10-06 11:22:30.035893 | ubuntu-focal | setuptools_rust: 0.12.1 2021-10-06 11:22:30.035914 | ubuntu-focal | =============================DEBUG ASSISTANCE============================= 2021-10-06 11:22:30.035936 | ubuntu-focal | 2021-10-06 11:22:30.035959 | ubuntu-focal | error: can't find Rust compiler 2021-10-06 11:22:30.035981 | ubuntu-focal | 2021-10-06 11:22:30.036009 | ubuntu-focal | If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler. 
2021-10-06 11:22:30.036040 | ubuntu-focal |
2021-10-06 11:22:30.036061 | ubuntu-focal | To update pip, run:
2021-10-06 11:22:30.036082 | ubuntu-focal |
2021-10-06 11:22:30.036103 | ubuntu-focal | pip install --upgrade pip
2021-10-06 11:22:30.036124 | ubuntu-focal |
2021-10-06 11:22:30.036145 | ubuntu-focal | and then retry package installation.
2021-10-06 11:22:30.036166 | ubuntu-focal |
2021-10-06 11:22:30.036206 | ubuntu-focal | If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain.
2021-10-06 11:22:30.036237 | ubuntu-focal |
2021-10-06 11:22:30.036259 | ubuntu-focal | This package requires Rust >=1.41.0.
2021-10-06 11:22:30.036280 | ubuntu-focal | ----------------------------------------
2021-10-06 11:22:30.036305 | ubuntu-focal | ERROR: Failed building wheel for cryptography
```

[2] https://opendev.org/openstack/requirements/commit/1fa22ce584ef8a5f5ec0c0e606e5e0daf38de148

---------- Forwarded message ---------
Date: Wed, 6 Oct 2021 at 13:37
Subject: [Release-job-failures] Release of openstack/monasca-agent for ref refs/tags/6.0.0 failed

Build failed.

- openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/1bb52ee7d5e74741b8b5f180ba48061f : SUCCESS in 1m 32s
- release-openstack-python https://zuul.opendev.org/t/openstack/build/ecd6b609a8f1440f98f521d018aae2bb : SUCCESS in 5m 39s
- announce-release https://zuul.opendev.org/t/openstack/build/6022c2ed55bb411498b5359ed606a3a1 : SUCCESS in 7m 43s
- propose-update-constraints https://zuul.opendev.org/t/openstack/build/0f767adb04294e05ab2f54175c38c12e : SUCCESS in 7m 46s
- publish-monasca-agent-docker-images https://zuul.opendev.org/t/openstack/build/09e6e50adb8649e8a61083e8fa0cc602 : POST_FAILURE in 13m 19s

_______________________________________________
Release-job-failures mailing list
Release-job-failures at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud

From fungi at yuggoth.org Wed Oct 6 13:44:56 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 6 Oct 2021 13:44:56 +0000
Subject: [monasca][Release-job-failures] Release of openstack/monasca-agent for ref refs/tags/6.0.0 failed
Message-ID: <20211006134456.k2fwb3zr5mriuvt2@yuggoth.org>

On 2021-10-06 15:09:19 +0200 (+0200), Herve Beraud wrote:
[...]
> cryptography has been recently upgraded from version 3.4.8 to the
> version 35.0.0
[...]

Note that was updated on master, not stable/xena, but the container image build seems to have chosen the master branch constraints.
--
Jeremy Stanley
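Following Jeremy's observation, two hedged ways out of the failure above inside the image build — pin to the stable branch constraints (which keep cryptography at its Xena version) so the prebuilt wheel is used, or provide a Rust toolchain so the sdist can build:

    # 1) A new enough pip can use the prebuilt manylinux wheel; pair it
    #    with the stable/xena upper constraints instead of master's
    pip install --upgrade pip
    pip install -c https://releases.openstack.org/constraints/upper/xena cryptography

    # 2) ...or, if building from source is unavoidable, install Rust first
    #    (assumes a Debian/Ubuntu base image; cryptography needs rust >= 1.41)
    apt-get update && apt-get install -y rustc cargo
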
From adivya1.singh at gmail.com Wed Oct 6 14:01:04 2021
From: adivya1.singh at gmail.com (Adivya Singh)
Date: Wed, 6 Oct 2021 19:31:04 +0530
Subject: API used for update Image in Glance
Message-ID:

Hi Team,

Can you please tell me what I have to do differently when updating an image in Glance using an API call? Can somebody share the syntax for it? Do I need to create a JSON file for it?

Regards
Adivya Singh

From senrique at redhat.com Wed Oct 6 14:12:26 2021
From: senrique at redhat.com (Sofia Enriquez)
Date: Wed, 6 Oct 2021 11:12:26 -0300
Subject: [cinder] Bug deputy report for week of 10-06-2021
Message-ID:

This is a bug report from 09-29-2021 to 10-06-2021.
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting
-----------------------------------------------------------------------------------------
High
- https://bugs.launchpad.net/cinder/+bug/1946059 "NFS: revert to snapshot not working". Assigned to Rajat Dhasmana.

Medium
- https://bugs.launchpad.net/cinder/+bug/1945824 "[Pure Storage] Clone CG from CG snapshot fails in PowerVC". Assigned to Simon Dodsley.
- https://bugs.launchpad.net/cinder/+bug/1945571 "C-bak configure more than one worker issue". Unassigned.

Low
- https://bugs.launchpad.net/cinder/+bug/1946167 "ddt version incompatibility for victoria branch". Unassigned.

Incomplete
- https://bugs.launchpad.net/cinder/+bug/1945500 "[stable/wallaby] filter reserved image properties". Unassigned.

Cheers,
--
Sofía Enriquez
she/her
Software Engineer
Red Hat PnT
IRC: @enriquetaso

From akekane at redhat.com Wed Oct 6 14:19:05 2021
From: akekane at redhat.com (Abhishek Kekane)
Date: Wed, 6 Oct 2021 19:49:05 +0530
Subject: API used for update Image in Glance
Message-ID:

Hello Adivya,

If you are using python-glanceclient, just run `glance help image-update` and it will list the help text for you. You can use those options to update your image information.

The general syntax is:

$ glance image-update <options> <image-id>

If you want to update the visibility of an image from public to private, then:

glance image-update --visibility private <image-id>

Thanks & Best Regards,

Abhishek Kekane

On Wed, Oct 6, 2021 at 7:35 PM Adivya Singh wrote:
> [...]

From berndbausch at gmail.com Wed Oct 6 14:26:00 2021
From: berndbausch at gmail.com (Bernd Bausch)
Date: Wed, 6 Oct 2021 23:26:00 +0900
Subject: API used for update Image in Glance
Message-ID:

There is an /Update Image/ API, documented at https://docs.openstack.org/api-ref/image/v2/index.html?expanded=update-image-detail#update-image. It does require an HTTP request body in JSON format.

However, it updates the /image catalog entry/, not the image data. If you want to replace image data, that is not possible, since the image catalog entry contains a checksum that can't be modified. Modified image data would no longer correspond to the checksum (see the second note under https://docs.openstack.org/api-ref/image/v2, which also states that "images are immutable").

Bernd Bausch

On 2021/10/06 11:01 PM, Adivya Singh wrote:
> Can you please tell me what I have to do differently when updating an
> image in Glance using an API call? Can somebody share the syntax for
> it? Do I need to create a JSON file for it?
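A sketch of the Update Image call Bernd references, using the JSON-Patch media type Glance expects. The endpoint, token variable, and image ID are placeholders:

    # PATCH the image record (its metadata, not its data); the request body
    # is a JSON Patch document, as the documentation above describes
    curl -s -X PATCH "http://controller:9292/v2/images/<image-id>" \
      -H "X-Auth-Token: $OS_TOKEN" \
      -H "Content-Type: application/openstack-images-v2.1-json-patch" \
      -d '[{"op": "replace", "path": "/visibility", "value": "private"}]'
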
From hberaud at redhat.com Wed Oct 6 14:32:21 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 6 Oct 2021 16:32:21 +0200
Subject: OpenStack Xena is officially released!
Message-ID:

The official OpenStack Xena release announcement has been sent out:

http://lists.openstack.org/pipermail/openstack-announce/2021-October/002056.html

Thanks to all who were a part of the Xena development cycle!

This marks the official opening of the releases repo for Yoga, and freezes are now lifted. Xena is now a fully normal stable branch, and the normal stable policy now applies.

Thanks!

Hervé Beraud and the Release Management team

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud

From ralonsoh at redhat.com Wed Oct 6 15:01:21 2021
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Wed, 6 Oct 2021 17:01:21 +0200
Subject: OpenStack Xena is officially released!
Message-ID:

Congratulations on this new release! And thank you all.

On Wed, Oct 6, 2021 at 4:40 PM Herve Beraud wrote:
> The official OpenStack Xena release announcement has been sent out:
> [...]

From amy at demarco.com Wed Oct 6 15:07:46 2021
From: amy at demarco.com (Amy Marrich)
Date: Wed, 6 Oct 2021 10:07:46 -0500
Subject: OpenStack Xena is officially released!
Message-ID:

Great work everyone!

Amy (spotz)

On Wed, Oct 6, 2021 at 9:36 AM Herve Beraud wrote:
> The official OpenStack Xena release announcement has been sent out:
> [...]

From radoslaw.piliszek at gmail.com Wed Oct 6 15:26:02 2021
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Wed, 6 Oct 2021 17:26:02 +0200
Subject: [masakari] Yoga PTG
Message-ID:

Hello all,

This is a reminder that the Masakari Yoga PTG is to happen on Tuesday, October 19, 2021, 06:00-08:00 (UTC). Please add your name and discussion topic proposals to the etherpad [1].

Thank you in advance and see you soon!

[1] https://etherpad.opendev.org/p/masakari-yoga-ptg

-yoctozepto

From jing.c.zhang at nokia.com Wed Oct 6 12:29:42 2021
From: jing.c.zhang at nokia.com (Zhang, Jing C. (Nokia - CA/Ottawa))
Date: Wed, 6 Oct 2021 12:29:42 +0000
Subject: [Octavia] Can not create LB on SRIOV network
Message-ID:

I cannot create an Octavia LB on an SRIOV network in Train. I went to the Octavia story board and did a search, but was unable to find a story for SRIOV.

I left a comment under the story below; I re-post my questions here, hoping someone knows the answer.

Thank you so much

Jing

https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV Interface Config Guide (Openstack)

Hi,
In the OpenStack Train release, creating an Octavia LB on an SRIOV network fails.
I came here to search whether there is already a plan to add this support, and saw this story.
This story gives the impression that the capability is already supported, and that it is just a matter of adding a user guide.
So my question is: in which OpenStack release is creating an LB on an SRIOV network supported?
Thank you

From arnaud.morin at gmail.com Wed Oct 6 17:20:15 2021
From: arnaud.morin at gmail.com (Arnaud)
Date: Wed, 06 Oct 2021 19:20:15 +0200
Subject: [install] Install on OVH dedicated servers
In-Reply-To: <08932F82-3726-49BA-BC52-B0478C90CE03@gmail.com>
Message-ID: <688F546C-5172-474F-9C07-DBD232FA053F@gmail.com>

More is always better ;)

1G might not be enough for storage, but again, it depends on the workload and how your storage will be used. And for compute, 1G might also not be enough. On the other hand, it should be enough for a starting lab growing slowly.

Cheers,
Arnaud

On 6 October 2021 at 12:27, "P. P." wrote:
> My main concern is which vRack networking speed to choose: they offer
> 1 Gbps on the mid-range Advance servers up to 25 Gbps on the high-end
> Scale servers.
>
> Any suggestion on the minimal bandwidth requirement per type of node?
> [...]

From tburke at nvidia.com Wed Oct 6 19:06:08 2021
From: tburke at nvidia.com (Timothy Burke)
Date: Wed, 6 Oct 2021 19:06:08 +0000
Subject: [Swift][Interop] PTG time
References: <20210902092144.1a44225f@suzdal.zaitcev.lan>
Message-ID:

Sorry for the delay in getting back to you -- yeah, Monday, 16:00 UTC should be fine. What all would you like to discuss? Is there any prep it'd be nice for me to do ahead of the meeting?

Tim

________________________________
From: Arkady Kanevsky
Sent: Tuesday, October 5, 2021 2:15 PM
To: Pete Zaitcev
Cc: openstack-discuss
Subject: Re: [Swift][Interop] PTG time

Swift team,
can we confirm the date and time for a joint meeting between Swift and the Interop WG?
Will Monday 16:00 or 16:30 UTC work for you?
Thanks,
Arkady
[...]

From akanevsk at redhat.com Wed Oct 6 20:07:02 2021
From: akanevsk at redhat.com (Arkady Kanevsky)
Date: Wed, 6 Oct 2021 15:07:02 -0500
Subject: [Swift][Interop] PTG time
Message-ID:

Tim,
I want to refresh the Swift team on Interop and what it covers for Swift. Then we can discuss what we are proposing for the next guideline for Swift and ask you for feedback on it, and discuss any changes in Tempest coverage and what additional functionality & tests happened in the Xena cycle. Finally, we can consider whether any of the APIs introduced in previous cycles that are not covered by interop guidelines are ready for promotion to interoperability coverage.
I will share a short slide deck a week before the meeting.
Thanks,
Arkady

On Wed, Oct 6, 2021 at 2:06 PM Timothy Burke wrote:
> Sorry for the delay in getting back to you -- yeah, Monday, 16:00 UTC
> should be fine. What all would you like to discuss? Is there any prep
> it'd be nice for me to do ahead of the meeting?
> [...]

--
Arkady Kanevsky, Ph.D.
Phone: 972 707-6456
Corporate Phone: 919 729-5744 ext. 8176456

From johnsomor at gmail.com Wed Oct 6 20:47:40 2021
From: johnsomor at gmail.com (Michael Johnson)
Date: Wed, 6 Oct 2021 13:47:40 -0700
Subject: [Octavia] Can not create LB on SRIOV network
Message-ID:

Hi Jing,

To my knowledge, no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item [1]. It will require some development effort, as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV.

You might be able to accomplish it on Train using the flavors capability. You would create a special nova flavor with the required "extra_specs" [2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor [3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port.

This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access the members.

I have not tried this and would be interested to hear if it works for you.

If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda [4] and joining us at the virtual PTG.

Michael

[1] https://wiki.openstack.org/wiki/Octavia/Roadmap
[2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html
[3] https://docs.openstack.org/octavia/latest/admin/flavors.html
[4] https://etherpad.opendev.org/p/yoga-ptg-octavia

On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote:
>
> I cannot create an Octavia LB on an SRIOV network in Train. [...]
>
> So my question is: in which OpenStack release is creating an LB on an
> SRIOV network supported?
>
> Thank you
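A sketch of the flavor-based workaround Michael outlines. All names, extra specs, and IDs are illustrative, and the PCI alias must match your nova [pci] configuration — this is untried, as Michael notes:

    # Nova flavor carrying the SRIOV scheduling hints
    openstack flavor create --vcpus 2 --ram 2048 --disk 20 amphora-sriov
    openstack flavor set amphora-sriov --property pci_passthrough:alias=sriov-nic:1

    # Octavia flavor profile/flavor pointing the amphora driver at it
    openstack loadbalancer flavorprofile create --name amp-sriov \
      --provider amphora --flavor-data '{"compute_flavor": "<nova-flavor-id>"}'
    openstack loadbalancer flavor create --name sriov \
      --flavorprofile amp-sriov --enable

    # Create the LB on a pre-created neutron SRIOV port as the VIP
    openstack loadbalancer create --name lb1 --flavor sriov \
      --vip-port-id <sriov-port-id>
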
From gmann at ghanshyammann.com Thu Oct 7 00:25:52 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 06 Oct 2021 19:25:52 -0500
Subject: [all][tc] Technical Committee next weekly meeting on Oct 7th at 1500 UTC
In-Reply-To: <17c4d7a0f6d.1124813d4556887.8685917442078267933@ghanshyammann.com>
Message-ID: <17c58246528.cff0e3a1628401.1133517936975134547@ghanshyammann.com>

Hello Everyone,

Below is the agenda for tomorrow's TC meeting, scheduled at 1500 UTC. yoctozepto will chair tomorrow's meeting. It will be a video call on Google Meet; the details are in the link below:

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

== Agenda for tomorrow's TC meeting ==

* Roll call
* Follow up on past action items
* Gate health check (dansmith/yoctozepto)
** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/
* Project health-checks framework
** https://etherpad.opendev.org/p/health_check
** https://review.opendev.org/c/openstack/governance/+/810037
* Stable team process change
** https://review.opendev.org/c/openstack/governance/+/810721
* Xena Tracker
** https://etherpad.opendev.org/p/tc-xena-tracker
* Technical Writing (doc) SIG needs a chair and more maintainers
** The current chair (and only maintainer in this SIG), Stephen Finucane, will not continue in the next cycle (Yoga)
** http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025161.html
* Place to maintain the externally hosted ELK, E-R, O-H services
** https://etherpad.opendev.org/p/elk-service-maintenance-plan
* Open Reviews
** https://review.opendev.org/q/projects:openstack/governance+is:open

-gmann

---- On Mon, 04 Oct 2021 17:43:37 -0500 Ghanshyam Mann wrote ----
> Hello Everyone,
>
> The Technical Committee's next weekly meeting is scheduled for Oct 7th
> at 1500 UTC.
> [...]

From sorrison at gmail.com Thu Oct 7 02:28:55 2021
From: sorrison at gmail.com (Sam Morrison)
Date: Thu, 7 Oct 2021 13:28:55 +1100
Subject: [kolla] parent tags
Message-ID: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com>

I'm trying to be able to build a project's container without having to rebuild the parents, which have different tags.

The workflow I'm trying to achieve is:

Build base and openstack-base with a tag of wallaby.

Build a container image for barbican with a tag of the version of barbican that is returned by `git describe`.
Build a container image for nova with a tag of the version of nova that is returned by `git describe`.
etc., etc.

I don't seem to be able to do this without having to also build a new base and openstack-base with the same tag, which is slow and also means a lot of disk space.

Just wondering how other people do this sort of stuff?
Any ideas?

Thanks,
Sam

From bkslash at poczta.onet.pl Thu Oct 7 06:58:51 2021
From: bkslash at poczta.onet.pl (Adam Tomas)
Date: Thu, 7 Oct 2021 08:58:51 +0200
Subject: Neutron VPNaaS - how to change driver from OpenSwan to StrongSwan? [kolla-ansible][neutron]
Message-ID: <1BFF5034-4A5D-4AE6-B444-08A26F594F46@poczta.onet.pl>

Hi everyone,

Because I still have a problem with growing memory consumption when the vpnaas extension is enabled (https://bugs.launchpad.net/neutron/+bug/1940071), I'm trying to test other solutions. And now — while OpenStack uses strongSwan as the vpnaas driver (and this driver is described in the documentation), kolla-ansible (v11, Victoria) uses Openswan. So is there any way to force kolla-ansible to use strongSwan?

Best regards
Adam Tomas
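One way this is typically approached (a hypothetical sketch — the override file name and section assume kolla-ansible's config-merge convention; verify against the config your deployed VPN agent container actually reads before relying on it):

    # Point the VPN agent at the in-tree strongSwan device driver
    mkdir -p /etc/kolla/config/neutron
    cat > /etc/kolla/config/neutron/neutron_vpnaas.conf <<'EOF'
    [vpnagent]
    vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver
    EOF

    # Regenerate the neutron config and restart the affected containers
    kolla-ansible -i inventory reconfigure --tags neutron

Note that the container image would also need strongSwan installed for this to work.
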
And now - while openstack uses strongswan (and this driver is described in documentation) as vpnaas driver, kolla-ansible (v11, victoria) uses openswan? So is there any way to force kolla-ansible to use strongswan? Best regards Adam Toma? From mark at stackhpc.com Thu Oct 7 08:41:55 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 7 Oct 2021 09:41:55 +0100 Subject: [kolla] parent tags In-Reply-To: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> References: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> Message-ID: Hi Sam, I don't generally do that, and Kolla isn't really set up to make it easy. You could tag the base containers with the new tag: docker pull -base:wallaby docker tag -base:wallaby -base: Mark On Thu, 7 Oct 2021 at 03:34, Sam Morrison wrote: > > I?m trying to be able to build a projects container without having to rebuild the parents which have different tags. > > The workflow I?m trying to achieve is: > > Build base and openstack-base with a tag of wallaby > > Build a container image for barbican with a tag of the version of barbican that is returned when doing `git describe` > Build a container image for nova with a tag of the version of barbican that is returned when doing `git describe` > etc.etc. > > I don?t seem to be able to do this without having to also build a new base and openstack-base with the same tag which is slow and also means a lot of disk space. > > Just wondering how other people do this sort of stuff? > Any ideas? > > Thanks, > Sam > > > From amy at demarco.com Thu Oct 7 13:53:32 2021 From: amy at demarco.com (Amy Marrich) Date: Thu, 7 Oct 2021 08:53:32 -0500 Subject: RDO vSocial during the PTG Message-ID: Hi Everyone, I'm pleased to announce that RDO will be sponsoring a virtual social during the PTG, Thursday at 17:00 during the break. Last PTG's Trivia Social was a great success, but to do something different this time around we will be doing a virtual Escape Room. The room is a mixture of text and images so should be bandwidth friendly and we'll use Meetpad for the breakout rooms. We'll be doing an Intermediate level room and the team that finishes first will receive prizes! Because I need to purchase passes you will need to register in advance at: https://eventyay.com/e/e7299da7 There is a team signup page you'll receive after registering where you can add your name to a team of 5-8 people. While we can definitely have more people participating in the teams then passes, the intent is to allow everyone to actively participate. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonanderson at uchicago.edu Thu Oct 7 15:47:24 2021 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Thu, 7 Oct 2021 15:47:24 +0000 Subject: [kolla] parent tags In-Reply-To: References: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> Message-ID: <19227A29-3F33-4EF3-B68B-AC6ABF87FB2B@uchicago.edu> Sam, I think Mark?s idea is in general stronger than what I will describe, if all you?re after is different aliases. It sounds like you are trying to iterate on two images (Barbican and Nova), presumably changing the source of the former frequently, and don?t want to build the entire ancestor chain each time. I had to do something similar because we have a fork of Horizon we work on a lot. 
Here is my hacky solution: https://github.com/ChameleonCloud/kolla/commit/79611111c03cc86be91a86a9ccd296abc7aa3a3e We are on Train w/ some other Kolla forks so I can?t guarantee that will apply cleanly, but it?s a small change. It involves adding build-args to some Dockerfiles, in your case I suppose barbican-base, but also nova-base. It?s a bit clunky but gets the job done for us. /Jason On Oct 7, 2021, at 3:41 AM, Mark Goddard > wrote: Hi Sam, I don't generally do that, and Kolla isn't really set up to make it easy. You could tag the base containers with the new tag: docker pull -base:wallaby docker tag -base:wallaby -base: Mark On Thu, 7 Oct 2021 at 03:34, Sam Morrison > wrote: I?m trying to be able to build a projects container without having to rebuild the parents which have different tags. The workflow I?m trying to achieve is: Build base and openstack-base with a tag of wallaby Build a container image for barbican with a tag of the version of barbican that is returned when doing `git describe` Build a container image for nova with a tag of the version of barbican that is returned when doing `git describe` etc.etc. I don?t seem to be able to do this without having to also build a new base and openstack-base with the same tag which is slow and also means a lot of disk space. Just wondering how other people do this sort of stuff? Any ideas? Thanks, Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu Oct 7 15:52:40 2021 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 7 Oct 2021 17:52:40 +0200 Subject: [neutron] Drivers meeting agenda - 08.10.2021 Message-ID: Hi Neutrinos, The agenda for tomorrow's drivers meeting is at [1]. We have 1 RFE to discuss: * https://bugs.launchpad.net/neutron/+bug/1946251 API: allow to disable anti-spoofing but not SGs [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda See you at the meeting tomorrow. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp.methot at planethoster.info Thu Oct 7 21:37:37 2021 From: jp.methot at planethoster.info (J-P Methot) Date: Thu, 7 Oct 2021 17:37:37 -0400 Subject: [neutron] East-West networking issue on DVR after failed attempt at starting a new instance Message-ID: <96819905-f32e-546a-83f3-33c390631907@planethoster.info> Hi, We use Openstack Wallaby installed through Kolla-ansible on this setup. Here's a quick rundown of the issue we just noticed: -We try popping an instance which fails because of a storage issue. -Nova tries to create the instance on 3 different nodes before failing. -We notice that instances on these 3 nodes and only those instances cannot connect to each other anymore. -Doing Tcpdump tests, we realize that pings are received by each instance, but never replied to. -Restarting the neutron-openvswitch-agent container fixes this issue. I suspect l2population might have something to do with this. Is the ARP table rebuilt when the openvswitch-agent is restarted? -- Jean-Philippe M?thot Senior Openstack system administrator Administrateur syst?me Openstack s?nior PlanetHoster inc. From jing.c.zhang at nokia.com Thu Oct 7 22:18:15 2021 From: jing.c.zhang at nokia.com (Zhang, Jing C. (Nokia - CA/Ottawa)) Date: Thu, 7 Oct 2021 22:18:15 +0000 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Hi Michael, Thank you so much for the information. 
I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: https://docs.openstack.org/nova/train/admin/pci-passthrough.html https://docs.openstack.org/nova/latest/admin/pci-passthrough.html ========================= Here is the detail: Env: NIC is intel 82599, creating VM with SRIOV direct port works well. Nova.conf passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} Sriov_agent.ini [sriov_nic] physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } (2) Used the extra-spec in nova flavor openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF The sriov agent log does not show port event for any of them. -----Original Message----- From: Michael Johnson Sent: Wednesday, October 6, 2021 4:48 PM To: Zhang, Jing C. (Nokia - CA/Ottawa) Cc: openstack-discuss at lists.openstack.org Subject: Re: [Octavia] Can not create LB on SRIOV network Hi Jing, To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. I have not tried this and would be interested to hear if it works for you. If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. Michael [1] https://wiki.openstack.org/wiki/Octavia/Roadmap [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html [3] https://docs.openstack.org/octavia/latest/admin/flavors.html [4] https://etherpad.opendev.org/p/yoga-ptg-octavia On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). 
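For reference, the alias-based flow in the Train docs pairs a [pci] alias with a whitelist entry whose vendor_id/product_id match it, both under the [pci] section of nova.conf; a sketch using the 82599 VF ids from above (an untested assumption for this environment, and whether the physical_network tag can coexist with alias-based scheduling is exactly the open question here):

[pci]
passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed", "physical_network": "physnet5"}
alias = {"vendor_id": "8086", "product_id": "10ed", "device_type": "type-VF", "name": "vf", "numa_policy": "required"}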
> > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. > > > > Thank you so much > > > > Jing > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > Interface Config Guide (Openstack) > > > > Hi, > In Openstack train release, creating Octavia LB on SRIOV network fails. > I come here to search if there is already a plan to add this support, and see this story. > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > Thank you > > > > > > > > From jing.c.zhang at nokia.com Fri Oct 8 00:36:08 2021 From: jing.c.zhang at nokia.com (Zhang, Jing C. (Nokia - CA/Ottawa)) Date: Fri, 8 Oct 2021 00:36:08 +0000 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Hi Michael, I made a mistake when creating VM manually, I should use --nic option not --network option. After correcting that, I can create VM with the extra-flavor: $ openstack server create --flavor octavia-flavor --image Centos7 --nic port-id=test-port --security-group demo-secgroup --key-name demo-key test-vm $ nova list --all --fields name,status,host,networks | grep test-vm | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149 A 2nd VF interface is seen inside the VM: [centos at test-vm ~]$ ip a ... 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff This MAC is not seen by neutron though: $ openstack port list | grep 0a:b2:d4:85:a2:e6 [empty] ===================== However when I tried to create LB with the same VM flavor, it failed at the same place as before. Looking at worker.log, it seems the error is similar to use --network option to create the VM manually. But you are the expert. "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52" Here is the full list of command line: $ openstack flavor list | grep octavia-flavor | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True | openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}' openstack loadbalancer flavor create --name of1 --flavorprofile ofp1 --enable openstack loadbalancer create --name lb1 --flavor of1 --vip-port-id test-port --vip-subnet-id ext-subnet1 |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. 
2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker loadbalancer, loadbalancer.vip, amphora, subnet) 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker interface = self._plug_amphora_vip(amphora, subnet) 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker raise base.PlugVIPException(message) 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker -----Original Message----- From: Zhang, Jing C. (Nokia - CA/Ottawa) Sent: Thursday, October 7, 2021 6:18 PM To: Michael Johnson Cc: openstack-discuss at lists.openstack.org Subject: RE: [Octavia] Can not create LB on SRIOV network Hi Michael, Thank you so much for the information. I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: https://docs.openstack.org/nova/train/admin/pci-passthrough.html https://docs.openstack.org/nova/latest/admin/pci-passthrough.html ========================= Here is the detail: Env: NIC is intel 82599, creating VM with SRIOV direct port works well. 
Nova.conf passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} Sriov_agent.ini [sriov_nic] physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } (2) Used the extra-spec in nova flavor openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF The sriov agent log does not show port event for any of them. -----Original Message----- From: Michael Johnson Sent: Wednesday, October 6, 2021 4:48 PM To: Zhang, Jing C. (Nokia - CA/Ottawa) Cc: openstack-discuss at lists.openstack.org Subject: Re: [Octavia] Can not create LB on SRIOV network Hi Jing, To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. I have not tried this and would be interested to hear if it works for you. If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. Michael [1] https://wiki.openstack.org/wiki/Octavia/Roadmap [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html [3] https://docs.openstack.org/octavia/latest/admin/flavors.html [4] https://etherpad.opendev.org/p/yoga-ptg-octavia On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. > > > > Thank you so much > > > > Jing > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > Interface Config Guide (Openstack) > > > > Hi, > In Openstack train release, creating Octavia LB on SRIOV network fails. > I come here to search if there is already a plan to add this support, and see this story. > This story gives the impression that the capability is already supported, it is a matter of adding user guide. 
> So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > Thank you > > > > > > > > From skaplons at redhat.com Fri Oct 8 06:06:11 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 08 Oct 2021 08:06:11 +0200 Subject: [neutron] East-West networking issue on DVR after failed attempt at starting a new instance In-Reply-To: <96819905-f32e-546a-83f3-33c390631907@planethoster.info> References: <96819905-f32e-546a-83f3-33c390631907@planethoster.info> Message-ID: <2649897.lI8ThQJ3AA@p1> Hi, On czwartek, 7 pa?dziernika 2021 23:37:37 CEST J-P Methot wrote: > Hi, > > We use Openstack Wallaby installed through Kolla-ansible on this setup. > Here's a quick rundown of the issue we just noticed: > > -We try popping an instance which fails because of a storage issue. > > -Nova tries to create the instance on 3 different nodes before failing. > > -We notice that instances on these 3 nodes and only those instances > cannot connect to each other anymore. > > -Doing Tcpdump tests, we realize that pings are received by each > instance, but never replied to. > > -Restarting the neutron-openvswitch-agent container fixes this issue. > > I suspect l2population might have something to do with this. Is the ARP > table rebuilt when the openvswitch-agent is restarted? If You are using dvr and l2population, You have arp_reponder enabled so arp replies for tunnel networks are done locally in the ovs bridges. When You restart neutron-openvswitch-agent, it will regenerate all OF rules so yes, if some rules were missing, restart should add them again. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From syedammad83 at gmail.com Fri Oct 8 07:13:38 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Fri, 8 Oct 2021 12:13:38 +0500 Subject: [heat] xena stack deployment failed Message-ID: Hi, I have upgraded my heat from wallaby to xena. When I am trying to create magnum cluster its giving below error in heat engine logs. Currently in whole stack, I have upgraded heat and magnum to latest release. Before upgrading heat from xena to wallaby, the stack deployment was successful. 2021-10-08 12:06:14.107 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g -i -X GET http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [404] Connection: keep-alive Content-Length: 112 Content-Type: application/json Date: Fri, 08 Oct 2021 07:06:14 GMT X-Compute-Request-Id: req-d07e25b0-375f-48e6-ac4b-d76b41848e6a X-Openstack-Request-Id: req-d07e25b0-375f-48e6-ac4b-d76b41848e6a _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: {"message": "The resource could not be found.

\n\n\n", "code": "404 Not Found", "title": "Not Found"} _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to compute for http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f used request id req-d07e25b0-375f-48e6-ac4b-d76b41848e6a request /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g -i -X GET http://controller-khi04.rapid.pk:8774/v2.1/ -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [200] Connection: keep-alive Content-Length: 399 Content-Type: application/json Date: Fri, 08 Oct 2021 07:06:14 GMT Openstack-Api-Version: compute 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef X-Openstack-Nova-Api-Version: 2.1 X-Openstack-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: {"version": {"id": "v2.1", "status": "CURRENT", "version": "2.88", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [{"rel": "self", "href": "http://controller-khi04.rapid.pk:8774/v2.1/"}, {"rel": "describedby", "type": "text/html", "href": "http://docs.openstack.org/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1"}]}} _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to compute for http://controller-khi04.rapid.pk:8774/v2.1/ used request id req-9b1f2443-fdee-41dd-9139-57376afd7bef request /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 2021-10-08 12:06:14.129 2064 INFO heat.engine.resource [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] CREATE: ServerGroup "worker_nodes_server_group" Stack "k8s-cluster6-7cnnuz4hrfrz" [5501d6a4-59a6-4f76-b25e-ec43e0822361] 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource Traceback (most recent call last): 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 916, in _action_recorder 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 1028, in _do_action 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield from self.action_handler_task(action, args=handler_args) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 970, in action_handler_task 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource handler_data = handler(*args) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File 
"/usr/lib/python3/dist-packages/heat/engine/resources/openstack/nova/server_group.py", line 98, in handle_create 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource server_group = client.server_groups.create(name=name, 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/novaclient/api_versions.py", line 393, in substitution 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource return methods[-1].func(obj, *args, **kwargs) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource TypeError: create() got an unexpected keyword argument 'policies' 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource 2021-10-08 12:06:14.143 2064 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Stack CREATE FAILED (k8s-cluster6-7cnnuz4hrfrz): Resource CREATE failed: TypeError: resources.worker_nodes_server_group: create() got an unexpected keyword argument 'policies' 2021-10-08 12:06:14.146 2064 DEBUG heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Persisting stack k8s-cluster6-7cnnuz4hrfrz status CREATE FAILED _send_notification_and_add_event /usr/lib/python3/dist-packages/heat/engine/stack.py:1109 2021-10-08 12:06:15.009 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2434 for update 2021-10-08 12:06:15.040 2063 DEBUG heat.engine.worker [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 2021-10-08 12:06:16.016 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2437 for update 2021-10-08 12:06:16.048 2061 DEBUG heat.engine.worker [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 2021-10-08 12:06:17.026 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2443 for update 2021-10-08 12:06:17.066 2062 DEBUG heat.engine.worker [req-61cb3eba-0cf0-47f7-8fdb-8e9375888dc4 - - - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 - Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From ueha.ayumu at fujitsu.com Fri Oct 8 07:59:52 2021 From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com) Date: Fri, 8 Oct 2021 07:59:52 +0000 Subject: [heat] xena stack deployment failed In-Reply-To: References: Message-ID: Hi Ammad It seems to be the same as the cause of the bug report I issued to Heat, but there has been no response from the Heat team. https://storyboard.openstack.org/#!/story/2009164 To: Heat team Could you please confirm this problem? Thanks. Regards, Ueha From: Ammad Syed Sent: Friday, October 8, 2021 4:14 PM To: openstack-discuss Subject: [heat] xena stack deployment failed Hi, I have upgraded my heat from wallaby to xena. When I am trying to create magnum cluster its giving below error in heat engine logs. Currently in whole stack, I have upgraded heat and magnum to latest release. Before upgrading heat from xena to wallaby, the stack deployment was successful. 
2021-10-08 12:06:14.107 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g -i -X GET http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3/dist-packages/keystoneauth1/session.py:519
2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [404] Connection: keep-alive Content-Length: 112 Content-Type: application/json Date: Fri, 08 Oct 2021 07:06:14 GMT X-Compute-Request-Id: req-d07e25b0-375f-48e6-ac4b-d76b41848e6a X-Openstack-Request-Id: req-d07e25b0-375f-48e6-ac4b-d76b41848e6a _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:550
2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: {"message": "The resource could not be found.<br /><br />
\n\n\n", "code": "404 Not Found", "title": "Not Found"} _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to compute for http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f used request id req-d07e25b0-375f-48e6-ac4b-d76b41848e6a request /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g -i -X GET http://controller-khi04.rapid.pk:8774/v2.1/ -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [200] Connection: keep-alive Content-Length: 399 Content-Type: application/json Date: Fri, 08 Oct 2021 07:06:14 GMT Openstack-Api-Version: compute 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef X-Openstack-Nova-Api-Version: 2.1 X-Openstack-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: {"version": {"id": "v2.1", "status": "CURRENT", "version": "2.88", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [{"rel": "self", "href": "http://controller-khi04.rapid.pk:8774/v2.1/"}, {"rel": "describedby", "type": "text/html", "href": "http://docs.openstack.org/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1"}]}} _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to compute for http://controller-khi04.rapid.pk:8774/v2.1/ used request id req-9b1f2443-fdee-41dd-9139-57376afd7bef request /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 2021-10-08 12:06:14.129 2064 INFO heat.engine.resource [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] CREATE: ServerGroup "worker_nodes_server_group" Stack "k8s-cluster6-7cnnuz4hrfrz" [5501d6a4-59a6-4f76-b25e-ec43e0822361] 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource Traceback (most recent call last): 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 916, in _action_recorder 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 1028, in _do_action 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield from self.action_handler_task(action, args=handler_args) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 970, in action_handler_task 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource handler_data = handler(*args) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File 
"/usr/lib/python3/dist-packages/heat/engine/resources/openstack/nova/server_group.py", line 98, in handle_create 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource server_group = client.server_groups.create(name=name, 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/novaclient/api_versions.py", line 393, in substitution 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource return methods[-1].func(obj, *args, **kwargs) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource TypeError: create() got an unexpected keyword argument 'policies' 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource 2021-10-08 12:06:14.143 2064 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Stack CREATE FAILED (k8s-cluster6-7cnnuz4hrfrz): Resource CREATE failed: TypeError: resources.worker_nodes_server_group: create() got an unexpected keyword argument 'policies' 2021-10-08 12:06:14.146 2064 DEBUG heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Persisting stack k8s-cluster6-7cnnuz4hrfrz status CREATE FAILED _send_notification_and_add_event /usr/lib/python3/dist-packages/heat/engine/stack.py:1109 2021-10-08 12:06:15.009 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2434 for update 2021-10-08 12:06:15.040 2063 DEBUG heat.engine.worker [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 2021-10-08 12:06:16.016 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2437 for update 2021-10-08 12:06:16.048 2061 DEBUG heat.engine.worker [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 2021-10-08 12:06:17.026 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2443 for update 2021-10-08 12:06:17.066 2062 DEBUG heat.engine.worker [req-61cb3eba-0cf0-47f7-8fdb-8e9375888dc4 - - - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 - Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Fri Oct 8 09:01:04 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Fri, 8 Oct 2021 14:31:04 +0530 Subject: [heat] xena stack deployment failed In-Reply-To: References: Message-ID: On Fri, Oct 8, 2021 at 1:31 PM ueha.ayumu at fujitsu.com < ueha.ayumu at fujitsu.com> wrote: > Hi Ammad > > > > It seems to be the same as the cause of the bug report I issued to Heat, > but there has been no response from the Heat team. > > https://storyboard.openstack.org/#!/story/2009164 > > > > To: Heat team > > Could you please confirm this problem? Thanks. > > > Yeah there is a regression. I've proposed a fix[1] now. [1] https://review.opendev.org/c/openstack/heat/+/813124 Regards, > > Ueha > > > > *From:* Ammad Syed > *Sent:* Friday, October 8, 2021 4:14 PM > *To:* openstack-discuss > *Subject:* [heat] xena stack deployment failed > > > > Hi, > > > > I have upgraded my heat from wallaby to xena. When I am trying to create > magnum cluster its giving below error in heat engine logs. 
> > > > Currently in whole stack, I have upgraded heat and magnum to latest > release. Before upgrading heat from xena to wallaby, the stack > deployment was successful. > > > > 2021-10-08 12:06:14.107 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g > -i -X GET > http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f > -H "Accept: application/json" -H "User-Agent: python-novaclient" -H > "X-Auth-Token: > {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" > -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request > /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 > 2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [404] > Connection: keep-alive Content-Length: 112 Content-Type: application/json > Date: Fri, 08 Oct 2021 07:06:14 GMT X-Compute-Request-Id: > req-d07e25b0-375f-48e6-ac4b-d76b41848e6a X-Openstack-Request-Id: > req-d07e25b0-375f-48e6-ac4b-d76b41848e6a _http_log_response > /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 > 2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: > {"message": "The resource could not be found.
<br /><br />
\n\n\n", "code": > "404 Not Found", "title": "Not Found"} _http_log_response > /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 > 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to > compute for > http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f > used request id req-d07e25b0-375f-48e6-ac4b-d76b41848e6a request > /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 > 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g > -i -X GET http://controller-khi04.rapid.pk:8774/v2.1/ -H "Accept: > application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: > {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" > -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request > /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 > 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [200] > Connection: keep-alive Content-Length: 399 Content-Type: application/json > Date: Fri, 08 Oct 2021 07:06:14 GMT Openstack-Api-Version: compute 2.1 > Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version > X-Compute-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef > X-Openstack-Nova-Api-Version: 2.1 X-Openstack-Request-Id: > req-9b1f2443-fdee-41dd-9139-57376afd7bef _http_log_response > /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 > 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: > {"version": {"id": "v2.1", "status": "CURRENT", "version": "2.88", > "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [{"rel": > "self", "href": "http://controller-khi04.rapid.pk:8774/v2.1/"}, {"rel": > "describedby", "type": "text/html", "href": "http://docs.openstack.org/"}], > "media-types": [{"base": "application/json", "type": > "application/vnd.openstack.compute+json;version=2.1"}]}} _http_log_response > /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 > 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to > compute for http://controller-khi04.rapid.pk:8774/v2.1/ used request id > req-9b1f2443-fdee-41dd-9139-57376afd7bef request > /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 > 2021-10-08 12:06:14.129 2064 INFO heat.engine.resource > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] CREATE: > ServerGroup "worker_nodes_server_group" Stack "k8s-cluster6-7cnnuz4hrfrz" > [5501d6a4-59a6-4f76-b25e-ec43e0822361] > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource Traceback (most > recent call last): > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 916, in > _action_recorder > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 1028, in > _do_action > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield from > self.action_handler_task(action, args=handler_args) > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 970, in > action_handler_task > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource 
handler_data = > handler(*args) > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/heat/engine/resources/openstack/nova/server_group.py", > line 98, in handle_create > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource server_group = > client.server_groups.create(name=name, > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/novaclient/api_versions.py", line 393, in > substitution > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource return > methods[-1].func(obj, *args, **kwargs) > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource TypeError: > create() got an unexpected keyword argument 'policies' > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource > 2021-10-08 12:06:14.143 2064 INFO heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Stack CREATE > FAILED (k8s-cluster6-7cnnuz4hrfrz): Resource CREATE failed: TypeError: > resources.worker_nodes_server_group: create() got an unexpected keyword > argument 'policies' > 2021-10-08 12:06:14.146 2064 DEBUG heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Persisting > stack k8s-cluster6-7cnnuz4hrfrz status CREATE FAILED > _send_notification_and_add_event > /usr/lib/python3/dist-packages/heat/engine/stack.py:1109 > 2021-10-08 12:06:15.009 2061 INFO heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering > resource 2434 for update > 2021-10-08 12:06:15.040 2063 DEBUG heat.engine.worker > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] > [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. > check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 > 2021-10-08 12:06:16.016 2061 INFO heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering > resource 2437 for update > 2021-10-08 12:06:16.048 2061 DEBUG heat.engine.worker > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] > [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. > check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 > 2021-10-08 12:06:17.026 2061 INFO heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering > resource 2443 for update > 2021-10-08 12:06:17.066 2062 DEBUG heat.engine.worker > [req-61cb3eba-0cf0-47f7-8fdb-8e9375888dc4 - - - - -] > [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. > check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 > > > > - Ammad > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Oct 8 11:17:10 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 8 Oct 2021 13:17:10 +0200 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images Message-ID: Hello, I've just updated my kolla wallaby with latest images. When I create volume from image on ceph it works. When I create volume from image on nfs netapp ontap, it does not work. 
The following is reported in cinder-volume.log: 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", line 950, in _create_from_image_cache_or_download 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server model_update = self._create_from_image_download( 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", line 766, in _create_from_image_download 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server volume_utils.copy_image_to_volume(self.driver, context, volume, 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", line 1158, in copy_image_to_volume 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise exception.ImageCopyFailure(reason=ex.stderr) 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server cinder.exception.ImageCopyFailure: Failed to copy image to volume: qemu-img: /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: error while converting raw: Failed to lock byte 101 Any help please ? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From midhunlaln66 at gmail.com Fri Oct 8 12:42:23 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Fri, 8 Oct 2021 18:12:23 +0530 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors Message-ID: Hi team, -->Successfully I installed Openstack ansible 23.1.0.dev35. --->I logged in to horizon and created a new network and launched a vm but I am getting an error. Error: Failed to perform requested operation on instance "hope", the instance has an error status: Please try again later [Error: Build of instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the network(s), not rescheduling.]. 
-->Then I checked log | fault | {'code': 500, 'created': '2021-10-08T12:26:44Z', 'message': 'Build of instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the network(s), not rescheduling.', 'details': 'Traceback (most recent call last):\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7235, in _create_guest_with_network\n post_xml_callback=post_xml_callback)\n File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n next(self.gen)\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 479, in wait_for_instance_event\n actual_event = event.wait()\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", line 125, in wait\n result = hub.switch()\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 313, in switch\n return self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7258, in _create_guest_with_network\n raise exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 2219, in _do_build_and_run_instance\n filter_properties, request_spec, accel_uuids)\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 2458, in _build_and_run_instance\n reason=msg)\nnova.exception.BuildAbortException: Build of instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the network(s), not rescheduling.\n'} | Please help me with this error. Thanks & Regards Midhunlal N B -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 12:45:54 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 08:45:54 -0400 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: What options are you using for the NFS client on the controllers side? There are some recommended settings that Netapp can provide. On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano wrote: > Hello, > I've just updated my kolla wallaby with latest images. When I create > volume from image on ceph it works. > When I create volume from image on nfs netapp ontap, it does not work. 
> The following is reported in cinder-volume.log: > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", > line 950, in _create_from_image_cache_or_download > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server > model_update = self._create_from_image_download( > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", > line 766, in _create_from_image_download > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server > volume_utils.copy_image_to_volume(self.driver, context, volume, > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", > line 1158, in copy_image_to_volume > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise > exception.ImageCopyFailure(reason=ex.stderr) > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server > cinder.exception.ImageCopyFailure: Failed to copy image to volume: > qemu-img: > /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: > error while converting raw: Failed to lock byte 101 > > Any help please ? > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 12:49:42 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 08:49:42 -0400 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: You will need to look at the neutron-server logs + the ovs/libviirt agent logs on the compute. The error returned from the VM creation is not useful most of the time. Was this a vxlan or vlan network? On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb wrote: > Hi team, > -->Successfully I installed Openstack ansible 23.1.0.dev35. > --->I logged in to horizon and created a new network and launched a vm > but I am getting an error. > > Error: Failed to perform requested operation on instance "hope", the > instance has an error status: Please try again later [Error: Build of > instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate > the network(s), not rescheduling.]. 
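To make the log-digging concrete: the traceback quoted above is nova giving up after 300 seconds in wait_for_instance_event, i.e. Neutron never delivered the network-vif-plugged event for the port. A hedged starting point before reading the agent logs (names and paths vary per deployment):

$ openstack network agent list
$ openstack port list --server <instance uuid>
$ openstack port show <port uuid> -c status -c binding_vif_type

A port stuck DOWN with binding_vif_type of binding_failed usually means the L2 agent on that compute is down or its bridge/physnet mappings are wrong.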
> > -->Then I checked log > > | fault | {'code': 500, 'created': > '2021-10-08T12:26:44Z', 'message': 'Build of instance > b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the > network(s), not rescheduling.', 'details': 'Traceback (most recent call > last):\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 7235, in _create_guest_with_network\n > post_xml_callback=post_xml_callback)\n File > "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n > next(self.gen)\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 479, in wait_for_instance_event\n actual_event = event.wait()\n > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", > line 125, in wait\n result = hub.switch()\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", > line 313, in switch\n return > self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring > handling of the above exception, another exception occurred:\n\nTraceback > (most recent call last):\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 7258, in _create_guest_with_network\n raise > exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: > Virtual Interface creation failed\n\nDuring handling of the above > exception, another exception occurred:\n\nTraceback (most recent call > last):\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 2219, in _do_build_and_run_instance\n filter_properties, > request_spec, accel_uuids)\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 2458, in _build_and_run_instance\n > reason=msg)\nnova.exception.BuildAbortException: Build of instance > b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the > network(s), not rescheduling.\n'} | > > Please help me with this error. > > > Thanks & Regards > Midhunlal N B > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Oct 8 12:51:24 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 8 Oct 2021 14:51:24 +0200 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: Hello Laurent, I am using nfs_mount_options = nfsvers=3,lookupcache=pos I always use the above options. I have this issue only with the last cinder images of wallaby Thanks Ignazio Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < laurentfdumont at gmail.com> ha scritto: > What options are you using for the NFS client on the controllers side? > There are some recommended settings that Netapp can provide. > > On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano > wrote: > >> Hello, >> I've just updated my kolla wallaby with latest images. When I create >> volume from image on ceph it works. >> When I create volume from image on nfs netapp ontap, it does not work. 
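One detail worth flagging on the nfsvers=3 data point above: qemu-img's "Failed to lock byte" messages come from its image locking, which takes byte-range locks on the volume file (qemu reserves the bytes from offset 100 upward for this, hence byte 101). NFSv3 only provides such locks through the separate NLM/lockd side protocol, while NFSv4 has locking built into the protocol itself, so one hedged thing to try, assuming the ONTAP SVM also exports v4.1, is:

nfs_mount_options = nfsvers=4.1,lookupcache=pos

Untested here; since only the newest Wallaby images misbehave, a qemu or cinder packaging change inside the image is just as plausible.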
>> The following is reported in cinder-volume.log: >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >> line 950, in _create_from_image_cache_or_download >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >> model_update = self._create_from_image_download( >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >> line 766, in _create_from_image_download >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >> volume_utils.copy_image_to_volume(self.driver, context, volume, >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >> line 1158, in copy_image_to_volume >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >> exception.ImageCopyFailure(reason=ex.stderr) >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >> qemu-img: >> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >> error while converting raw: Failed to lock byte 101 >> >> Any help please ? >> Ignazio >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From midhunlaln66 at gmail.com Fri Oct 8 13:05:03 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Fri, 8 Oct 2021 18:35:03 +0530 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: Hi Laurent, Thank you very much for your reply.we configured our network as per official document .Please take a look at below details. --->Controller node configured with below interfaces bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan ---> Compute node bond1,bond0,br-mgmt,br-vxlan,br-storage I don't have much more experience in openstack,I think here we used vlan network. Thanks & Regards Midhunlal N B +918921245637 On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont wrote: > You will need to look at the neutron-server logs + the ovs/libviirt agent > logs on the compute. The error returned from the VM creation is not useful > most of the time. > > Was this a vxlan or vlan network? > > On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb > wrote: > >> Hi team, >> -->Successfully I installed Openstack ansible 23.1.0.dev35. >> --->I logged in to horizon and created a new network and launched a vm >> but I am getting an error. >> >> Error: Failed to perform requested operation on instance "hope", the >> instance has an error status: Please try again later [Error: Build of >> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >> the network(s), not rescheduling.]. 
>> >> -->Then I checked log >> >> | fault | {'code': 500, 'created': >> '2021-10-08T12:26:44Z', 'message': 'Build of instance >> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >> network(s), not rescheduling.', 'details': 'Traceback (most recent call >> last):\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7235, in _create_guest_with_network\n >> post_xml_callback=post_xml_callback)\n File >> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >> next(self.gen)\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >> line 125, in wait\n result = hub.switch()\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >> line 313, in switch\n return >> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >> handling of the above exception, another exception occurred:\n\nTraceback >> (most recent call last):\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7258, in _create_guest_with_network\n raise >> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >> Virtual Interface creation failed\n\nDuring handling of the above >> exception, another exception occurred:\n\nTraceback (most recent call >> last):\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 2219, in _do_build_and_run_instance\n filter_properties, >> request_spec, accel_uuids)\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 2458, in _build_and_run_instance\n >> reason=msg)\nnova.exception.BuildAbortException: Build of instance >> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >> network(s), not rescheduling.\n'} | >> >> Please help me with this error. >> >> >> Thanks & Regards >> Midhunlal N B >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 13:07:06 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 09:07:06 -0400 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: You can try a few options to see if it helps. It might be a question of NFSv3 or V4 or the Netapp driver changes themselves. https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano wrote: > Hello Laurent, > I am using nfs_mount_options = nfsvers=3,lookupcache=pos > I always use the above options. > I have this issue only with the last cinder images of wallaby > Thanks > Ignazio > > Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < > laurentfdumont at gmail.com> ha scritto: > >> What options are you using for the NFS client on the controllers side? 
>> There are some recommended settings that Netapp can provide. >> >> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano >> wrote: >> >>> Hello, >>> I've just updated my kolla wallaby with latest images. When I create >>> volume from image on ceph it works. >>> When I create volume from image on nfs netapp ontap, it does not work. >>> The following is reported in cinder-volume.log: >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>> line 950, in _create_from_image_cache_or_download >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>> model_update = self._create_from_image_download( >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>> line 766, in _create_from_image_download >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>> line 1158, in copy_image_to_volume >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>> exception.ImageCopyFailure(reason=ex.stderr) >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>> qemu-img: >>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>> error while converting raw: Failed to lock byte 101 >>> >>> Any help please ? >>> Ignazio >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 13:14:18 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 09:14:18 -0400 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: There are essentially two types of networks, vlan and vxlan, that can be attached to a VM. Ideally, you want to look at the logs on the controllers and the compute node. Openstack-ansible seems to send stuff here https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F . On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb wrote: > Hi Laurent, > Thank you very much for your reply.we configured our network as per > official document .Please take a look at below details. > --->Controller node configured with below interfaces > bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan > > ---> Compute node > bond1,bond0,br-mgmt,br-vxlan,br-storage > > I don't have much more experience in openstack,I think here we used vlan > network. > > Thanks & Regards > Midhunlal N B > +918921245637 > > > On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont > wrote: > >> You will need to look at the neutron-server logs + the ovs/libviirt agent >> logs on the compute. The error returned from the VM creation is not useful >> most of the time. >> >> Was this a vxlan or vlan network? >> >> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >> wrote: >> >>> Hi team, >>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>> --->I logged in to horizon and created a new network and launched a vm >>> but I am getting an error. 
>>> >>> Error: Failed to perform requested operation on instance "hope", the >>> instance has an error status: Please try again later [Error: Build of >>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>> the network(s), not rescheduling.]. >>> >>> -->Then I checked log >>> >>> | fault | {'code': 500, 'created': >>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>> last):\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7235, in _create_guest_with_network\n >>> post_xml_callback=post_xml_callback)\n File >>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>> next(self.gen)\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>> line 125, in wait\n result = hub.switch()\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>> line 313, in switch\n return >>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>> handling of the above exception, another exception occurred:\n\nTraceback >>> (most recent call last):\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7258, in _create_guest_with_network\n raise >>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>> Virtual Interface creation failed\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2219, in _do_build_and_run_instance\n filter_properties, >>> request_spec, accel_uuids)\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2458, in _build_and_run_instance\n >>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>> network(s), not rescheduling.\n'} | >>> >>> Please help me with this error. >>> >>> >>> Thanks & Regards >>> Midhunlal N B >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From midhunlaln66 at gmail.com Fri Oct 8 13:53:17 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Fri, 8 Oct 2021 19:23:17 +0530 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: Hi, This is the log i am getting while launching a new vm Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds to destroy the instance on the hypervisor. 
Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds to detach 1 volumes for instance. Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to allocate network(s): nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Traceback (most recent call last): 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7235, in _create_guest_with_network 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] post_xml_callback=post_xml_callback) 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] next(self.gen) 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 479, in wait_for_instance_event 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] actual_event = event.wait() 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", line 125, in wait 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] result = hub.switch() 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 313, in switch 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] return self.greenlet.switch() 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] eventlet.timeout.Timeout: 300 seconds 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] During handling of the above exception, another exception occurred: 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Traceback (most recent call last): 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 2397, in _build_and_run_instance 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager 
[instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] accel_info=accel_info) 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 4200, in spawn 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] cleanup_instance_disks=created_disks) 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7258, in _create_guest_with_network 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] raise exception.VirtualInterfaceCreateException() 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the network(s), not rescheduling.: nova.exception.BuildAbortException: Build of instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the network(s), not rescheduling. Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds to deallocate network for instance. Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume 07041181-318b-4fae-b71e-02ac7b11bca3 Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to the instance being registered to the remote host None.: nova.exception.BuildAbortException: Build of instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the network(s), not rescheduling. 
Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] Delete attachment failed for attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: 404: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found.: nova.exception.VolumeAttachmentNotFound: Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] Deleted allocation for instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 INFO nova.compute.manager [-] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 Thanks & Regards Midhunlal N B On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont wrote: > There are essentially two types of networks, vlan and vxlan, that can be > attached to a VM. Ideally, you want to look at the logs on the controllers > and the compute node. > > Openstack-ansible seems to send stuff here > https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F > . > > On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb > wrote: > >> Hi Laurent, >> Thank you very much for your reply.we configured our network as per >> official document .Please take a look at below details. >> --->Controller node configured with below interfaces >> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >> >> ---> Compute node >> bond1,bond0,br-mgmt,br-vxlan,br-storage >> >> I don't have much more experience in openstack,I think here we used vlan >> network. >> >> Thanks & Regards >> Midhunlal N B >> +918921245637 >> >> >> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont >> wrote: >> >>> You will need to look at the neutron-server logs + the ovs/libviirt >>> agent logs on the compute. The error returned from the VM creation is not >>> useful most of the time. >>> >>> Was this a vxlan or vlan network? >>> >>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>> wrote: >>> >>>> Hi team, >>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>> --->I logged in to horizon and created a new network and launched a vm >>>> but I am getting an error. 
>>>> >>>> Error: Failed to perform requested operation on instance "hope", the >>>> instance has an error status: Please try again later [Error: Build of >>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>> the network(s), not rescheduling.]. >>>> >>>> -->Then I checked log >>>> >>>> | fault | {'code': 500, 'created': >>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>> last):\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>> line 7235, in _create_guest_with_network\n >>>> post_xml_callback=post_xml_callback)\n File >>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>> next(self.gen)\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>> File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>> line 125, in wait\n result = hub.switch()\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>> line 313, in switch\n return >>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>> handling of the above exception, another exception occurred:\n\nTraceback >>>> (most recent call last):\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>> line 7258, in _create_guest_with_network\n raise >>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>> Virtual Interface creation failed\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>> request_spec, accel_uuids)\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>> line 2458, in _build_and_run_instance\n >>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>> network(s), not rescheduling.\n'} | >>>> >>>> Please help me with this error. >>>> >>>> >>>> Thanks & Regards >>>> Midhunlal N B >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Fri Oct 8 14:07:04 2021 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Fri, 8 Oct 2021 15:07:04 +0100 Subject: [charms] Yoga PTG Message-ID: Hi all, The OpenStack charms PTG sessions are booked as: - Wednesday 20th October - 14.00 - 17.00 UTC in the Icehouse room - Thursday 21st October - 14.00 - 17.00 UTC also in the Icehouse room Please add your name and discussion topic proposals to the etherpad. [1]. The etherpad also has links to the PTG main site, schedule and Charms. Thank you in advance and see you soon! 
Alex (tinwood) [1] https://etherpad.opendev.org/p/charms-yoga-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Oct 8 14:18:25 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 8 Oct 2021 16:18:25 +0200 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: I will try next week and write test results. Thanks Il giorno ven 8 ott 2021 alle ore 15:07 Laurent Dumont < laurentfdumont at gmail.com> ha scritto: > You can try a few options to see if it helps. It might be a question of > NFSv3 or V4 or the Netapp driver changes themselves. > > > https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 > > On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano > wrote: > >> Hello Laurent, >> I am using nfs_mount_options = nfsvers=3,lookupcache=pos >> I always use the above options. >> I have this issue only with the last cinder images of wallaby >> Thanks >> Ignazio >> >> Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < >> laurentfdumont at gmail.com> ha scritto: >> >>> What options are you using for the NFS client on the controllers side? >>> There are some recommended settings that Netapp can provide. >>> >>> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano >>> wrote: >>> >>>> Hello, >>>> I've just updated my kolla wallaby with latest images. When I create >>>> volume from image on ceph it works. >>>> When I create volume from image on nfs netapp ontap, it does not work. >>>> The following is reported in cinder-volume.log: >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>> line 950, in _create_from_image_cache_or_download >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>> model_update = self._create_from_image_download( >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>> line 766, in _create_from_image_download >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>>> line 1158, in copy_image_to_volume >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>>> exception.ImageCopyFailure(reason=ex.stderr) >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>>> qemu-img: >>>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>>> error while converting raw: Failed to lock byte 101 >>>> >>>> Any help please ? >>>> Ignazio >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 14:56:25 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 10:56:25 -0400 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: These are the nova-compute logs but I think it just catches the error from the neutron component. Any logs from neutron-server, ovs-agent, libvirt-agent? 
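For example, something along these lines usually surfaces the relevant
errors. This is a sketch only; the exact unit names depend on how
openstack-ansible deployed the services, and the vif plugin in your
traceback ('linux_bridge') suggests the linuxbridge agent rather than OVS:

  # on the controller
  journalctl -u neutron-server | grep -i error | tail -20
  # on the compute
  journalctl -u neutron-linuxbridge-agent | grep -i error | tail -20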
Can you share the "openstack network show NETWORK_ID_HERE" of the network you are attaching the VM to? On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb wrote: > Hi, > This is the log i am getting while launching a new vm > > > Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 > INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds > to destroy the instance on the hypervisor. > Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 > INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds > to detach 1 volumes for instance. > Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to > allocate network(s): nova.exception.VirtualInterfaceCreateException: > Virtual Interface creation failed > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > Traceback (most recent call last): > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 7235, in _create_guest_with_network > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > post_xml_callback=post_xml_callback) > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > next(self.gen) > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 479, in wait_for_instance_event > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > actual_event = event.wait() > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", > line 125, in wait > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > result = hub.switch() > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", > line 313, in switch > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > return self.greenlet.switch() > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > eventlet.timeout.Timeout: 300 seconds > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > 2021-10-08 19:11:21.562 7324 > ERROR 
nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > During handling of the above exception, another exception occurred: > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > Traceback (most recent call last): > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 2397, in _build_and_run_instance > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > accel_info=accel_info) > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 4200, in spawn > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > cleanup_instance_disks=created_disks) > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 7258, in _create_guest_with_network > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > raise exception.VirtualInterfaceCreateException() > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > nova.exception.VirtualInterfaceCreateException: Virtual Interface creation > failed > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 > ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance > 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the > network(s), not rescheduling.: nova.exception.BuildAbortException: Build of > instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate > the network(s), not rescheduling. > Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 > INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] Successfully unplugged vif > VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') > Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 > INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds > to deallocate network for instance. 
> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 > INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume > 07041181-318b-4fae-b71e-02ac7b11bca3 > Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 > ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call > for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to > the instance being registered to the remote host None.: > nova.exception.BuildAbortException: Build of instance > 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the > network(s), not rescheduling. > Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 > ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] Delete attachment failed for attachment > 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be > found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. > (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: > 404: cinderclient.exceptions.NotFound: Volume attachment could not be found > with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP > 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) > Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 > WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due > to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be > found.: nova.exception.VolumeAttachmentNotFound: Volume attachment > 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. > Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 > INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] Deleted allocation for instance > 364564c2-bfa6-4354-a4da-a18a3fef43c3 > Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 > INFO nova.compute.manager [-] [instance: > 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) > Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 > WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 > > > Thanks & Regards > Midhunlal N B > > > > On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont > wrote: > >> There are essentially two types of networks, vlan and vxlan, that can be >> attached to a VM. Ideally, you want to look at the logs on the controllers >> and the compute node. >> >> Openstack-ansible seems to send stuff here >> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >> . >> >> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >> wrote: >> >>> Hi Laurent, >>> Thank you very much for your reply.we configured our network as per >>> official document .Please take a look at below details. 
>>> --->Controller node configured with below interfaces >>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>> >>> ---> Compute node >>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>> >>> I don't have much more experience in openstack,I think here we used vlan >>> network. >>> >>> Thanks & Regards >>> Midhunlal N B >>> +918921245637 >>> >>> >>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont >>> wrote: >>> >>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>> agent logs on the compute. The error returned from the VM creation is not >>>> useful most of the time. >>>> >>>> Was this a vxlan or vlan network? >>>> >>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>> wrote: >>>> >>>>> Hi team, >>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>> --->I logged in to horizon and created a new network and launched a >>>>> vm but I am getting an error. >>>>> >>>>> Error: Failed to perform requested operation on instance "hope", the >>>>> instance has an error status: Please try again later [Error: Build of >>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>> the network(s), not rescheduling.]. >>>>> >>>>> -->Then I checked log >>>>> >>>>> | fault | {'code': 500, 'created': >>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>> last):\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>> line 7235, in _create_guest_with_network\n >>>>> post_xml_callback=post_xml_callback)\n File >>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>> next(self.gen)\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>> File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>> line 125, in wait\n result = hub.switch()\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>> line 313, in switch\n return >>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>> (most recent call last):\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>> File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>> line 7258, in _create_guest_with_network\n raise >>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>> last):\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>> request_spec, accel_uuids)\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>> line 2458, in _build_and_run_instance\n 
>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance
>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the
>>>>> network(s), not rescheduling.\n'} |
>>>>>
>>>>> Please help me with this error.
>>>>>
>>>>>
>>>>> Thanks & Regards
>>>>> Midhunlal N B
>>>>>
>>>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From iurygregory at gmail.com Fri Oct 8 15:57:25 2021
From: iurygregory at gmail.com (Iury Gregory)
Date: Fri, 8 Oct 2021 17:57:25 +0200
Subject: [ironic] Yoga PTG schedule
Message-ID: 

Hello Ironicers!

In our etherpad [1] we have 18 topics for this PTG and we have a total of
11 slots. This is the proposed schedule (we will discuss it in our upstream
meeting on Monday).

*Monday (18 Oct) - Room Juno 15:00 - 17:00 UTC*
* Support OpenBMC
* Persistent memory Support
* Redfish Host Connection Interface
* Boot from Volume + UEFI

*Tuesday (19 Oct) - Room Juno 14:00 - 17:00 UTC*
* Posting to placement ourselves
* The rise of composable hardware, again
* Self-configuring Ironic Service
* Is there any way we can drive a co-operative use mode of ironic amongst
some of the users?

*Wednesday (20 Oct) - Room Juno 14:00 - 16:00 UTC*
* Prioritize 3rd party CI in a box
* Secure RBAC items in Yoga
* Bulk operations

*Thursday (21 Oct) - Room Kilo 14:00 - 16:00 UTC*
* having to go look at logs is an antipattern
* pxe-grub
* Remove instance (non-BFV, non-ramdisk) network booting
* Direct SDN Integrations

*Friday (22 Oct) - Room Kilo 14:00 - 16:00 UTC*
* Eliminate manual commands
* Certificate Management
* Stopping use of wiki.openstack.org

In case we don't have enough time, we can book more slots if the community
is ok and the slots are available. We will also have a section in the
etherpad for last-minute topics =)

[1] https://etherpad.opendev.org/p/ironic-yoga-ptg

-- 
*Att[]'s*
*Iury Gregory Melo Ferreira*
*MSc in Computer Science at UFCG*
*Part of the ironic-core and puppet-manager-core team in OpenStack*
*Software Engineer at Red Hat Czech*
*Social*: https://www.linkedin.com/in/iurygregory
*E-mail: iurygregory at gmail.com *

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.andre at redhat.com Fri Oct 8 16:14:55 2021
From: m.andre at redhat.com (Martin André)
Date: Fri, 8 Oct 2021 18:14:55 +0200
Subject: Introducing Hubtty, a Gertty fork for Github code reviews
Message-ID: 

Hi all,

First off, apologies if this isn't the right forum, this has nothing to do
with OpenStack development. I'm trying to reach out to the many Gertty
users hiding here who might want a similar tool for their Github code
reviews.

I'm happy to announce the first release of Hubtty [1], a fork of Gertty
that I adapted to the Github API and workflow. It has the same look and
feel but differs in a few things I detailed in the release changelog [2].

This first version focuses on porting Gertty to Github and works
reasonably well. Other intrepid developers and I already use it every day,
and I personally find it very convenient for managing incoming PR reviews.
In the coming versions I'd like to integrate better with the Github
features and improve UX.

Try it with `pip install hubtty` and let me know what you think of it.

Note that Hubtty can't submit reviews to repositories for which the parent
organization has enabled third-party application restrictions without
explicitly allowing hubtty [3]. I'm working around the issue by using a
token generated by the `gh` app.
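If you want to give it a quick spin, the short version is something like
this (a sketch: I believe the entry point is simply `hubtty`, and the
config file details are covered in the README):

  pip install hubtty
  hubtty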
Martin

[1] https://github.com/hubtty/hubtty
[2] https://github.com/hubtty/hubtty/blob/v0.1/CHANGELOG.md
[3] https://github.com/hubtty/hubtty/issues/20
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Fri Oct 8 18:00:36 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 08 Oct 2021 13:00:36 -0500
Subject: [all][tc] What's happening in Technical Committee: summary 8th Oct, 21: Reading: 5 min
Message-ID: <17c611064bc.128403fc0750677.6030027808869137231@ghanshyammann.com>

Hello Everyone,
Here is this week's summary of the Technical Committee activities.

1. TC Meetings:
============
* This week's TC video meeting was held on Thursday, Oct 7th.
* Most of the meeting discussions are summarized below (Completed or
in-progress activities section). We forgot to record the meeting;
apologies for that. Each topic discussion is summarized below, or you can
check the summary and transcript (these are autogenerated and not so
perfect, though) @
- https://meetings.opendev.org/meetings/tc/2021/tc.2021-10-07-15.01.log.html
- https://review.opendev.org/c/openstack/governance/+/813112/1/reference/tc-meeting-transcripts/OpenStack+Technical+Committee+Video+meeting+Transcript(2021-10-07).txt

* We will have next week's IRC meeting on Oct 14th, Thursday 15:00 UTC;
feel free to add topics to the agenda[1] by Oct 13th.

2. What we completed this week:
=========================
* Added the cinder-netapp charm to Openstack charms[2]
* Retired puppet-freezer[3]

3. Activities In progress:
==================
TC Tracker for Xena cycle
------------------------------
* TC is using the etherpad[4] for Xena cycle working items. We will be
checking and updating the status biweekly on the same etherpad.
* Current status is: 9 completed, 3 to-be-discussed in PTG, 1 in-progress

Open Reviews
-----------------
* Four open reviews for ongoing activities[5].

Place to maintain the external hosted ELK, E-R, O-H services
-------------------------------------------------------------------------
* We continued the discussion[6] on final checks and updates from Allison.
Below is the summary:
* Allison updated us on the offer of donated credits (45k/year from AWS)
to run the ELK services.
* The current ELK stack will be migrated to an OpenSearch (an open source
fork of Elasticsearch) cluster on AWS managed services.
* We discussed and agreed to start/move this ELK service work under the
TACT SIG with the help of Daniel Pawlik, Ross Tomlinson, and Allison
(along with Jeremy and Clark as backup/helping in migration).
* Allison will continue work on setting up the accounts.
* A huge thanks to Allison for driving and arranging the required
resources.

Add project health check tool
-----------------------------------
* Review/code stats should not be the only criterion for deciding project
health, as it depends on many other factors, including the nature of the
project.
* We agreed to use the generated stats only as an early warning tool and
not to publish them anywhere they could be wrongly interpreted as a
measure of project health.
* We will continue discussing the next steps, and what to do about TC
liaisons, in PTG.
* Meanwhile, we are reviewing Rico's proposal for a stats collection
tool[7].

Stable Core team process change
---------------------------------------
* The current proposal is under review[8]. Feel free to provide early
feedback if you have any.
Call for 'Technical Writing' SIG Chair/Maintainers
----------------------------------------------------------
* As you might have read in the email from Elod[9], Stephen, the current
chair of this SIG, is not planning to continue as chair.
* This SIG has accomplished the work it was formed for, and most of the
documentation is now managed on the project side. The TC agreed to move
this SIG to the completed state and move the repos under the TC (TC
members will be added to the core group of those repos).
* Any advisory work on documentation, which is what this SIG was doing,
will be handled in the TC.

TC tags analysis
-------------------
* Operator feedback has also been requested in the Open Infra newsletter,
and we will continue the discussion in PTG and take the final decision
based on the feedback we receive, if any[10].

Project updates
-------------------
* Retiring js-openstack-lib [11]

Yoga release community-wide goal
-----------------------------------------
* Please add possible candidates to this etherpad [12].
* Current status: "Secure RBAC" is selected for the Yoga cycle[13].

PTG planning
----------------
* We are collecting the PTG topics in the etherpad[14]; feel free to add
any topic you would like to discuss.
* We discussed live-streaming one of the TC PTG sessions like we did last
time. Once we have more topics in the etherpad, we can select the
appropriate one.

Test support for TLS default:
----------------------------------
* Rico has started a separate email thread about testing with tls-proxy
enabled[15]; we encourage projects to participate in that testing and help
to enable tls-proxy in gate testing.

4. How to contact the TC:
====================
If you would like to discuss or give feedback to the TC, you can reach out
to us in multiple ways:
1. Email: you can send email with the tag [tc] on the openstack-discuss
ML[16].
2. Weekly meeting: The Technical Committee conducts a weekly meeting every
Thursday at 15 UTC [17]
3. Office hours: The Technical Committee offers a weekly office hour every
Tuesday at 0100 UTC [18]
4. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://review.opendev.org/c/openstack/governance/+/809011
[3] https://review.opendev.org/c/openstack/governance/+/808679
[4] https://etherpad.opendev.org/p/tc-xena-tracke
[5] https://review.opendev.org/q/projects:openstack/governance+status:open
[6] https://etherpad.opendev.org/p/elk-service-maintenance-plan
[7] https://review.opendev.org/c/openstack/governance/+/810037
[8] https://review.opendev.org/c/openstack/governance/+/810721
[9] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025161.html
[10] https://governance.openstack.org/tc/reference/tags/index.html
[11] https://review.opendev.org/c/openstack/governance/+/798540
[12] https://review.opendev.org/c/openstack/governance/+/807163
[13] https://etherpad.opendev.org/p/y-series-goals
[14] https://etherpad.opendev.org/p/tc-yoga-ptg
[15] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html
[16] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[17] http://eavesdrop.openstack.org/#Technical_Committee_Meeting
[18] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours

-gmann

From ignaziocassano at gmail.com Fri Oct 8 18:09:28 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Fri, 8 Oct 2021 20:09:28 +0200
Subject: Open stack Ansible 23.1.0.dev35 VM launching errors
In-Reply-To: 
References: 
Message-ID: 

Hello, I solved my problem. Since I have multiple cinder backends, I had
to set the following in the [DEFAULT] section of cinder.conf:

scheduler_default_filters = DriverFilter

That solved it.
Ignazio

On Fri, Oct 8, 2021 at 16:59 Laurent Dumont < laurentfdumont at gmail.com>
wrote:

> These are the nova-compute logs but I think it just catches the error from
> the neutron component. Any logs from neutron-server, ovs-agent,
> libvirt-agent?
>
> Can you share the "openstack network show NETWORK_ID_HERE" of the network
> you are attaching the VM to?
>
> On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb wrote:
>
>> Hi,
>> This is the log i am getting while launching a new vm
>>
>>
>> Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324
>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0
>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default
>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds
>> to destroy the instance on the hypervisor.
>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324
>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0
>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default
>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds
>> to detach 1 volumes for instance.
>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to >> allocate network(s): nova.exception.VirtualInterfaceCreateException: >> Virtual Interface creation failed >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Traceback (most recent call last): >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7235, in _create_guest_with_network >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> post_xml_callback=post_xml_callback) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> next(self.gen) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 479, in wait_for_instance_event >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> actual_event = event.wait() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >> line 125, in wait >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> result = hub.switch() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >> line 313, in switch >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> return self.greenlet.switch() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> eventlet.timeout.Timeout: 300 seconds >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> During handling of the above exception, another exception occurred: >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Traceback (most recent call last): >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 2397, in _build_and_run_instance >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> accel_info=accel_info) >> 2021-10-08 19:11:21.562 7324 >> ERROR 
nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 4200, in spawn >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> cleanup_instance_disks=created_disks) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7258, in _create_guest_with_network >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> raise exception.VirtualInterfaceCreateException() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> nova.exception.VirtualInterfaceCreateException: Virtual Interface creation >> failed >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 >> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >> network(s), not rescheduling.: nova.exception.BuildAbortException: Build of >> instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate >> the network(s), not rescheduling. >> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 >> INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Successfully unplugged vif >> VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds >> to deallocate network for instance. >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume >> 07041181-318b-4fae-b71e-02ac7b11bca3 >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 >> ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call >> for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to >> the instance being registered to the remote host None.: >> nova.exception.BuildAbortException: Build of instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >> network(s), not rescheduling. 
>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 >> ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Delete attachment failed for attachment >> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be >> found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. >> (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: >> 404: cinderclient.exceptions.NotFound: Volume attachment could not be found >> with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP >> 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 >> WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due >> to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be >> found.: nova.exception.VolumeAttachmentNotFound: Volume attachment >> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. >> Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 >> INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Deleted allocation for instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 >> Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 >> INFO nova.compute.manager [-] [instance: >> 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) >> Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 >> WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 >> >> >> Thanks & Regards >> Midhunlal N B >> >> >> >> On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont >> wrote: >> >>> There are essentially two types of networks, vlan and vxlan, that can be >>> attached to a VM. Ideally, you want to look at the logs on the controllers >>> and the compute node. >>> >>> Openstack-ansible seems to send stuff here >>> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >>> . >>> >>> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >>> wrote: >>> >>>> Hi Laurent, >>>> Thank you very much for your reply.we configured our network as per >>>> official document .Please take a look at below details. >>>> --->Controller node configured with below interfaces >>>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>>> >>>> ---> Compute node >>>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>>> >>>> I don't have much more experience in openstack,I think here we used >>>> vlan network. >>>> >>>> Thanks & Regards >>>> Midhunlal N B >>>> +918921245637 >>>> >>>> >>>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont >>>> wrote: >>>> >>>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>>> agent logs on the compute. The error returned from the VM creation is not >>>>> useful most of the time. >>>>> >>>>> Was this a vxlan or vlan network? >>>>> >>>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>>> wrote: >>>>> >>>>>> Hi team, >>>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>>> --->I logged in to horizon and created a new network and launched a >>>>>> vm but I am getting an error. 
>>>>>> >>>>>> Error: Failed to perform requested operation on instance "hope", the >>>>>> instance has an error status: Please try again later [Error: Build of >>>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>>> the network(s), not rescheduling.]. >>>>>> >>>>>> -->Then I checked log >>>>>> >>>>>> | fault | {'code': 500, 'created': >>>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>>> last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 7235, in _create_guest_with_network\n >>>>>> post_xml_callback=post_xml_callback)\n File >>>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>>> next(self.gen)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>>> File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>>> line 125, in wait\n result = hub.switch()\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>>> line 313, in switch\n return >>>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>> (most recent call last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>>> File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 7258, in _create_guest_with_network\n raise >>>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>>> request_spec, accel_uuids)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2458, in _build_and_run_instance\n >>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>> network(s), not rescheduling.\n'} | >>>>>> >>>>>> Please help me with this error. >>>>>> >>>>>> >>>>>> Thanks & Regards >>>>>> Midhunlal N B >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 21:00:12 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 17:00:12 -0400 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: I think that's on the wrong thread Ignazio :D On Fri, Oct 8, 2021 at 2:09 PM Ignazio Cassano wrote: > Hello, I soved my problem. 
> Since I have multiple cinder backends I had to set > scheduler_default_filters = DriverFilter > in default section of cinder.conf > This solved. > Ignazio > > Il giorno ven 8 ott 2021 alle ore 16:59 Laurent Dumont < > laurentfdumont at gmail.com> ha scritto: > >> These are the nova-compute logs but I think it just catches the error >> from the neutron component. Any logs from neutron-server, ovs-agent, >> libvirt-agent? >> >> Can you share the "openstack network show NETWORK_ID_HERE" of the network >> you are attaching the VM to? >> >> On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb >> wrote: >> >>> Hi, >>> This is the log i am getting while launching a new vm >>> >>> >>> Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds >>> to destroy the instance on the hypervisor. >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds >>> to detach 1 volumes for instance. >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to >>> allocate network(s): nova.exception.VirtualInterfaceCreateException: >>> Virtual Interface creation failed >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Traceback (most recent call last): >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7235, in _create_guest_with_network >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> post_xml_callback=post_xml_callback) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> next(self.gen) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 479, in wait_for_instance_event >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> actual_event = event.wait() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>> line 125, in wait >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> result = hub.switch() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 
364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>> line 313, in switch >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> return self.greenlet.switch() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> eventlet.timeout.Timeout: 300 seconds >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> During handling of the above exception, another exception occurred: >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Traceback (most recent call last): >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2397, in _build_and_run_instance >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> accel_info=accel_info) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 4200, in spawn >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> cleanup_instance_disks=created_disks) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7258, in _create_guest_with_network >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> raise exception.VirtualInterfaceCreateException() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> nova.exception.VirtualInterfaceCreateException: Virtual Interface creation >>> failed >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 >>> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >>> network(s), not rescheduling.: nova.exception.BuildAbortException: Build of >>> instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate >>> the network(s), not rescheduling. 
>>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 >>> INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Successfully unplugged vif >>> VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds >>> to deallocate network for instance. >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume >>> 07041181-318b-4fae-b71e-02ac7b11bca3 >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 >>> ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call >>> for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to >>> the instance being registered to the remote host None.: >>> nova.exception.BuildAbortException: Build of instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >>> network(s), not rescheduling. >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 >>> ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Delete attachment failed for attachment >>> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be >>> found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. >>> (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: >>> 404: cinderclient.exceptions.NotFound: Volume attachment could not be found >>> with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP >>> 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 >>> WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due >>> to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be >>> found.: nova.exception.VolumeAttachmentNotFound: Volume attachment >>> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. 
>>> Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 >>> INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Deleted allocation for instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 >>> Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 >>> INFO nova.compute.manager [-] [instance: >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) >>> Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 >>> WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 >>> >>> >>> Thanks & Regards >>> Midhunlal N B >>> >>> >>> >>> On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont >>> wrote: >>> >>>> There are essentially two types of networks, vlan and vxlan, that can >>>> be attached to a VM. Ideally, you want to look at the logs on the >>>> controllers and the compute node. >>>> >>>> Openstack-ansible seems to send stuff here >>>> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >>>> . >>>> >>>> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >>>> wrote: >>>> >>>>> Hi Laurent, >>>>> Thank you very much for your reply.we configured our network as per >>>>> official document .Please take a look at below details. >>>>> --->Controller node configured with below interfaces >>>>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>>>> >>>>> ---> Compute node >>>>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>>>> >>>>> I don't have much more experience in openstack,I think here we used >>>>> vlan network. >>>>> >>>>> Thanks & Regards >>>>> Midhunlal N B >>>>> +918921245637 >>>>> >>>>> >>>>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont < >>>>> laurentfdumont at gmail.com> wrote: >>>>> >>>>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>>>> agent logs on the compute. The error returned from the VM creation is not >>>>>> useful most of the time. >>>>>> >>>>>> Was this a vxlan or vlan network? >>>>>> >>>>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>>>> wrote: >>>>>> >>>>>>> Hi team, >>>>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>>>> --->I logged in to horizon and created a new network and launched a >>>>>>> vm but I am getting an error. >>>>>>> >>>>>>> Error: Failed to perform requested operation on instance "hope", the >>>>>>> instance has an error status: Please try again later [Error: Build of >>>>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>>>> the network(s), not rescheduling.]. 
>>>>>>> >>>>>>> -->Then I checked log >>>>>>> >>>>>>> | fault | {'code': 500, 'created': >>>>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>>>> last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 7235, in _create_guest_with_network\n >>>>>>> post_xml_callback=post_xml_callback)\n File >>>>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>>>> next(self.gen)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>>>> File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>>>> line 125, in wait\n result = hub.switch()\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>>>> line 313, in switch\n return >>>>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>> (most recent call last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>>>> File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 7258, in _create_guest_with_network\n raise >>>>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>>>> request_spec, accel_uuids)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2458, in _build_and_run_instance\n >>>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>>> network(s), not rescheduling.\n'} | >>>>>>> >>>>>>> Please help me with this error. >>>>>>> >>>>>>> >>>>>>> Thanks & Regards >>>>>>> Midhunlal N B >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 21:00:30 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 17:00:30 -0400 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: Pingback from other thread Does it work with a Netapp/Ontap NFS volume now? On Fri, Oct 8, 2021 at 10:18 AM Ignazio Cassano wrote: > I will try next week and write test results. > Thanks > > > Il giorno ven 8 ott 2021 alle ore 15:07 Laurent Dumont < > laurentfdumont at gmail.com> ha scritto: > >> You can try a few options to see if it helps. It might be a question of >> NFSv3 or V4 or the Netapp driver changes themselves. 
>> >> >> https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 >> >> On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano >> wrote: >> >>> Hello Laurent, >>> I am using nfs_mount_options = nfsvers=3,lookupcache=pos >>> I always use the above options. >>> I have this issue only with the last cinder images of wallaby >>> Thanks >>> Ignazio >>> >>> Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < >>> laurentfdumont at gmail.com> ha scritto: >>> >>>> What options are you using for the NFS client on the controllers side? >>>> There are some recommended settings that Netapp can provide. >>>> >>>> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Hello, >>>>> I've just updated my kolla wallaby with latest images. When I create >>>>> volume from image on ceph it works. >>>>> When I create volume from image on nfs netapp ontap, it does not work. >>>>> The following is reported in cinder-volume.log: >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>> line 950, in _create_from_image_cache_or_download >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>> model_update = self._create_from_image_download( >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>> line 766, in _create_from_image_download >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>>>> line 1158, in copy_image_to_volume >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>>>> exception.ImageCopyFailure(reason=ex.stderr) >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>>>> qemu-img: >>>>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>>>> error while converting raw: Failed to lock byte 101 >>>>> >>>>> Any help please ? >>>>> Ignazio >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraden at verisign.com Sat Oct 9 02:26:15 2021 From: abraden at verisign.com (Braden, Albert) Date: Sat, 9 Oct 2021 02:26:15 +0000 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Message-ID: Hello everyone. It's great to be back working on OpenStack again. I'm at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. Before applying the change, we see the DNS record in the recordset: $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 
| A | 10.220.4.89 | ACTIVE | NONE |
$

and we can pull it from the DNS server on the controllers:

$ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done
openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89
openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89
openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89

After applying the change, we don't see it:

$ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra
| f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE |
$
$ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done
$ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra
$

We see this in the logs:

2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'")
2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)]
2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}]

It appears that Designate is trying to create the new record before the deletion of the old one finishes. Is anyone else seeing this on Train? The same set of actions doesn't cause this error in Queens. Do we need to change something in our Designate config to make it wait until the old records are finished deleting before attempting to create the new ones?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ignaziocassano at gmail.com Sat Oct 9 08:07:28 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Sat, 9 Oct 2021 10:07:28 +0200
Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images
In-Reply-To: References: Message-ID:

Yes, I removed the cinder containers.
I deployed cinder again.
After the above procedure, the netapp volume error changed to: cannot find valid backend.
So after the reinstallation I was not able to create netapp volumes at all.
I inserted the parameter I mentioned in my previous email, and now I am able to create both empty and from-image netapp volumes.
Ignazio

Il Ven 8 Ott 2021, 23:00 Laurent Dumont ha scritto:
> Pingback from other thread
>
> Does it work with a Netapp/Ontap NFS volume now?
>>> >>> >>> https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 >>> >>> On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano >>> wrote: >>> >>>> Hello Laurent, >>>> I am using nfs_mount_options = nfsvers=3,lookupcache=pos >>>> I always use the above options. >>>> I have this issue only with the last cinder images of wallaby >>>> Thanks >>>> Ignazio >>>> >>>> Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < >>>> laurentfdumont at gmail.com> ha scritto: >>>> >>>>> What options are you using for the NFS client on the controllers side? >>>>> There are some recommended settings that Netapp can provide. >>>>> >>>>> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> Hello, >>>>>> I've just updated my kolla wallaby with latest images. When I create >>>>>> volume from image on ceph it works. >>>>>> When I create volume from image on nfs netapp ontap, it does not work. >>>>>> The following is reported in cinder-volume.log: >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>>> line 950, in _create_from_image_cache_or_download >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>> model_update = self._create_from_image_download( >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>>> line 766, in _create_from_image_download >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>>>>> line 1158, in copy_image_to_volume >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>>>>> exception.ImageCopyFailure(reason=ex.stderr) >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>>>>> qemu-img: >>>>>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>>>>> error while converting raw: Failed to lock byte 101 >>>>>> >>>>>> Any help please ? >>>>>> Ignazio >>>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Oct 9 08:12:37 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 9 Oct 2021 10:12:37 +0200 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: Sorry, I sent the solution in a wrong thread. I solved inserting default_filter=Driver_Filters in cinder.conf. Probably it solved because I have both netapp and ceph backends. Ignazio Il Sab 9 Ott 2021, 10:07 Ignazio Cassano ha scritto: > Yes, I removed cinder containers. > I deployed cinder again. > After the above procedure, netapp volume changed the error: cannot find > valid backend. > So after the reinstallarion I was not able to create netapp volume at all. > I inserted the parameter I mentioned in my previous email, and now I am > able to create both empty and from image netapp volumes. > Ignazio > > > > Il Ven 8 Ott 2021, 23:00 Laurent Dumont ha > scritto: > >> Pingback from other thread >> >> Does it work with a Netapp/Ontap NFS volume now? 
>> >> On Fri, Oct 8, 2021 at 10:18 AM Ignazio Cassano >> wrote: >> >>> I will try next week and write test results. >>> Thanks >>> >>> >>> Il giorno ven 8 ott 2021 alle ore 15:07 Laurent Dumont < >>> laurentfdumont at gmail.com> ha scritto: >>> >>>> You can try a few options to see if it helps. It might be a question of >>>> NFSv3 or V4 or the Netapp driver changes themselves. >>>> >>>> >>>> https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 >>>> >>>> On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Hello Laurent, >>>>> I am using nfs_mount_options = nfsvers=3,lookupcache=pos >>>>> I always use the above options. >>>>> I have this issue only with the last cinder images of wallaby >>>>> Thanks >>>>> Ignazio >>>>> >>>>> Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < >>>>> laurentfdumont at gmail.com> ha scritto: >>>>> >>>>>> What options are you using for the NFS client on the controllers >>>>>> side? There are some recommended settings that Netapp can provide. >>>>>> >>>>>> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> Hello, >>>>>>> I've just updated my kolla wallaby with latest images. When I create >>>>>>> volume from image on ceph it works. >>>>>>> When I create volume from image on nfs netapp ontap, it does not >>>>>>> work. >>>>>>> The following is reported in cinder-volume.log: >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>>>> line 950, in _create_from_image_cache_or_download >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>>> model_update = self._create_from_image_download( >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>>>> line 766, in _create_from_image_download >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>>>>>> line 1158, in copy_image_to_volume >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>>>>>> exception.ImageCopyFailure(reason=ex.stderr) >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>>>>>> qemu-img: >>>>>>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>>>>>> error while converting raw: Failed to lock byte 101 >>>>>>> >>>>>>> Any help please ? >>>>>>> Ignazio >>>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Oct 9 09:45:15 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 9 Oct 2021 11:45:15 +0200 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: I am sorry. Ignazio Il Ven 8 Ott 2021, 20:09 Ignazio Cassano ha scritto: > Hello, I soved my problem. > Since I have multiple cinder backends I had to set > scheduler_default_filters = DriverFilter > in default section of cinder.conf > This solved. 
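As a concrete sketch of the fix quoted just above, the relevant cinder.conf fragment would look roughly like this. The backend section names and drivers (ceph-1, netapp-1) are illustrative placeholders rather than details from the thread; only the scheduler_default_filters line reflects the change actually described:

[DEFAULT]
# With several enabled_backends, the DriverFilter lets each volume
# driver evaluate whether it can host the requested volume.
scheduler_default_filters = DriverFilter
enabled_backends = ceph-1, netapp-1

[ceph-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-1

[netapp-1]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp-1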
> Ignazio > > Il giorno ven 8 ott 2021 alle ore 16:59 Laurent Dumont < > laurentfdumont at gmail.com> ha scritto: > >> These are the nova-compute logs but I think it just catches the error >> from the neutron component. Any logs from neutron-server, ovs-agent, >> libvirt-agent? >> >> Can you share the "openstack network show NETWORK_ID_HERE" of the network >> you are attaching the VM to? >> >> On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb >> wrote: >> >>> Hi, >>> This is the log i am getting while launching a new vm >>> >>> >>> Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds >>> to destroy the instance on the hypervisor. >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds >>> to detach 1 volumes for instance. >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to >>> allocate network(s): nova.exception.VirtualInterfaceCreateException: >>> Virtual Interface creation failed >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Traceback (most recent call last): >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7235, in _create_guest_with_network >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> post_xml_callback=post_xml_callback) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> next(self.gen) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 479, in wait_for_instance_event >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> actual_event = event.wait() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>> line 125, in wait >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> result = hub.switch() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>> line 313, in switch >>> 2021-10-08 
19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> return self.greenlet.switch() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> eventlet.timeout.Timeout: 300 seconds >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> During handling of the above exception, another exception occurred: >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Traceback (most recent call last): >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2397, in _build_and_run_instance >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> accel_info=accel_info) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 4200, in spawn >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> cleanup_instance_disks=created_disks) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7258, in _create_guest_with_network >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> raise exception.VirtualInterfaceCreateException() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> nova.exception.VirtualInterfaceCreateException: Virtual Interface creation >>> failed >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 >>> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >>> network(s), not rescheduling.: nova.exception.BuildAbortException: Build of >>> instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate >>> the network(s), not rescheduling. 
>>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 >>> INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Successfully unplugged vif >>> VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds >>> to deallocate network for instance. >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume >>> 07041181-318b-4fae-b71e-02ac7b11bca3 >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 >>> ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call >>> for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to >>> the instance being registered to the remote host None.: >>> nova.exception.BuildAbortException: Build of instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >>> network(s), not rescheduling. >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 >>> ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Delete attachment failed for attachment >>> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be >>> found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. >>> (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: >>> 404: cinderclient.exceptions.NotFound: Volume attachment could not be found >>> with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP >>> 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 >>> WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due >>> to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be >>> found.: nova.exception.VolumeAttachmentNotFound: Volume attachment >>> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. 
>>> Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 >>> INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Deleted allocation for instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 >>> Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 >>> INFO nova.compute.manager [-] [instance: >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) >>> Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 >>> WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 >>> >>> >>> Thanks & Regards >>> Midhunlal N B >>> >>> >>> >>> On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont >>> wrote: >>> >>>> There are essentially two types of networks, vlan and vxlan, that can >>>> be attached to a VM. Ideally, you want to look at the logs on the >>>> controllers and the compute node. >>>> >>>> Openstack-ansible seems to send stuff here >>>> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >>>> . >>>> >>>> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >>>> wrote: >>>> >>>>> Hi Laurent, >>>>> Thank you very much for your reply.we configured our network as per >>>>> official document .Please take a look at below details. >>>>> --->Controller node configured with below interfaces >>>>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>>>> >>>>> ---> Compute node >>>>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>>>> >>>>> I don't have much more experience in openstack,I think here we used >>>>> vlan network. >>>>> >>>>> Thanks & Regards >>>>> Midhunlal N B >>>>> +918921245637 >>>>> >>>>> >>>>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont < >>>>> laurentfdumont at gmail.com> wrote: >>>>> >>>>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>>>> agent logs on the compute. The error returned from the VM creation is not >>>>>> useful most of the time. >>>>>> >>>>>> Was this a vxlan or vlan network? >>>>>> >>>>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>>>> wrote: >>>>>> >>>>>>> Hi team, >>>>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>>>> --->I logged in to horizon and created a new network and launched a >>>>>>> vm but I am getting an error. >>>>>>> >>>>>>> Error: Failed to perform requested operation on instance "hope", the >>>>>>> instance has an error status: Please try again later [Error: Build of >>>>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>>>> the network(s), not rescheduling.]. 
>>>>>>> >>>>>>> -->Then I checked log >>>>>>> >>>>>>> | fault | {'code': 500, 'created': >>>>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>>>> last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 7235, in _create_guest_with_network\n >>>>>>> post_xml_callback=post_xml_callback)\n File >>>>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>>>> next(self.gen)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>>>> File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>>>> line 125, in wait\n result = hub.switch()\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>>>> line 313, in switch\n return >>>>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>> (most recent call last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>>>> File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 7258, in _create_guest_with_network\n raise >>>>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>>>> request_spec, accel_uuids)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2458, in _build_and_run_instance\n >>>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>>> network(s), not rescheduling.\n'} | >>>>>>> >>>>>>> Please help me with this error. >>>>>>> >>>>>>> >>>>>>> Thanks & Regards >>>>>>> Midhunlal N B >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From anyrude10 at gmail.com Fri Oct 8 11:18:01 2021
From: anyrude10 at gmail.com (Anirudh Gupta)
Date: Fri, 8 Oct 2021 16:48:01 +0530
Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command
Message-ID:

Hi Team,

I am installing TripleO using the below link:

https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html

In the Introspect section, when I executed the command

openstack tripleo validator run --group pre-introspection

I got the following error:

+--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
| UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration |
+--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
| 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu | PASSED | localhost | localhost | | 0:00:01.261 |
| edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space | PASSED | localhost | localhost | | 0:00:04.480 |
| 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram | PASSED | localhost | localhost | | 0:00:02.173 |
| c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode | PASSED | localhost | localhost | | 0:00:01.546 |
| 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway | FAILED | undercloud | No host matched | | |
| 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space | FAILED | undercloud | No host matched | | |
| 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check | FAILED | undercloud | No host matched | | |
| e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range | FAILED | undercloud | No host matched | | |
| a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection | FAILED | undercloud | No host matched | | |
| 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush | FAILED | undercloud | No host matched | | |
+--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+

Then I created the following inventory file:

[Undercloud]
undercloud

I passed this file while running the pre-introspection command, and it then executed successfully.
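For reference, assuming the file above is saved as inventory.ini next to where the command is run, the successful invocation would look roughly like the line below. The --inventory option name and the file name are illustrative assumptions here, not taken from the message or the upstream guide:

$ openstack tripleo validator run --group pre-introspection --inventory inventory.ini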
But with Pre-deployment, it is still failing even after passing the inventory +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e | PASSED | localhost | localhost | | 0:00:00.504 | | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns | PASSED | localhost | localhost | | 0:00:00.481 | | 93611c13-49a2-4cae-ad87-099546459481 | service-status | PASSED | all | undercloud | | 0:00:06.942 | | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux | PASSED | all | undercloud | | 0:00:02.433 | | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version | FAILED | all | undercloud | | 0:00:03.576 | | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed | PASSED | undercloud | undercloud | | 0:00:02.850 | | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed | FAILED | allovercloud | No host matched | | | | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment | FAILED | undercloud | undercloud | | 0:00:31.559 | | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug | FAILED | undercloud | undercloud | | 0:00:02.057 | | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud | | 0:00:00.884 | | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted | FAILED | undercloud | undercloud | | 0:00:02.138 | | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count | PASSED | undercloud | undercloud | | 0:00:06.164 | | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count | FAILED | undercloud | undercloud | | 0:00:00.934 | | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning | FAILED | undercloud | undercloud | | 0:00:02.456 | | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration | FAILED | undercloud | undercloud | | 0:00:00.882 | | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment | FAILED | undercloud | undercloud | | 0:00:00.880 | | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks | FAILED | undercloud | undercloud | | 0:00:01.934 | | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans | FAILED | undercloud | undercloud | | 0:00:01.931 | | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding | PASSED | all | undercloud | | 0:00:00.366 | +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ Also this step of passing the inventory file is not mentioned anywhere in the document. Is there anything I am missing? Regards Anirudh Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmeng at uvic.ca Fri Oct 8 17:36:27 2021 From: dmeng at uvic.ca (dmeng) Date: Fri, 08 Oct 2021 10:36:27 -0700 Subject: [sdk]: Remove volumes stuck in error deleting status Message-ID: Hello there, Hope everything is going well. I would like to know if there is any method that could remove the volume stuck in the "error-deleting" status? We are using the chameleon openstack cloud but we are not the admin user there, so couldn't use the "cinder force-delete" or "cinder reset-state" command. 
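(For reference, a minimal openstacksdk sketch of the delete path being discussed — the cloud name below is a hypothetical clouds.yaml entry:

  import openstack

  # Connect using a named cloud from clouds.yaml (the name is illustrative).
  conn = openstack.connect(cloud='chameleon')

  vol = conn.block_storage.get_volume('VOLUME_UUID')
  print(vol.status)  # a stuck volume typically reports 'error_deleting'

  # The normal deletion call referred to above; raises if the volume is already gone.
  conn.block_storage.delete_volume(vol, ignore_missing=False)

Clearing an "error_deleting" volume generally requires cinder's reset-state or force-delete actions, which are admin-only, so a plain project user will keep being rejected no matter how the call is issued.)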
Wondering if there is any other method we could use to remove those volumes in our own project? And also wondering what might cause this "error-deleting" problem? We use the openstacksdk block storage service, "cinder.delete_volume()" method, to remove volumes, and it worked fine before. Thanks and have a great day! Catherine -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuscoyu at gmail.com Sat Oct 9 12:19:17 2021 From: fuscoyu at gmail.com (fusco lu) Date: Sat, 9 Oct 2021 20:19:17 +0800 Subject: Why is kolla-kubernetes not maintained anymore? Message-ID: Hi everyone, Can you tell me the reason for it not being maintained? -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sat Oct 9 21:17:56 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 9 Oct 2021 17:17:56 -0400 Subject: [sdk]: Remove volumes stuck in error deleting status In-Reply-To: References: Message-ID: Is this the NSF Openstack cloud? Usually, if a delete fails, you'll need to get an Openstack admin to have a look. It's not a good sign most of the time. On Sat, Oct 9, 2021 at 5:12 PM dmeng wrote: > Hello there, > > Hope everything is going well. > > I would like to know if there is any method that could remove the volume > stuck in the "error-deleting" status? We are using the chameleon openstack > cloud but we are not the admin user there, so couldn't use the "cinder > force-delete" or "cinder reset-state" command. Wondering if there is any > other method we could use to remove those volumes in our own project? And > also wondering what might cause this "error-deleting" problem? We use > the openstacksdk block storage service, "cinder.delete_volume()" method, to > remove volumes, and it worked fine before. > > Thanks and have a great day! > Catherine > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manchandavishal143 at gmail.com Mon Oct 11 03:11:22 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Mon, 11 Oct 2021 08:41:22 +0530 Subject: [Horizon] Yoga PTG Schedule Message-ID: Hello everyone, I have booked the below slots for the Horizon Yoga PTG: Monday, October 18, 15:00 - 17:00 UTC Tuesday, October 19, 15:00 - 17:00 UTC Wednesday, October 20, 16:00 - 17:00 UTC I have also created an Etherpad to collect topics for PTG discussion [1]. Feel free to add your topics. Please let me know if you have topics to discuss and the above schedule doesn't work for you, and we will see how to manage that. Also, please register for the PTG if you haven't done it yet [2]. Thank you, Vishal Manchanda (irc: vishalmanchanda) [1] https://etherpad.opendev.org/p/horizon-yoga-ptg [2] https://www.eventbrite.com/e/project-teams-gathering-october-2021-tickets-161235669227 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Mon Oct 11 04:39:39 2021 From: sorrison at gmail.com (Sam Morrison) Date: Mon, 11 Oct 2021 15:39:39 +1100 Subject: [kolla] parent tags In-Reply-To: <19227A29-3F33-4EF3-B68B-AC6ABF87FB2B@uchicago.edu> References: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> <19227A29-3F33-4EF3-B68B-AC6ABF87FB2B@uchicago.edu> Message-ID: Thanks Jason and Mark, I think just adding another tag at the end of the build process is what we are going to do. On a related note, does anyone have any tips on how to version a horizon container, given that it has multiple repos inside? E.g.
We have the source for horizon and then the source for each plugin, which have different versions. With Debian they are all separate debs and installed differently with separate versions, which makes tracking things really easy. In the container world it makes it a bit harder. I'm thinking we need to have our kolla-build.conf specify specific git refs and then, when we update this file, incorporate that somehow into the versioning. Sam > On 8 Oct 2021, at 2:47 am, Jason Anderson wrote: > > Sam, I think Mark's idea is in general stronger than what I will describe, if all you're after is different aliases. It sounds like you are trying to iterate on two images (Barbican and Nova), presumably changing the source of the former frequently, and don't want to build the entire ancestor chain each time. > > I had to do something similar because we have a fork of Horizon we work on a lot. Here is my hacky solution: https://github.com/ChameleonCloud/kolla/commit/79611111c03cc86be91a86a9ccd296abc7aa3a3e > > We are on Train w/ some other Kolla forks so I can't guarantee that will apply cleanly, but it's a small change. It involves adding build-args to some Dockerfiles, in your case I suppose barbican-base, but also nova-base. It's a bit clunky but gets the job done for us. > > /Jason > >> On Oct 7, 2021, at 3:41 AM, Mark Goddard > wrote: >> >> Hi Sam, >> >> I don't generally do that, and Kolla isn't really set up to make it >> easy. You could tag the base containers with the new tag: >> >> docker pull <image-prefix>-base:wallaby >> docker tag <image-prefix>-base:wallaby <image-prefix>-base:<new-tag> >> >> Mark >> >> On Thu, 7 Oct 2021 at 03:34, Sam Morrison > wrote: >>> >>> I'm trying to be able to build a project's container without having to rebuild the parents, which have different tags. >>> >>> The workflow I'm trying to achieve is: >>> >>> Build base and openstack-base with a tag of wallaby >>> >>> Build a container image for barbican with a tag of the version of barbican that is returned when doing `git describe` >>> Build a container image for nova with a tag of the version of nova that is returned when doing `git describe` >>> etc. etc. >>> >>> I don't seem to be able to do this without having to also build a new base and openstack-base with the same tag, which is slow and also means a lot of disk space. >>> >>> Just wondering how other people do this sort of stuff? >>> Any ideas? >>> >>> Thanks, >>> Sam >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Oct 11 08:13:13 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 11 Oct 2021 09:13:13 +0100 Subject: [kolla] parent tags In-Reply-To: References: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> <19227A29-3F33-4EF3-B68B-AC6ABF87FB2B@uchicago.edu> Message-ID: On Mon, 11 Oct 2021 at 05:39, Sam Morrison wrote: > > Thanks Jason and Mark, > > I think just adding another tag at the end of the build process is what we are going to do. > > On a related note, does anyone have any tips on how to version a horizon container, given that it has multiple repos inside? > > E.g. We have the source for horizon and then the source for each plugin which have different versions. > With Debian they are all separate debs and installed differently with separate versions, which makes tracking things really easy. > > In the container world it makes it a bit harder. > I'm thinking we need to have our kolla-build.conf specify specific git refs and then when we update this file incorporate that somehow into the versioning.
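(To make the kolla-build.conf idea above concrete, a sketch of the per-image source pinning kolla supports for source builds — the plugin section name and refs here are illustrative assumptions, so check the section names your kolla release actually defines:

  [horizon]
  type = git
  location = https://opendev.org/openstack/horizon
  reference = stable/wallaby

  [horizon-plugin-heat-dashboard]
  type = git
  location = https://opendev.org/openstack/heat-dashboard
  reference = 5.0.0

Recording those refs alongside the image tag is one way to keep a multi-repo image traceable.)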
This is probably one reason why kolla doesn't do it this way - there isn't always a single versioned thing that's being deployed. Every service has dependencies. In this case I'd suggest going with the version of horizon. > > Sam > > > > > On 8 Oct 2021, at 2:47 am, Jason Anderson wrote: > > Sam, I think Mark?s idea is in general stronger than what I will describe, if all you?re after is different aliases. It sounds like you are trying to iterate on two images (Barbican and Nova), presumably changing the source of the former frequently, and don?t want to build the entire ancestor chain each time. > > I had to do something similar because we have a fork of Horizon we work on a lot. Here is my hacky solution: https://github.com/ChameleonCloud/kolla/commit/79611111c03cc86be91a86a9ccd296abc7aa3a3e > > We are on Train w/ some other Kolla forks so I can?t guarantee that will apply cleanly, but it?s a small change. It involves adding build-args to some Dockerfiles, in your case I suppose barbican-base, but also nova-base. It?s a bit clunky but gets the job done for us. > > /Jason > > On Oct 7, 2021, at 3:41 AM, Mark Goddard wrote: > > Hi Sam, > > I don't generally do that, and Kolla isn't really set up to make it > easy. You could tag the base containers with the new tag: > > docker pull -base:wallaby > docker tag -base:wallaby -base: > > Mark > > On Thu, 7 Oct 2021 at 03:34, Sam Morrison wrote: > > > I?m trying to be able to build a projects container without having to rebuild the parents which have different tags. > > The workflow I?m trying to achieve is: > > Build base and openstack-base with a tag of wallaby > > Build a container image for barbican with a tag of the version of barbican that is returned when doing `git describe` > Build a container image for nova with a tag of the version of barbican that is returned when doing `git describe` > etc.etc. > > I don?t seem to be able to do this without having to also build a new base and openstack-base with the same tag which is slow and also means a lot of disk space. > > Just wondering how other people do this sort of stuff? > Any ideas? > > Thanks, > Sam > > > > > > From tjoen at dds.nl Mon Oct 11 08:49:38 2021 From: tjoen at dds.nl (tjoen) Date: Mon, 11 Oct 2021 10:49:38 +0200 Subject: [Xena] It works! Message-ID: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> Just testing every release since Train on an LFS system with Python-3.9 cryptography-35.0.0 is necessary Thank you all From ralonsoh at redhat.com Mon Oct 11 13:30:44 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 11 Oct 2021 15:30:44 +0200 Subject: [neutron] Bug deputy, report of week 2021-10-4 Message-ID: Hello Neutrinos: This is the last week report: High: - https://bugs.launchpad.net/neutron/+bug/1946187: HA routers not going to be "primary" at all. Unassigned. - https://bugs.launchpad.net/neutron/+bug/1931696. ovs offload broken from neutron 16.3.0 onwards. Assigned. - https://review.opendev.org/c/openstack/neutron/+/812641 - https://bugs.launchpad.net/neutron/+bug/1946318: [ovn] Memory consumption grows over time due to MAC_Binding entries in SB database. Assigned. - https://review.opendev.org/c/openstack/neutron/+/812805 - https://bugs.launchpad.net/neutron/+bug/1946456: [OVN] Scheduling of HA Chassis Group for external port does not work when no chassis has 'enable-chassis-as-gw' option set. Unassigned. 
- https://bugs.launchpad.net/neutron/+bug/1946588: [OVN]Metadata get warn logs after boot instance server about "MetadataServiceReadyWaitTimeoutException". Assigned. - https://review.opendev.org/c/openstack/neutron/+/813376 Medium: - https://bugs.launchpad.net/neutron/+bug/1946186: Fullstack test neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_router_fip_qos_after_admin_state_down_up failing intermittently. Unassigned. - https://bugs.launchpad.net/neutron/+bug/1946479: [OVN migration] qr- interfaces and trunk subports aren't cleaned after migration to ML2/OVN. Assigned. - https://review.opendev.org/c/openstack/neutron/+/813186 - https://review.opendev.org/c/openstack/neutron/+/813187 - https://bugs.launchpad.net/neutron/+bug/1946589: [OVN] localport might not be updated when create multiple subnets for its network. Unassigned. Low: - https://bugs.launchpad.net/neutron/+bug/1945954: [os-ken] Missing subclass for SUBTYPE_RIB_*_MULTICAST in mrtlib. Assigned. - https://review.opendev.org/c/openstack/os-ken/+/812293 - https://bugs.launchpad.net/neutron/+bug/1946023: [OVN] Check OVN Port_Group compatibility. Assigned. - https://review.opendev.org/c/openstack/neutron/+/812176 - https://bugs.launchpad.net/neutron/+bug/1946250: Neutron API reference should explain the intended behavior of port security extension. Unassigned. Wishlist: - https://bugs.launchpad.net/neutron/+bug/1946251: [RFE] API: allow to disable anti-spoofing but not SGs. Assigned. Duplicated: - https://bugs.launchpad.net/neutron/+bug/1945646: Nova fails to live migrate instance with upper-case port MAC. Incomplete: - https://bugs.launchpad.net/neutron/+bug/1946535: Segment plugin disabled delete network will raise exception. - Maybe the 'segments' plugin is loaded in this deployment. - https://bugs.launchpad.net/neutron/+bug/1946624: OVSDB Error: Transaction causes multiple rows in "Port_Group" table to have identical values. - Maybe a duplicate of https://bugs.launchpad.net/neutron/+bug/1938766. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonykarera at gmail.com Mon Oct 11 13:35:06 2021 From: tonykarera at gmail.com (Karera Tony) Date: Mon, 11 Oct 2021 15:35:06 +0200 Subject: Restarting Openstack Victoria using kolla-ansible Message-ID: Hello Team, I am trying to deploy OpenStack Victoria. Before deploying, I install kolla-ansible on the deployment server; I first clone it with *git clone --branch stable/victoria https://opendev.org/openstack/kolla-ansible*. But when I run the deployment without uncommenting openstack_release, it deploys Wallaby by default. And when I uncomment it and set it to victoria, some of the containers keep restarting, especially Horizon. Any idea on how to resolve this? Even the kolla content that I use for deployment I get from the kolla-ansible directory that I cloned. Regards Tony Karera -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Mon Oct 11 13:39:10 2021 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 11 Oct 2021 13:39:10 +0000 Subject: [neutron] openflow rules tools Message-ID: Hello, When using native ovs in neutron, we end up with a lot of openflow rules on the ovs side. Debugging them with a regular ovs-ofctl --color dump-flows is kind of painful. Is there any tool that the community is using to manage that? Thanks in advance! Arnaud. From fungi at yuggoth.org Mon Oct 11 14:20:09 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 Oct 2021 14:20:09 +0000 Subject: [Xena] It works!
In-Reply-To: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> Message-ID: <20211011142008.heixx43ckfsaoyd4@yuggoth.org> On 2021-10-11 10:49:38 +0200 (+0200), tjoen wrote: > Just testing every release since Train on an LFS system with Python-3.9 > cryptography-35.0.0 is necessary Thanks for testing! Just be aware that Python 3.8 is the most recent interpreter targeted by Xena: https://governance.openstack.org/tc/reference/runtimes/xena.html Discussion is underway at the PTG next week to determine what the tested runtimes should be for Yoga, but testing with 3.9 is being suggested (or maybe even 3.10): https://etherpad.opendev.org/p/tc-yoga-ptg -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aschultz at redhat.com Mon Oct 11 14:25:51 2021 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 11 Oct 2021 08:25:51 -0600 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command In-Reply-To: References: Message-ID: On Sat, Oct 9, 2021 at 3:11 PM Anirudh Gupta wrote: > > Hi Team, > > I am installing Tripleo using the below link > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html > > In the Introspect section, When I executed the command > openstack tripleo validator run --group pre-introspection > > I got the following error: > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu | PASSED | localhost | localhost | | 0:00:01.261 | > | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space | PASSED | localhost | localhost | | 0:00:04.480 | > | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram | PASSED | localhost | localhost | | 0:00:02.173 | > | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode | PASSED | localhost | localhost | | 0:00:01.546 | > | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway | FAILED | undercloud | No host matched | | | > | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space | FAILED | undercloud | No host matched | | | > | 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check | FAILED | undercloud | No host matched | | | > | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range | FAILED | undercloud | No host matched | | | > | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection | FAILED | undercloud | No host matched | | | > | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush | FAILED | undercloud | No host matched | | | > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > > > Then I created the following inventory file: > [Undercloud] > undercloud > > Passed this command while running the pre-introspection command. > It then executed successfully. 
> > > But with Pre-deployment, it is still failing even after passing the inventory > > +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ > | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | > +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ > | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e | PASSED | localhost | localhost | | 0:00:00.504 | > | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns | PASSED | localhost | localhost | | 0:00:00.481 | > | 93611c13-49a2-4cae-ad87-099546459481 | service-status | PASSED | all | undercloud | | 0:00:06.942 | > | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux | PASSED | all | undercloud | | 0:00:02.433 | > | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version | FAILED | all | undercloud | | 0:00:03.576 | > | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed | PASSED | undercloud | undercloud | | 0:00:02.850 | > | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed | FAILED | allovercloud | No host matched | | | > | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment | FAILED | undercloud | undercloud | | 0:00:31.559 | > | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug | FAILED | undercloud | undercloud | | 0:00:02.057 | > | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud | | 0:00:00.884 | > | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted | FAILED | undercloud | undercloud | | 0:00:02.138 | > | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count | PASSED | undercloud | undercloud | | 0:00:06.164 | > | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count | FAILED | undercloud | undercloud | | 0:00:00.934 | > | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning | FAILED | undercloud | undercloud | | 0:00:02.456 | > | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration | FAILED | undercloud | undercloud | | 0:00:00.882 | > | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment | FAILED | undercloud | undercloud | | 0:00:00.880 | > | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks | FAILED | undercloud | undercloud | | 0:00:01.934 | > | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans | FAILED | undercloud | undercloud | | 0:00:01.931 | > | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding | PASSED | all | undercloud | | 0:00:00.366 | > +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ > > Also this step of passing the inventory file is not mentioned anywhere in the document. Is there anything I am missing? > It's likely that the documentation is out of date for the validation calls. I don't believe we test this in CI so it's probably broken. The validation calls are generally optional so you should be ok to proceed with introspection > Regards > Anirudh Gupta > From openstack at nemebean.com Mon Oct 11 15:25:29 2021 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Oct 2021 10:25:29 -0500 Subject: [KEYSTONE][POLICIES] - Overrides that don't work? In-Reply-To: References: Message-ID: I don't believe it's possible to override the scope of a policy rule. 
In this case it sounds like the user should request a domain-scoped token to perform this operation. For details on how to do that, see https://docs.openstack.org/keystone/wallaby/admin/tokens-overview.html#authorization-scopes On 10/6/21 7:52 AM, Gaël THEROND wrote: > Hi team, > > I'm having a weird behavior with my Openstack platform that makes me > think I may have misunderstood some mechanisms on the way policies are > working and especially the overriding. > > So, long story short, I've few services that get custom policies such as > glance that behave as expected, Keystone's one aren't. > > All in all, here is what I'm understanding of the mechanism: > > This is the keystone policy that I'm looking to override: > https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/ > > > This policy default can be found in here: > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > > Here is the policy that I'm testing: > https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/ > > > I know, this policy isn't taking care of the admin role but it's not the > point. > > From my understanding, any user with the project-manager role should be > able to add any available user on any available group as long as the > project-manager domain is the same as the target. > > However, when I'm doing that, keystone complains that I'm not authorized > to do so because the user token scope is 'PROJECT' where it should be > 'SYSTEM' or 'DOMAIN'. > > Now, I wouldn't be surprised of that message being thrown out with the > default policy as it's stated on the code with the following: > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > > So the question is, if the custom policy doesn't override the default > scope_types how am I supposed to make it work? > > I hope it was clear enough, but if not, feel free to ask me for more > information. > > PS: I've tried to assign this role with a domain scope to my user and > I've still the same issue. > > Thanks a lot everyone! > > From openstack at nemebean.com Mon Oct 11 15:27:18 2021 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Oct 2021 10:27:18 -0500 Subject: [docs] Keystone docs missing for Xena Message-ID: <8e9b9ddd-48c4-c144-cad6-63ec0682c5e8@nemebean.com> Hey, I was just looking for the Keystone docs and discovered that they are not listed on https://docs.openstack.org/xena/projects.html. If I s/wallaby/xena/ on the wallaby version then it resolves, so it looks like the docs are published, they just aren't included in the index for some reason. -Ben From gmann at ghanshyammann.com Mon Oct 11 15:34:42 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 11 Oct 2021 10:34:42 -0500 Subject: [all][tc] Technical Committee next weekly meeting on Oct 14th at 1500 UTC Message-ID: <17c6ffde3e7.116bfff55951568.8663051282986901805@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for Oct 14th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, Oct 13th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From laurentfdumont at gmail.com Mon Oct 11 15:52:52 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 11 Oct 2021 11:52:52 -0400 Subject: [neutron] openflow rules tools In-Reply-To: References: Message-ID: Also interested in this. Reading rules in dump-flows is an absolute pain.
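(Two stock OVS facilities that take some of the pain out of reading flows, independent of any dedicated tool — both are standard ovs-ofctl/ovs-appctl usage on recent OVS, though the bridge name and match fields below are illustrative:

  # Narrow the dump to a single table and drop the per-flow counters
  ovs-ofctl --no-stats dump-flows br-int table=60

  # Ask OVS which flows a synthetic packet would actually hit
  ovs-appctl ofproto/trace br-int 'in_port=1,tcp,nw_src=10.0.0.5,nw_dst=10.0.0.6'

ofproto/trace in particular answers "why does this packet (not) match" without reading every rule by hand.)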
In an ideal world, I would never have to. We have some stuff on our side that I'll see if I can share. On Mon, Oct 11, 2021 at 9:41 AM Arnaud Morin wrote: > Hello, > > When using native ovs in neutron, we end up with a lot of openflow rules > on the ovs side. > > Debugging it with regular ovs-ofctl --color dump-flows is kind of > painful. > > Is there any tool that the community is using to manage that? > > Thanks in advance! > > Arnaud. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Mon Oct 11 16:26:18 2021 From: elod.illes at est.tech (Előd Illés) Date: Mon, 11 Oct 2021 18:26:18 +0200 Subject: [ptl][release][stable][EM] Extended Maintenance - Ussuri Message-ID: Hi, As Xena was released last week and we are in a less busy period, now is a good time to call your attention to the following: In a month Ussuri is planned to transition to the Extended Maintenance phase [1] (planned date: 2021-11-12). I have generated the list of the current *open* and *unreleased* changes in stable/ussuri for the follows-policy tagged repositories [2] (where there are such patches). These lists could help the teams who are planning to do a *final* release on Ussuri before moving stable/ussuri branches to Extended Maintenance. Feel free to edit and extend these lists to track your progress! * At the transition date the Release Team will tag the *latest* Ussuri releases of repositories with the *ussuri-em* tag. * After the transition stable/ussuri will still be open for bug fixes, but there won't be official releases anymore. *NOTE*: teams, please focus on wrapping up your libraries first if there is any concern about the changes, in order to avoid broken (final!) releases! Thanks, Előd [1] https://releases.openstack.org/ [2] https://etherpad.opendev.org/p/ussuri-final-release-before-em From ashlee at openstack.org Mon Oct 11 16:41:51 2021 From: ashlee at openstack.org (Ashlee Ferguson) Date: Mon, 11 Oct 2021 11:41:51 -0500 Subject: [all][PTG] October 2021 - PTGbot, Etherpads, & IRC Message-ID: <14361607-09BD-43B6-BB8D-7CCC2053F576@openstack.org> Hello! We just wanted to take a second to point out a couple of things that have changed since the last PTG as we all get ready for the next one. Firstly, the PTGbot is up to date and ready to go *at its new URL[1]* -- as are the autogenerated etherpads! There you can find the schedule page, etherpads, etc. If you/your team have already created an etherpad, please feel free to use the PTGbot to override the default, auto-generated one[2]. Secondly, just a reminder that with the migration to being more inclusive of all Open Infrastructure Foundation projects we will be using the #openinfra-events IRC channel on the OFTC network! And again, if you haven't yet, please register[3]! It's free and important for getting the Zoom information, etc. Thanks! Ashlee (ashferg) and Kendall (diablo_rojo) [1] PTGbot: https://ptg.opendev.org/ [2] PTGbot Etherpad Override Command: https://opendev.org/openstack/ptgbot/src/branch/master/README.rst#etherpad [3] PTG Registration: https://openinfra-ptg.eventbrite.com From tjoen at dds.nl Mon Oct 11 16:49:51 2021 From: tjoen at dds.nl (tjoen) Date: Mon, 11 Oct 2021 18:49:51 +0200 Subject: [Xena] It works!
In-Reply-To: <20211011142008.heixx43ckfsaoyd4@yuggoth.org> References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> <20211011142008.heixx43ckfsaoyd4@yuggoth.org> Message-ID: <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> On 10/11/21 16:20, Jeremy Stanley wrote: > On 2021-10-11 10:49:38 +0200 (+0200), tjoen wrote: >> Just testing every release since Train on an LFS system with Python-3.9 >> cryptography-35.0.0 is necessary I forgot to mention that the new cryptography only applies to openssl-3.0.0 > Thanks for testing! Just be aware that Python 3.8 is the most recent > interpreter targeted by Xena: > > https://governance.openstack.org/tc/reference/runtimes/xena.html Thanks for that link. I'll consult it at the next release > Discussion is underway at the PTG next week to determine what the > tested runtimes should be for Yoga, but testing with 3.9 is being > suggested (or maybe even 3.10): I have put 2022-03-30 in my agenda > https://etherpad.opendev.org/p/tc-yoga-ptg > From ashlee at openstack.org Mon Oct 11 17:02:04 2021 From: ashlee at openstack.org (Ashlee Ferguson) Date: Mon, 11 Oct 2021 12:02:04 -0500 Subject: OpenInfra Live - October 14, 2021 at 9am CT Message-ID: <82053AAC-6A32-488D-A531-16BD80100091@openstack.org> Hi everyone, This week's OpenInfra Live episode is brought to you by the OpenStack community. Networking is complex, and Neutron is one of the most difficult parts of OpenStack to scale. In this episode of the Large Scale OpenStack show, we will explore early architectural choices you can make, recommended drivers, and features to avoid if your ultimate goal is to scale to a very large deployment. Join OpenStack developers and operators as they share their Neutron scaling best practices. Episode: Large Scale OpenStack: Neutron scaling best practices Date and time: October 14, 2021 at 9am CT (1400 UTC) You can watch us live on: YouTube: https://www.youtube.com/watch?v=4ZLqILbLIpQ LinkedIn: https://www.linkedin.com/video/event/urn:li:ugcPost:6851936962715222016/ Facebook: https://www.facebook.com/104139126308032/posts/4407685335953368/ WeChat: recording will be posted on OpenStack WeChat after the live stream Speakers: Thierry Carrez (OpenInfra Foundation) David Comay (Bloomberg) Ibrahim Derraz (Exaion) Slawek Kaplonski (Red Hat) Lajos Katona (Ericsson) Mohammed Naser (VEXXHOST) Michal Nasiadka (StackHPC) Have an idea for a future episode? Share it now at ideas.openinfra.live . Register now for OpenInfra Live: Keynotes, a special edition of OpenInfra Live on November 17-18th starting at 1500 UTC: https://openinfralivekeynotes.eventbrite.com/ Thanks! Ashlee From johnsomor at gmail.com Mon Oct 11 17:06:40 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Oct 2021 10:06:40 -0700 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Interesting, thank you for trying that out. We call the nova "interface_attach" and pass in the port_id you provided on the load balancer create command line. In the worker log, above the "tree" log lines, is there another ERROR log line that includes the exception returned from nova? Also, I would be interested to see what nova logged as to why it was unable to attach the port. That may be in the main nova logs, or possibly on the compute host nova logs. Michael On Thu, Oct 7, 2021 at 5:36 PM Zhang, Jing C.
(Nokia - CA/Ottawa) wrote: > > Hi Michael, > > I made a mistake when creating VM manually, I should use --nic option not --network option. After correcting that, I can create VM with the extra-flavor: > > $ openstack server create --flavor octavia-flavor --image Centos7 --nic port-id=test-port --security-group demo-secgroup --key-name demo-key test-vm > > $ nova list --all --fields name,status,host,networks | grep test-vm > | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149 > > A 2nd VF interface is seen inside the VM: > > [centos at test-vm ~]$ ip a > ... > 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 > link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff > > This MAC is not seen by neutron though: > > $ openstack port list | grep 0a:b2:d4:85:a2:e6 > > [empty] > > ===================== > However when I tried to create LB with the same VM flavor, it failed at the same place as before. > > Looking at worker.log, it seems the error is similar to use --network option to create the VM manually. But you are the expert. > > "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52" > > Here is the full list of command line: > > $ openstack flavor list | grep octavia-flavor > | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True | > > openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}' > openstack loadbalancer flavor create --name of1 --flavorprofile ofp1 --enable > openstack loadbalancer create --name lb1 --flavor of1 --vip-port-id test-port --vip-subnet-id ext-subnet1 > > > |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. 
> 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker loadbalancer, loadbalancer.vip, amphora, subnet) > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker interface = self._plug_amphora_vip(amphora, subnet) > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker raise base.PlugVIPException(message) > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker > > > -----Original Message----- > From: Zhang, Jing C. (Nokia - CA/Ottawa) > Sent: Thursday, October 7, 2021 6:18 PM > To: Michael Johnson > Cc: openstack-discuss at lists.openstack.org > Subject: RE: [Octavia] Can not create LB on SRIOV network > > Hi Michael, > > Thank you so much for the information. > > I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. > > However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: > https://docs.openstack.org/nova/train/admin/pci-passthrough.html > https://docs.openstack.org/nova/latest/admin/pci-passthrough.html > > ========================= > Here is the detail: > > Env: NIC is intel 82599, creating VM with SRIOV direct port works well. 
> > Nova.conf > > passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} > passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} > > Sriov_agent.ini > > [sriov_nic] > physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 > > (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: > > alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } > > (2) Used the extra-spec in nova flavor > > openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" > > (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed > > > (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed > > passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF > > The sriov agent log does not show port event for any of them. > > > > > -----Original Message----- > From: Michael Johnson > Sent: Wednesday, October 6, 2021 4:48 PM > To: Zhang, Jing C. (Nokia - CA/Ottawa) > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Octavia] Can not create LB on SRIOV network > > Hi Jing, > > To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. > > It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. > > You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. > This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. > > I have not tried this and would be interested to hear if it works for you. > > If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. > > Michael > > [1] https://wiki.openstack.org/wiki/Octavia/Roadmap > [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html > [3] https://docs.openstack.org/octavia/latest/admin/flavors.html > [4] https://etherpad.opendev.org/p/yoga-ptg-octavia > > On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. > > > > > > > > Thank you so much > > > > > > > > Jing > > > > > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > > Interface Config Guide (Openstack) > > > > > > > > Hi, > > In Openstack train release, creating Octavia LB on SRIOV network fails. > > I come here to search if there is already a plan to add this support, and see this story. 
> > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > > Thank you > > > > > > > > > > > > > > > > From johnsomor at gmail.com Mon Oct 11 17:14:52 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Oct 2021 10:14:52 -0700 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: References: Message-ID: Hi Albert, Have you configured your distributed lock manager for Designate? [coordination] backend_url = Michael On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > Before applying the change, we see the DNS record in the recordset: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > > > and we can pull it from the DNS server on the controllers: > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > After applying the change, we don?t see it: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > $ > > > > We see this in the logs: > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' 
for key 'unique_recordset'") > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From gael.therond at bitswalk.com Mon Oct 11 17:18:53 2021 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Mon, 11 Oct 2021 19:18:53 +0200 Subject: [KEYSTONE][POLICIES] - Overrides that don't work? In-Reply-To: References: Message-ID: Hi ben! Thanks a lot for the answer! Ok I?ll get a look at that, but if I correctly understand a user with a role of project-admin attached to him as a scoped to domain he should be able to add users to a group once the policy update right? Once again thanks a lot for your answer! Le lun. 11 oct. 2021 ? 17:25, Ben Nemec a ?crit : > I don't believe it's possible to override the scope of a policy rule. In > this case it sounds like the user should request a domain-scoped token > to perform this operation. For details on who to do that, see > > https://docs.openstack.org/keystone/wallaby/admin/tokens-overview.html#authorization-scopes > > On 10/6/21 7:52 AM, Ga?l THEROND wrote: > > Hi team, > > > > I'm having a weird behavior with my Openstack platform that makes me > > think I may have misunderstood some mechanisms on the way policies are > > working and especially the overriding. > > > > So, long story short, I've few services that get custom policies such as > > glance that behave as expected, Keystone's one aren't. > > > > All in all, here is what I'm understanding of the mechanism: > > > > This is the keystone policy that I'm looking to override: > > https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/ > > > > > > This policy default can be found in here: > > > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > > > > > Here is the policy that I'm testing: > > https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/ > > > > > > I know, this policy isn't taking care of the admin role but it's not the > > point. > > > > From my understanding, any user with the project-manager role should be > > able to add any available user on any available group as long as the > > project-manager domain is the same as the target. > > > > However, when I'm doing that, keystone complains that I'm not authorized > > to do so because the user token scope is 'PROJECT' where it should be > > 'SYSTEM' or 'DOMAIN'. 
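(As a concrete illustration of the domain-scoped token Ben suggests — with python-openstackclient, dropping the project scope and setting a domain scope is enough; the domain name below is illustrative:

  unset OS_PROJECT_NAME OS_PROJECT_ID
  export OS_DOMAIN_NAME=mydomain
  openstack token issue

The resulting token is domain-scoped, which is what the default scope_types on the keystone group policies expect.)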
> > > > Now, I wouldn't be surprised of that message being thrown out with the > > default policy as it's stated on the code with the following: > > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > > > So the question is, if the custom policy doesn't override the default > > scope_types how am I supposed to make it work? > > > > I hope it was clear enough, but if not, feel free to ask me for more > > information. > > > > PS: I've tried to assign this role with a domain scope to my user and > > I've still the same issue. > > > > Thanks a lot everyone! > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Oct 11 17:21:53 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 11 Oct 2021 18:21:53 +0100 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: On Mon, Oct 11, 2021 at 6:12 PM Michael Johnson wrote: > > Interesting, thank you for trying that out. > > > > We call the nova "interface_attach" and pass in the port_id you > > provided on the load balancer create command line. > > > > In the worker log, above the "tree" log lines, is there another ERROR > > log line that includes the exception returned from nova? Until very recently nova did not support interface attach for SR-IOV interfaces: https://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/sriov-interface-attach-detach.html Today we do allow it, but we do not guarantee it will work. If there are not enough PCI slots in the VM, or there are not enough VFs on the host that are attached to the correct physnet, the attach will fail. The most common reason the attach fails is either that NUMA affinity cannot be achieved or there is an issue in the guest/qemu: the guest kernel needs to respond to the hotplug event when qemu tries to add the device; if it does not, it will fail. Keeping all of that in mind, for SR-IOV attach to work octavia will have to create the port with vnic_type=direct or one of the other valid options like macvtap or direct-physical. You cannot attach an SR-IOV device that can be used with octavia using flavor extra specs. > > Also, I would be interested to see what nova logged as to why it was > > unable to attach the port. That may be in the main nova logs, or > > possibly on the compute host nova logs. > > > > Michael > > > > On Thu, Oct 7, 2021 at 5:36 PM Zhang, Jing C. (Nokia - CA/Ottawa) > > wrote: > > > > > > Hi Michael, > > > > > > I made a mistake when creating the VM manually, I should use the --nic option, not the --network option. After correcting that, I can create a VM with the extra-flavor: > > > > > > $ openstack server create --flavor octavia-flavor --image Centos7 --nic port-id=test-port --security-group demo-secgroup --key-name demo-key test-vm > > > > > > $ nova list --all --fields name,status,host,networks | grep test-vm > > | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149 > > > > A 2nd VF interface is seen inside the VM: > > > > [centos at test-vm ~]$ ip a > > ...
> > 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 > > link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff > > > > This MAC is not seen by neutron though: > > > > $ openstack port list | grep 0a:b2:d4:85:a2:e6 > > > > [empty] > > > > ===================== > > However when I tried to create LB with the same VM flavor, it failed at the same place as before. > > > > Looking at worker.log, it seems the error is similar to use --network option to create the VM manually. But you are the expert. > > > > "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52" > > > > Here is the full list of command line: > > > > $ openstack flavor list | grep octavia-flavor > > | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True | > > > > openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}' > > openstack loadbalancer flavor create --name of1 --flavorprofile ofp1 --enable > > openstack loadbalancer create --name lb1 --flavor of1 --vip-port-id test-port --vip-subnet-id ext-subnet1 > > > > > > |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker loadbalancer, loadbalancer.vip, amphora, subnet) > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker interface = self._plug_amphora_vip(amphora, subnet) > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker raise base.PlugVIPException(message) > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker > > > > > > -----Original Message----- > > From: Zhang, Jing C. (Nokia - CA/Ottawa) > > Sent: Thursday, October 7, 2021 6:18 PM > > To: Michael Johnson > > Cc: openstack-discuss at lists.openstack.org > > Subject: RE: [Octavia] Can not create LB on SRIOV network > > > > Hi Michael, > > > > Thank you so much for the information. 
> > > > I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. > > > > However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: > > https://docs.openstack.org/nova/train/admin/pci-passthrough.html > > https://docs.openstack.org/nova/latest/admin/pci-passthrough.html > > > > ========================= > > Here is the detail: > > > > Env: NIC is intel 82599, creating VM with SRIOV direct port works well. > > > > Nova.conf > > > > passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} > > passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} > > > > Sriov_agent.ini > > > > [sriov_nic] > > physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 > > > > (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: > > > > alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } > > > > (2) Used the extra-spec in nova flavor > > > > openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" > > > > (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed > > > > > > (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed > > > > passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF > > > > The sriov agent log does not show port event for any of them. > > > > > > > > > > -----Original Message----- > > From: Michael Johnson > > Sent: Wednesday, October 6, 2021 4:48 PM > > To: Zhang, Jing C. (Nokia - CA/Ottawa) > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: [Octavia] Can not create LB on SRIOV network > > > > Hi Jing, > > > > To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. > > > > It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. > > > > You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. > > This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. > > > > I have not tried this and would be interested to hear if it works for you. > > > > If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. > > > > Michael > > > > [1] https://wiki.openstack.org/wiki/Octavia/Roadmap > > [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html > > [3] https://docs.openstack.org/octavia/latest/admin/flavors.html > > [4] https://etherpad.opendev.org/p/yoga-ptg-octavia > > > > On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. 
(Nokia - CA/Ottawa) wrote: > > > > > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > > > > > > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. > > > > > > > > > > > > Thank you so much > > > > > > > > > > > > Jing > > > > > > > > > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > > > Interface Config Guide (Openstack) > > > > > > > > > > > > Hi, > > > In Openstack train release, creating Octavia LB on SRIOV network fails. > > > I come here to search if there is already a plan to add this support, and see this story. > > > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > > > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > > > Thank you > > > > > > > > > > > > > > > > > > > > > > > > > From james.slagle at gmail.com Mon Oct 11 17:35:24 2021 From: james.slagle at gmail.com (James Slagle) Date: Mon, 11 Oct 2021 13:35:24 -0400 Subject: [TripleO] PTG proposed schedule Message-ID: I have put up a tentative schedule in the etherpad for each of our proposed sessions: https://etherpad.opendev.org/p/tripleo-yoga-topics If there are any scheduling conflicts with other sessions, please let me know and we will do our best to adjust. We also have time to add a 4th session on Monday, Tuesday, Wednesday, so if you have some last minute topics, feel free to add them. Thanks, and looking forward to seeing (virtually) everyone next week! -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Mon Oct 11 18:00:13 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Oct 2021 11:00:13 -0700 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Ah, so that is probably the issue. Nova doesn't support the interface attach for SRIOV in Train. We do currently require that the port be hot plugged after boot. I would still be interested in seeing the log messages, just to confirm that is the issue or if we have other work to do. The vnic_type=direct should not be an issue as the port is being passed into Octavia pre-created. I think it was already mentioned that the port was successful when used during boot via the --nic option. Thanks for the pointer Sean. Michael On Mon, Oct 11, 2021 at 10:22 AM Sean Mooney wrote: > > On Mon, Oct 11, 2021 at 6:12 PM Michael Johnson wrote: > > > > Interesting, thank you for trying that out. > > > > We call the nova "interface_attach" and pass in the port_id you > > provided on the load balancer create command line. > > > > In the worker log, above the "tree" log lines, is there another ERROR > > log line that includes the exception returned from nova? > until very recently nova did not support interface attach for sriov interfaces. > https://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/sriov-interface-attach-detach.html > today we do allow it but we do not guarentee it will work. > > if there are not enoch pci slots in the vm or there are not enough VF > on the host > that are attached to the correct phsynet the attach will fail. 
> the most comon reason the attach fails is either numa affintiy cannot > be acived or there is an issue in the guest/qemu > the guest kernel need to repond to the hotplug event when qemu tries > to add the device if it does not it will fail. > > keeping all of tha tin mind for sriov attach to work octavia will have > to create the port with vnic_type=driect or one of the other valid > options like macvtap or direct phsyical. > you cannot attach sriov device that can be used with octavia using > flavor extra specs. > > > > > Also, I would be interested to see what nova logged as to why it was > > unable to attach the port. That may be in the main nova logs, or > > possibly on the compute host nova logs. > > > > Michael > > > > On Thu, Oct 7, 2021 at 5:36 PM Zhang, Jing C. (Nokia - CA/Ottawa) > > wrote: > > > > > > Hi Michael, > > > > > > I made a mistake when creating VM manually, I should use --nic option not --network option. After correcting that, I can create VM with the extra-flavor: > > > > > > $ openstack server create --flavor octavia-flavor --image Centos7 --nic port-id=test-port --security-group demo-secgroup --key-name demo-key test-vm > > > > > > $ nova list --all --fields name,status,host,networks | grep test-vm > > > | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149 > > > > > > A 2nd VF interface is seen inside the VM: > > > > > > [centos at test-vm ~]$ ip a > > > ... > > > 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 > > > link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff > > > > > > This MAC is not seen by neutron though: > > > > > > $ openstack port list | grep 0a:b2:d4:85:a2:e6 > > > > > > [empty] > > > > > > ===================== > > > However when I tried to create LB with the same VM flavor, it failed at the same place as before. > > > > > > Looking at worker.log, it seems the error is similar to use --network option to create the VM manually. But you are the expert. > > > > > > "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52" > > > > > > Here is the full list of command line: > > > > > > $ openstack flavor list | grep octavia-flavor > > > | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True | > > > > > > openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}' > > > openstack loadbalancer flavor create --name of1 --flavorprofile ofp1 --enable > > > openstack loadbalancer create --name lb1 --flavor of1 --vip-port-id test-port --vip-subnet-id ext-subnet1 > > > > > > > > > |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. 
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker loadbalancer, loadbalancer.vip, amphora, subnet) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker interface = self._plug_amphora_vip(amphora, subnet) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker raise base.PlugVIPException(message) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker > > > > > > > > > -----Original Message----- > > > From: Zhang, Jing C. (Nokia - CA/Ottawa) > > > Sent: Thursday, October 7, 2021 6:18 PM > > > To: Michael Johnson > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: RE: [Octavia] Can not create LB on SRIOV network > > > > > > Hi Michael, > > > > > > Thank you so much for the information. > > > > > > I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. > > > > > > However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: > > > https://docs.openstack.org/nova/train/admin/pci-passthrough.html > > > https://docs.openstack.org/nova/latest/admin/pci-passthrough.html > > > > > > ========================= > > > Here is the detail: > > > > > > Env: NIC is intel 82599, creating VM with SRIOV direct port works well. 
> > > > > > Nova.conf > > > > > > passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} > > > passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} > > > > > > Sriov_agent.ini > > > > > > [sriov_nic] > > > physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 > > > > > > (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: > > > > > > alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } > > > > > > (2) Used the extra-spec in nova flavor > > > > > > openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" > > > > > > (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed > > > > > > > > > (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed > > > > > > passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF > > > > > > The sriov agent log does not show port event for any of them. > > > > > > > > > > > > > > > -----Original Message----- > > > From: Michael Johnson > > > Sent: Wednesday, October 6, 2021 4:48 PM > > > To: Zhang, Jing C. (Nokia - CA/Ottawa) > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: Re: [Octavia] Can not create LB on SRIOV network > > > > > > Hi Jing, > > > > > > To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. > > > > > > It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. > > > > > > You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. > > > This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. > > > > > > I have not tried this and would be interested to hear if it works for you. > > > > > > If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. > > > > > > Michael > > > > > > [1] https://wiki.openstack.org/wiki/Octavia/Roadmap > > > [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html > > > [3] https://docs.openstack.org/octavia/latest/admin/flavors.html > > > [4] https://etherpad.opendev.org/p/yoga-ptg-octavia > > > > > > On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > > > > > > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > > > > > > > > > > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. 
> > > > > > > > > > > > > > > > Thank you so much > > > > > > > > > > > > > > > > Jing > > > > > > > > > > > > > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > > > > Interface Config Guide (Openstack) > > > > > > > > > > > > > > > > Hi, > > > > In Openstack train release, creating Octavia LB on SRIOV network fails. > > > > I come here to search if there is already a plan to add this support, and see this story. > > > > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > > > > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > > > > Thank you > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From ihrachys at redhat.com Mon Oct 11 18:05:10 2021 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 11 Oct 2021 14:05:10 -0400 Subject: [neutron] openflow rules tools In-Reply-To: References: Message-ID: On 10/11/21 9:39 AM, Arnaud Morin wrote: > Hello, > > When using native ovs in neutron, we endup with a lot of openflow rules > on ovs side. > > Debugging it with regular ovs-ofctl --color dump-flows is kind of > painful. > > Is there any tool that the community is using to manage that? You can check SB Logical_Flow table with ovn-sbctl lflow-list. You can also use ovn-trace(8) to inspect OVN pipeline behavior. Ihar From arnaud.morin at gmail.com Mon Oct 11 18:05:40 2021 From: arnaud.morin at gmail.com (Arnaud) Date: Mon, 11 Oct 2021 20:05:40 +0200 Subject: [neutron] openflow rules tools In-Reply-To: References: Message-ID: That would be awesome! We also built a tool which is looking for openflow rules related to a tap interface, but since we upgraded and enabled security rules in ovs, the tool isn't working anymore. So before rewriting everything from scratch, I was wondering if the community was also dealing with the same issue. So I am glad to here from you! Let me know :) Cheers Le 11 octobre 2021 17:52:52 GMT+02:00, Laurent Dumont a ?crit?: >Also interested in this. Reading rules in dump-flows is an absolute pain. >In an ideal world, I would have never have to. > >We some stuff on our side that I'll see if I can share. > >On Mon, Oct 11, 2021 at 9:41 AM Arnaud Morin wrote: > >> Hello, >> >> When using native ovs in neutron, we endup with a lot of openflow rules >> on ovs side. >> >> Debugging it with regular ovs-ofctl --color dump-flows is kind of >> painful. >> >> Is there any tool that the community is using to manage that? >> >> Thanks in advance! >> >> Arnaud. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajiv.mucheli at gmail.com Mon Oct 11 16:55:16 2021 From: rajiv.mucheli at gmail.com (rajiv mucheli) Date: Mon, 11 Oct 2021 22:25:16 +0530 Subject: [Barbican] HSM integration with FIPS Operation Enabled Message-ID: Hi, I looked into the available documentation and article but i had no luck validating if Openstack Barbican integration with FIPS Operation mode Enabled works. Any suggestions? The below barbican backend guide shares the available plugins with HSM : https://docs.openstack.org/security-guide/secrets-management/barbican.html Does Barbican now support module generated IV ? which is required for FIPS support in Thales A790 HSM. Regards, Rajiv -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abraden at verisign.com Mon Oct 11 18:48:07 2021 From: abraden at verisign.com (Braden, Albert) Date: Mon, 11 Oct 2021 18:48:07 +0000 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: References: Message-ID: <7b85e6646792469aaa7e513ecfda8551@verisign.com> I think so. I see this: ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }} ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes" Did anything with the distributed lock manager between Queens and Train? -----Original Message----- From: Michael Johnson Sent: Monday, October 11, 2021 1:15 PM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. Hi Albert, Have you configured your distributed lock manager for Designate? [coordination] backend_url = Michael On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > Before applying the change, we see the DNS record in the recordset: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > > > and we can pull it from the DNS server on the controllers: > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > After applying the change, we don?t see it: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > $ openstack recordset list dva3.vrsn.com. 
--all |grep openstack-terra > > $ > > > > We see this in the logs: > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From abraden at verisign.com Mon Oct 11 18:57:06 2021 From: abraden at verisign.com (Braden, Albert) Date: Mon, 11 Oct 2021 18:57:06 +0000 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: <7b85e6646792469aaa7e513ecfda8551@verisign.com> References: <7b85e6646792469aaa7e513ecfda8551@verisign.com> Message-ID: After investigating further, I realized that we're not running redis, and I think that means that redis_connection_string doesn't get set. Does this mean that we must run redis, or is there a workaround? -----Original Message----- From: Braden, Albert Sent: Monday, October 11, 2021 2:48 PM To: 'johnsomor at gmail.com' Cc: 'openstack-discuss at lists.openstack.org' Subject: RE: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail I think so. I see this: ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }} ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes" Did anything with the distributed lock manager between Queens and Train? -----Original Message----- From: Michael Johnson Sent: Monday, October 11, 2021 1:15 PM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. Hi Albert, Have you configured your distributed lock manager for Designate? 
[coordination] backend_url = Michael On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > Before applying the change, we see the DNS record in the recordset: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > > > and we can pull it from the DNS server on the controllers: > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > After applying the change, we don?t see it: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > $ > > > > We see this in the logs: > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From gmann at ghanshyammann.com Mon Oct 11 19:02:32 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 11 Oct 2021 14:02:32 -0500 Subject: [Xena] It works! 
In-Reply-To: <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> <20211011142008.heixx43ckfsaoyd4@yuggoth.org> <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> Message-ID: <17c70bc2a63.ede9811c850136.3432835365508929383@ghanshyammann.com> ---- On Mon, 11 Oct 2021 11:49:51 -0500 tjoen wrote ---- > On 10/11/21 16:20, Jeremy Stanley wrote: > > On 2021-10-11 10:49:38 +0200 (+0200), tjoen wrote: > >> Just testing every release since Train on an LFS system with Python-3.9 > >> cryptography-35.0.0 is necessary > > Forgotten to mention that that new cryptography only applies to > opensl-3.0.0 Just to note, we did Xena testing with py3.9 but as non-voting jobs with cryptography===3.4.8. Now cryptography 35.0.0 is used for current master branch testing (non voting py3.9). -gmann > > > Thanks for testing! Just be aware that Python 3.8 is the most recent > > interpreter targeted by Xena: > > > > https://governance.openstack.org/tc/reference/runtimes/xena.html > > Thx for that link. I'll consult it at next release > > > Discussion is underway at the PTG next week to determine what the > > tested runtimes should be for Yoga, but testing with 3.9 is being > > suggested (or maybe even 3.10): > > I have put 2022-03-30 in my agenda > > > https://etherpad.opendev.org/p/tc-yoga-ptg > > > > > From johnsomor at gmail.com Mon Oct 11 20:24:24 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Oct 2021 13:24:24 -0700 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: References: <7b85e6646792469aaa7e513ecfda8551@verisign.com> Message-ID: You will need one of the Tooz supported distributed lock managers: Consul, Memcacded, Redis, or zookeeper. Michael On Mon, Oct 11, 2021 at 11:57 AM Braden, Albert wrote: > > After investigating further, I realized that we're not running redis, and I think that means that redis_connection_string doesn't get set. Does this mean that we must run redis, or is there a workaround? > > -----Original Message----- > From: Braden, Albert > Sent: Monday, October 11, 2021 2:48 PM > To: 'johnsomor at gmail.com' > Cc: 'openstack-discuss at lists.openstack.org' > Subject: RE: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > I think so. I see this: > > ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }} > > ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes" > > Did anything with the distributed lock manager between Queens and Train? > > -----Original Message----- > From: Michael Johnson > Sent: Monday, October 11, 2021 1:15 PM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. 
> > Hi Albert, > > Have you configured your distributed lock manager for Designate? > > [coordination] > backend_url = > > Michael > > On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > > > > > Before applying the change, we see the DNS record in the recordset: > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > > > $ > > > > > > > > and we can pull it from the DNS server on the controllers: > > > > > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > > > > > After applying the change, we don?t see it: > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > > > $ > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > $ > > > > > > > > We see this in the logs: > > > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. 
Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From skaplons at redhat.com Mon Oct 11 20:40:25 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 11 Oct 2021 22:40:25 +0200 Subject: [neutron] openflow rules tools In-Reply-To: References: Message-ID: <6212551.lOV4Wx5bFT@p1> Hi, For OVN we have a small tool in the neutron repo, ml2ovn-trace: https://docs.openstack.org/neutron/latest/ovn/ml2ovn_trace.html but that will not be helpful for ML2/OVS at all. On Monday, 11 October 2021 20:05:40 CEST Arnaud Morin wrote: > Hello, > > When using native ovs in neutron, we end up with a lot of openflow rules > on ovs side. > > Debugging it with regular ovs-ofctl --color dump-flows is kind of > painful. > > Is there any tool that the community is using to manage that? Yes, for ML2/OVS with the ovs firewall driver it is really painful to debug all those OF rules. > > We also built a tool which is looking for openflow rules related to a tap > interface, but since we upgraded and enabled security rules in ovs, the tool > isn't working anymore. If you have anything like that, please share it with the community :) > > So I am glad to hear from you! > Let me know :) > Cheers > > On 11 October 2021 17:52:52 GMT+02:00, Laurent Dumont wrote: > >Also interested in this. Reading rules in dump-flows is an absolute pain. > >In an ideal world, I would never have to. > > > >We have some stuff on our side that I'll see if I can share. > > > >On Mon, Oct 11, 2021 at 9:41 AM Arnaud Morin wrote: > >> Hello, > >> > >> When using native ovs in neutron, we end up with a lot of openflow rules > >> on ovs side. > >> > >> Debugging it with regular ovs-ofctl --color dump-flows is kind of > >> painful. > >> > >> Is there any tool that the community is using to manage that? > >> > >> Thanks in advance! > >> > >> Arnaud. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ildiko.vancsa at gmail.com Mon Oct 11 22:45:20 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 11 Oct 2021 15:45:20 -0700 Subject: Edge sessions at the upcoming PTG In-Reply-To: References: Message-ID: <045A602F-C8A8-467A-83E8-A1546C3197D0@gmail.com> Hi, As it's less than one week now until the PTG, I wanted to put the edge sessions the OpenInfra Edge Computing Group is planning for the PTG on your radar: https://superuser.openstack.org/articles/the-project-teams-gathering-is-coming-lets-talk-edge/ We have topics such as networking, APIs, and automation in edge infrastructures that are relevant for OpenStack as well and it would be great to have the community's input on these. 
> > Our etherpad for the sessions: https://etherpad.opendev.org/p/ecg-ptg-october-2021 > > Please let me know if you have any questions about the agenda or topics. > > Thanks and Best Regards, > Ildik? > > >> On Sep 7, 2021, at 16:22, Ildiko Vancsa wrote: >> >> Hi, >> >> I?m reaching out to you to share the agenda of the OpenInfra Edge Computing Group that we put together for the upcoming PTG. I would like to invite everyone who is interested in discussing edge challenges and finding solutions! >> >> We summarized our plans for the event in a short blog post to give some context to each of the topic that we picked to discuss in details. We picked key areas like security, networking, automation and tools, containers and more: https://superuser.openstack.org/articles/the-project-teams-gathering-is-coming-lets-talk-edge/ >> >> Our etherpad for the sessions: https://etherpad.opendev.org/p/ecg-ptg-october-2021 >> >> Please let me know if you have any questions about the agenda or topics. >> >> Thanks and Best Regards, >> Ildik? >> >> > From tjoen at dds.nl Tue Oct 12 06:44:55 2021 From: tjoen at dds.nl (tjoen) Date: Tue, 12 Oct 2021 08:44:55 +0200 Subject: [Xena] It works! In-Reply-To: <17c70bc2a63.ede9811c850136.3432835365508929383@ghanshyammann.com> References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> <20211011142008.heixx43ckfsaoyd4@yuggoth.org> <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> <17c70bc2a63.ede9811c850136.3432835365508929383@ghanshyammann.com> Message-ID: On 10/11/21 21:02, Ghanshyam Mann wrote: > ---- On Mon, 11 Oct 2021 11:49:51 -0500 tjoen wrote ---- > > > On 2021-10-11 10:49:38 +0200 (+0200), tjoen wrote: > > >> Just testing every release since Train on an LFS system with Python-3.9 > > >> cryptography-35.0.0 is necessary > > > > Forgotten to mention that that new cryptography only applies to > > opensl-3.0.0 > > Just to note, we did Xena testing with py3.9 but as non-voting jobs with > cryptography===3.4.8. That was the version causing segfaults with openssl-3 Worked in Wallaby with openssl-1.1.1l > Now cryptography 35.0.0 is used for current master branch testing (non > voting py3.9). With openssl-3 I hope From mark at stackhpc.com Tue Oct 12 07:37:06 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 12 Oct 2021 08:37:06 +0100 Subject: Restarting Openstack Victoria using kolla-ansible In-Reply-To: References: Message-ID: On Mon, 11 Oct 2021 at 14:39, Karera Tony wrote: > > Hello Team, > I am trying to deploy openstack Victoria ..... > Below I install kolla-ansible on the deployment server , I first clone the * git clone --branch stable/victoria https://opendev.org/openstack/kolla-ansible* but when I run the deployment without uncommenting openstack_release .. By default it deploys wallaby > And when I uncomment and type victoria ...Some of the containers keep restarting esp Horizon > Any idea on how to resolve this ? > Even the kolla content that I use for deployment, I get it from the kolla-ansible directory that I cloned > Regards Hi Tony, kolla-ansible deploys the tag in the openstack_release variable - the default is victoria in the stable/victoria branch. Perhaps you have overridden this via globals.yml, or are accidentally using the stable/wallaby branch? 
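A quick way to check both, assuming the default /etc/kolla layout and a git checkout of kolla-ansible (the paths below are examples):

$ grep openstack_release /etc/kolla/globals.yml
$ git -C ~/kolla-ansible branch --show-current

If openstack_release is still commented out, the deployed tag should follow the default of the branch you checked out.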
Mark > > Tony Karera > > From tonykarera at gmail.com Tue Oct 12 07:45:45 2021 From: tonykarera at gmail.com (Karera Tony) Date: Tue, 12 Oct 2021 09:45:45 +0200 Subject: Restarting Openstack Victoria using kolla-ansible In-Reply-To: References: Message-ID: Hello Goddard, Actually when you just install kolla-ansible. It defaults to wallaby so what I did is to clone the victoria packages with git clone --branch stable/victoria https://opendev.org/openstack/kolla-ansible and then install kolla-ansible and used the packages for victoria Regards Tony Karera On Tue, Oct 12, 2021 at 9:37 AM Mark Goddard wrote: > On Mon, 11 Oct 2021 at 14:39, Karera Tony wrote: > > > > Hello Team, > > I am trying to deploy openstack Victoria ..... > > Below I install kolla-ansible on the deployment server , I first clone > the * git clone --branch stable/victoria > https://opendev.org/openstack/kolla-ansible* but when I run the > deployment without uncommenting openstack_release .. By default it deploys > wallaby > > And when I uncomment and type victoria ...Some of the containers keep > restarting esp Horizon > > Any idea on how to resolve this ? > > Even the kolla content that I use for deployment, I get it from the > kolla-ansible directory that I cloned > > Regards > > Hi Tony, > kolla-ansible deploys the tag in the openstack_release variable - the > default is victoria in the stable/victoria branch. Perhaps you have > overridden this via globals.yml, or are accidentally using the > stable/wallaby branch? > Mark > > > > > Tony Karera > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Tue Oct 12 08:27:56 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 12 Oct 2021 13:27:56 +0500 Subject: [wallaby][neutron] Distributed floating IP Message-ID: Hi, I am using openstack wallaby. I have seen an issue not sure if its a bug or configuration related issue. I am using ml2/ovn backend with distributed floating IP enabled. I have made my compute node 1 as a gateway chassis where the routers are scheduled. I have then created an instance and NATed a public IP. The instance deployed on compute 2. When I see the IP address via curl ipinfo.io it shows the floating IP that I have NATed. Then I migrated the instance to compute node 1. I had many ping drops for a couple of seconds then its back to normal. I have then seen the IP address via curl ipinfo.io. It showed me the SNAT IP address of router. Then I migrated the instance back to compute node 2, I had ping drops for 20 seconds and then the instance came back. I have seen the IP via curl, it showed the floating IP that I have nated with instance. Is it the expected behavior ? Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From jazeltq at gmail.com Tue Oct 12 08:31:42 2021 From: jazeltq at gmail.com (Jaze Lee) Date: Tue, 12 Oct 2021 16:31:42 +0800 Subject: [nova & libvirt] about attach disk on aarch64 Message-ID: Hello, We run stein openstack in our environment. And already set libvirt value in nova.conf hw_machine_type=aarch64=virt num_pcie_ports = 15 We test and find sometimes disks can not be attached correctly. For example, Built vm with 6 disks, only three disk be there. The others will be inactive in virsh. No obvious error can be found in nova-compute, libvirt, vm os logs. libvirt:5.0.0 qemu:2.12.0-44 openstack-nova: 19.3.2 librbd1:ibrbd1-14.2.16-1.el7.aarch64 Any Suggestions? 
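For reference, this is how the mismatch can be seen, comparing the live and the persistent configuration of the domain (the domain name is just an example):

$ virsh domblklist instance-00000001 --details
$ virsh domblklist instance-00000001 --details --inactive

A disk that only shows up with --inactive was written to the persistent XML but never hot-plugged into the running guest.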
Thanks a lot From mdemaced at redhat.com Tue Oct 12 10:04:34 2021 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Tue, 12 Oct 2021 12:04:34 +0200 Subject: [kuryr] Virtual PTG October 2021 In-Reply-To: References: Message-ID: Hello, Small update on the Kuryr session: the session that would happen on Oct 22 at 13-14 UTC got moved to Oct 20 13-14 UTC in the *Bexar* room. See you there. Cheers, Maysa Macedo. On Tue, Oct 5, 2021 at 12:05 PM Maysa De Macedo Souza wrote: > Hello, > > With the PTG approaching I would like to remind you that the Kuryr > sessions will be held on Oct 19 7-8 UTC and Oct 22 13-14 UTC > and in case you're interested in discussing any topic with the Kuryr team > to include it to the etherpad[1]. > > [1] https://etherpad.opendev.org/p/kuryr-yoga-ptg > > See you on the PTG. > > Thanks, > Maysa Macedo. > > On Thu, Jul 22, 2021 at 11:36 AM Maysa De Macedo Souza < > mdemaced at redhat.com> wrote: > >> Hello, >> >> I booked the following slots for Kuryr during the Yoga PTG: Oct 19 7-8 >> UTC and Oct 22 13-14 UTC. >> If you have any topic ideas you would like to discuss, please include >> them in the etherpad[1], >> also it would be interesting to include your name there if you plan to >> attend any Kuryr session. >> >> See you on the next PTG. >> >> Cheers, >> Maysa. >> >> [1] https://etherpad.opendev.org/p/kuryr-yoga-ptg >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Oct 12 10:18:05 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 12 Oct 2021 11:18:05 +0100 Subject: [docs] Keystone docs missing for Xena In-Reply-To: <8e9b9ddd-48c4-c144-cad6-63ec0682c5e8@nemebean.com> References: <8e9b9ddd-48c4-c144-cad6-63ec0682c5e8@nemebean.com> Message-ID: On Mon, 2021-10-11 at 10:27 -0500, Ben Nemec wrote: > Hey, > > I was just looking for the Keystone docs and discovered that they are > not listed on https://docs.openstack.org/xena/projects.html. If I > s/wallaby/xena/ on the wallaby version then it resolves, so it looks > like the docs are published they just aren't included in the index for > some reason. That's because Keystone hadn't merge a patch to the stable/xena branch when we created [1]. We need to uncomment the project (and any other projects that now have docs) in the 'www/project-data/xena.yaml' file in openstack-manuals. Stephen [1] https://review.opendev.org/c/openstack/openstack-manuals/+/812120 > > -Ben > From dalvarez at redhat.com Tue Oct 12 10:26:53 2021 From: dalvarez at redhat.com (Daniel Alvarez) Date: Tue, 12 Oct 2021 12:26:53 +0200 Subject: [wallaby][neutron] Distributed floating IP In-Reply-To: References: Message-ID: <5413F2AA-1FD9-42F3-A2C5-987BE418E610@redhat.com> Hi Ammad > On 12 Oct 2021, at 10:33, Ammad Syed wrote: > > ? > Hi, > > I am using openstack wallaby. I have seen an issue not sure if its a bug or configuration related issue. I am using ml2/ovn backend with distributed floating IP enabled. I have made my compute node 1 as a gateway chassis where the routers are scheduled. > > I have then created an instance and NATed a public IP. The instance deployed on compute 2. When I see the IP address via curl ipinfo.io it shows the floating IP that I have NATed. > > Then I migrated the instance to compute node 1. I had many ping drops for a couple of seconds then its back to normal. I have then seen the IP address via curl ipinfo.io. It showed me the SNAT IP address of router. 
Could it be that compute node 1 is not properly configured with a connection on the public network? Provider bridge, correct bridge mappings and so on, and then the traffic falls back to centralized? > > Then I migrated the instance back to compute node 2, I had ping drops for 20 seconds and then the instance came back. I have seen the IP via curl, it showed the floating IP that I have nated with instance. > > Is it the expected behavior ? > > Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Tue Oct 12 11:38:08 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 12 Oct 2021 16:38:08 +0500 Subject: [wallaby][neutron] Distributed floating IP In-Reply-To: <5413F2AA-1FD9-42F3-A2C5-987BE418E610@redhat.com> References: <5413F2AA-1FD9-42F3-A2C5-987BE418E610@redhat.com> Message-ID: All three nodes are exactly identical. I am able to SSH to the VM via the floating IP attached to it, but the reverse traffic is getting out with the SNAT IP of the router when I put the VM on the gateway chassis. Ammad On Tue, Oct 12, 2021 at 3:26 PM Daniel Alvarez wrote: > > Hi Ammad > > > On 12 Oct 2021, at 10:33, Ammad Syed wrote: > > > Hi, > > I am using openstack wallaby. I have seen an issue not sure if its a bug > or configuration related issue. I am using ml2/ovn backend with distributed > floating IP enabled. I have made my compute node 1 as a gateway chassis > where the routers are scheduled. > > I have then created an instance and NATed a public IP. The instance > deployed on compute 2. When I see the IP address via curl ipinfo.io it > shows the floating IP that I have NATed. > > Then I migrated the instance to compute node 1. I had many ping drops for > a couple of seconds then its back to normal. I have then seen the IP > address via curl ipinfo.io. It showed me the SNAT IP address of router. > > > Could it be that compute node 1 is not properly configured with a > connection on the public network? Provider bridge, correct bridge mappings > and so on, and then the traffic falls back to centralized? > > > > Then I migrated the instance back to compute node 2, I had ping drops for > 20 seconds and then the instance came back. I have seen the IP via curl, it > showed the floating IP that I have nated with instance. > > Is it the expected behavior ? > > Ammad > > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Tue Oct 12 12:06:28 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Tue, 12 Oct 2021 14:06:28 +0200 Subject: [nova & libvirt] about attach disk on aarch64 In-Reply-To: References: Message-ID: <6538214e-645b-c45b-fecf-401aeb508ef6@linaro.org> On 12.10.2021 at 10:31, Jaze Lee wrote: > Hello, > We run stein openstack in our environment. And already set libvirt > value in nova.conf > hw_machine_type=aarch64=virt > num_pcie_ports = 15 > > We test and find sometimes disks can not be attached correctly. > For example, > Built vm with 6 disks, only three disk be there. The others will be > inactive in virsh. > No obvious error can be found in nova-compute, libvirt, vm os logs. > > libvirt:5.0.0 > qemu:2.12.0-44 > openstack-nova: 19.3.2 > librbd1:ibrbd1-14.2.16-1.el7.aarch64 > > Any Suggestions? Update to Wallaby on top of CentOS Stream 8? That will get you a whole stack update. Stein is not supported anymore. And I wonder whether anyone supports AArch64 on CentOS 7 (RHEL 7 does not). 
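If an upgrade is not possible right away, it may also be worth checking (an assumption about the failure mode, not a confirmed diagnosis) that the guest really got the PCIe root ports that num_pcie_ports asked for, since on the aarch64 'virt' machine type every hot-plugged virtio disk needs a free pcie-root-port (the domain name is an example):

$ virsh dumpxml instance-00000001 | grep -c pcie-root-port

If that count is lower than expected, the extra disks have no slot to attach to.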
From dtantsur at redhat.com Tue Oct 12 12:29:53 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 12 Oct 2021 14:29:53 +0200 Subject: [stable][requirements][zuul] unpinned setuptools dependency on stable In-Reply-To: <20210924143600.yfjuxerlid52vlji@yuggoth.org> References: <6J4UZQ.VOBD0LVDTPUX1@est.tech> <20210924143600.yfjuxerlid52vlji@yuggoth.org> Message-ID: On Fri, Sep 24, 2021 at 4:40 PM Jeremy Stanley wrote: > On 2021-09-24 07:19:03 -0600 (-0600), Alex Schultz wrote: > [...] > > JFYI as I was looking into some other requirements issues > > yesterday, I hit this error with anyjson[0] 0.3.3 as well. It's > > used in a handful of projects[1] and there has not been a release > > since 2012[2] so this might be a problem in xena. I haven't > > checked the projects respective gates, but just want to highlight > > we'll probably have additional fallout from the setuptools change. > [...] > > Yes, we've also run into similar problems with pydot2 and > funcparserlib, and I'm sure there's plenty more of what is > effectively abandonware lingering in various projects' requirements > lists. The long and short of it is that people with newer versions > of SetupTools are going to be unable to install those, full stop. > The maintainers of some of them may be spurred to action and release > a new version, but in so doing may also drop support for older > interpreters we still test with on some stable branches (this was > the case with funcparserlib). > Apparently, suds-jurko has the same problem, breaking oslo.vmware [1] and thus cinder. Dmitry [1] https://review.opendev.org/c/openstack/oslo.vmware/+/813377 > > On the other hand, controlling what version of SetupTools others > have and use isn't always possible, unlike runtime dependencies, so > that really should be a solution of last resort. Making exceptions > to stable branch policy in unusual circumstances such as this seems > like a reasonable and more effective compromise. > -- > Jeremy Stanley > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraden at verisign.com Tue Oct 12 12:47:46 2021 From: abraden at verisign.com (Braden, Albert) Date: Tue, 12 Oct 2021 12:47:46 +0000 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: References: <7b85e6646792469aaa7e513ecfda8551@verisign.com> Message-ID: Thank you Michael, this is very helpful. Do you have any insight into why we don't experience this in Queens clusters? We aren't running a lock manager there either, and I haven't been able to duplicate the problem there. -----Original Message----- From: Michael Johnson Sent: Monday, October 11, 2021 4:24 PM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. You will need one of the Tooz supported distributed lock managers: Consul, Memcached, Redis, or zookeeper. 
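With Redis, for example, the designate.conf section would look something like this (the host and port are placeholders for your own Redis endpoint):

[coordination]
backend_url = redis://controller1:6379

The other Tooz drivers follow the same pattern with their own URL schemes, e.g. zookeeper:// or memcached://.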
Michael

On Mon, Oct 11, 2021 at 11:57 AM Braden, Albert wrote:
>
> After investigating further, I realized that we're not running redis, and I think that means that redis_connection_string doesn't get set. Does this mean that we must run redis, or is there a workaround?
>
> -----Original Message-----
> From: Braden, Albert
> Sent: Monday, October 11, 2021 2:48 PM
> To: 'johnsomor at gmail.com' 
> Cc: 'openstack-discuss at lists.openstack.org' 
> Subject: RE: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail
>
> I think so. I see this:
>
> ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }}
>
> ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes"
>
> Did anything change with the distributed lock manager between Queens and Train?
>
> -----Original Message-----
> From: Michael Johnson 
> Sent: Monday, October 11, 2021 1:15 PM
> To: Braden, Albert 
> Cc: openstack-discuss at lists.openstack.org
> Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail
>
> Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe.
>
> Hi Albert,
>
> Have you configured your distributed lock manager for Designate?
>
> [coordination]
> backend_url = 
>
> Michael
>
> On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote:
> >
> > Hello everyone. It's great to be back working on OpenStack again. I'm at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails!
> >
> > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying.
> >
> > Before applying the change, we see the DNS record in the recordset:
> >
> > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra
> > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE |
> > $
> >
> > and we can pull it from the DNS server on the controllers:
> >
> > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done
> > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89
> > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89
> > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89
> >
> > After applying the change, we don't see it:
> >
> > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra
> > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE |
> > $
> > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done
> > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra
> > $
> >
> > We see this in the logs:
> >
> > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'")
> > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)]
> > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}]
> >
> > It appears that Designate is trying to create the new record before the deletion of the old one finishes.
> >
> > Is anyone else seeing this on Train? The same set of actions doesn't cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones?

From iurygregory at gmail.com  Tue Oct 12 12:48:51 2021
From: iurygregory at gmail.com (Iury Gregory)
Date: Tue, 12 Oct 2021 14:48:51 +0200
Subject: [ironic] No weekly meeting on Oct18
Message-ID: 

Hello ironicers!

Just a reminder that on Oct 18, we won't have our weekly meeting because we have a session in the PTG.

--
*Att[]'s*
*Iury Gregory Melo Ferreira*
*MSc in Computer Science at UFCG*
*Part of the ironic-core and puppet-manager-core team in OpenStack*
*Software Engineer at Red Hat Czech*
*Social*: https://www.linkedin.com/in/iurygregory
*E-mail: iurygregory at gmail.com*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Tue Oct 12 13:03:11 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 12 Oct 2021 13:03:11 +0000
Subject: [Xena] It works!
In-Reply-To: 
References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> <20211011142008.heixx43ckfsaoyd4@yuggoth.org> <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> <17c70bc2a63.ede9811c850136.3432835365508929383@ghanshyammann.com> 
Message-ID: <20211012130310.iqm4d3zyqz2cvrso@yuggoth.org>

On 2021-10-12 08:44:55 +0200 (+0200), tjoen wrote:
> On 10/11/21 21:02, Ghanshyam Mann wrote:
[...]
> > Now cryptography 35.0.0 is used for current master branch
> > testing (non voting py3.9).
>
> With openssl-3 I hope

I don't think any of the LTS distributions we use for testing
(CentOS, Ubuntu) have OpenSSL 3.x packages available. Even in
Debian, unstable is still using 1.1.1l-1 while 3.0.0-1 is only
available from experimental.
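As a quick way to see which OpenSSL a given test node's Python is actually built against (a generic snippet; the output line is only illustrative):

```
$ python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"
OpenSSL 1.1.1l  24 Aug 2021
```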
We may start running some tests with OpenSSL 3.x versions once they
begin to appear in Debian/testing or a new Fedora version, but
widespread testing with it likely won't happen until we add CentOS 9
Stream or Ubuntu 22.04 LTS (assuming one of those provides it once
they exist).
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From rigault.francois at gmail.com  Tue Oct 12 14:03:58 2021
From: rigault.francois at gmail.com (Francois)
Date: Tue, 12 Oct 2021 16:03:58 +0200
Subject: [neutron] OVN and dynamic routing
Message-ID: 

Hello Neutron!
I am looking into running stacks with OVN on a leaf-spine network, and
have some floating IPs routed between racks.

Basically each rack is assigned its own set of subnets.
Some VLANs are stretched across all racks: the provisioning VLAN used
by tripleo to deploy the stack, and the VLANs for the controllers' API
IPs. However, each tenant subnet is local to a rack: for example each
OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its
own rack. Traffic between 2 racks is sent to a spine, and leaves and
spines run some eVPN-like thing: each pair of ToR is a vtep, traffic
is encapsulated as VXLAN, and routes between vteps are exchanged with
BGP.

I am looking into supporting floating IPs in there: I expect floating
IPs to be able to move between racks, as such I am looking into
publishing the route for a FIP towards a hypervisor, through BGP.
Each fip is a /32 subnet with a hypervisor's tenant IP as next hop.

It seems there are several ideas to achieve this (it was discussed
[before][1] at the OVS conference)
- using [neutron-dynamic-routing][2] - that seems to have some gaps
for OVN. It uses os-ken to talk to switches and exchange routes
- using [OVN BGP agent][3] that uses FRR, it seems there is a related
[RFE][4] for integration in tripleo

There is btw also a [BGPVPN][5] project (it does not match my usecase
as far as I tried to understand it) that also has some code that talks
BGP to switches, already integrated in tripleo.

For my tests, I was able to use the neutron-dynamic-routing project
(almost) as documented, with a few changes:
- for traffic going from VMs to outside the stack, the hypervisor was
trying to resolve the "gateway of fIPs" with ARP requests, which does
not make any sense. I created a dummy port with the mac address of the
virtual router of the switches:
```
$ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml
- Fixed IP Addresses:
  - ip_address: 10.64.254.1
    subnet_id: 8f37
  ID: 4028
  MAC Address: 00:1c:73:00:00:11
  Name: lagw
  Status: DOWN
```
this prevents the hypervisor from sending ARP requests to a non-existent gateway
- for traffic coming back, we start the neutron-bgp-dragent agent on
the controllers. We create the right bgp speaker, peers, etc.
- neutron-bgp-dragent seems to work primarily with the ovs ml2 plugin: it
selects fips and joins them with ports owned by a "floatingip_agent_gateway",
which does not exist on OVN.
We can define some ports ourselves so
that the dragent is able to find the tenant IP of a host:
```
openstack port create --network provider --device-owner
network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip
ip-address=10.64.245.102 ag2
```
- when creating a floating IP and assigning a port to it, Neutron
reads changes from OVN SB and fills the binding information into the
port:
```
$ openstack port show -c binding_host_id `openstack floating ip show
10.64.254.177 -f value -c port_id`
+-----------------+----------------------------------------+
| Field           | Value                                  |
+-----------------+----------------------------------------+
| binding_host_id | cpu35d.cloud                           |
+-----------------+----------------------------------------+
```
this allows the dragent to publish the route for the fip
```
$ openstack bgp speaker list advertised routes bgpspeaker
+------------------+---------------+
| Destination      | Nexthop       |
+------------------+---------------+
| 10.64.254.177/32 | 10.64.245.102 |
+------------------+---------------+
```
- traffic reaches the hypervisor but (for a reason I don't understand) I
had to add a rule
```
$ ip rule
0:      from all lookup local
32765:  from all iif vlan1234 lookup ovn
32766:  from all lookup main
32767:  from all lookup default
$ ip route show table ovn
10.64.254.177 dev vlan1234 scope link
```
so that the traffic coming for the fip is not immediately discarded by
the hypervisor (it's not an ideal solution but it is a workaround that
makes my one fIP work!)

So all in all it seems it would be possible to use the
neutron-dynamic-routing agent, with some minor modifications (eg: to
also publish the fip of the OVN L3 gateway router).

I am wondering whether I have overlooked anything, and if this kind of
deployment (OVN + neutron dynamic routing or similar) is already in
use somewhere. Does it make sense to have an RFE for better integration
between OVN and neutron-dynamic-routing?

Thanks
Francois

[1]: https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf
[2]: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html
[3]: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/
[4]: https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html
[5]: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml

From dpeacock at redhat.com  Tue Oct 12 14:31:56 2021
From: dpeacock at redhat.com (David Peacock)
Date: Tue, 12 Oct 2021 10:31:56 -0400
Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command
In-Reply-To: 
References: 
Message-ID: 

Hi Anirudh,

You're hitting a known bug that we're in the process of propagating a fix for; sorry for this. :-)

As per a patch we have under review, use the inventory file located under ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml. To generate an inventory file, use the playbook in "tripleo-ansible: cli-config-download.yaml".

https://review.opendev.org/c/openstack/tripleo-validations/+/813535

Let us know if this doesn't put you on the right track.
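For example, a hypothetical invocation along these lines (the exact option name for passing the inventory can vary by release, so treat this as a sketch rather than the authoritative CLI):

```
$ openstack tripleo validator run --group pre-introspection \
    --inventory ~/tripleo-deploy/tripleo-ansible-inventory.yaml
```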
Thanks, David On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta wrote: > Hi Team, > > I am installing Tripleo using the below link > > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html > > In the Introspect section, When I executed the command > openstack tripleo validator run --group pre-introspection > > I got the following error: > > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | UUID | Validations | > Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | > > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu | > PASSED | localhost | localhost | | 0:00:01.261 | > | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space | > PASSED | localhost | localhost | | 0:00:04.480 | > | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram | > PASSED | localhost | localhost | | 0:00:02.173 | > | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode | > PASSED | localhost | localhost | | 0:00:01.546 | > | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway | > FAILED | undercloud | No host matched | | | > | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space | > FAILED | undercloud | No host matched | | | > | 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check | > FAILED | undercloud | No host matched | | | > | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range | > FAILED | undercloud | No host matched | | | > | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection | > FAILED | undercloud | No host matched | | | > | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush | > FAILED | undercloud | No host matched | | | > > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > > > Then I created the following inventory file: > [Undercloud] > undercloud > > Passed this command while running the pre-introspection command. > It then executed successfully. 
> But with Pre-deployment, it is still failing even after passing the
> inventory
>
> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+
> | UUID                                 | Validations                         | Status | Host_Group   | Status_by_Host  | Unreachable_Hosts | Duration    |
> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+
> | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e                                | PASSED | localhost    | localhost       |                   | 0:00:00.504 |
> | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns                                 | PASSED | localhost    | localhost       |                   | 0:00:00.481 |
> | 93611c13-49a2-4cae-ad87-099546459481 | service-status                      | PASSED | all          | undercloud      |                   | 0:00:06.942 |
> | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux                    | PASSED | all          | undercloud      |                   | 0:00:02.433 |
> | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version                     | FAILED | all          | undercloud      |                   | 0:00:03.576 |
> | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed              | PASSED | undercloud   | undercloud      |                   | 0:00:02.850 |
> | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed         | FAILED | allovercloud | No host matched |                   |             |
> | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment       | FAILED | undercloud   | undercloud      |                   | 0:00:31.559 |
> | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug                    | FAILED | undercloud   | undercloud      |                   | 0:00:02.057 |
> | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | collect-flavors-and-verify-profiles | FAILED | undercloud   | undercloud      |                   | 0:00:00.884 |
> | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted       | FAILED | undercloud   | undercloud      |                   | 0:00:02.138 |
> | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count            | PASSED | undercloud   | undercloud      |                   | 0:00:06.164 |
> | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count                  | FAILED | undercloud   | undercloud      |                   | 0:00:00.934 |
> | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning                   | FAILED | undercloud   | undercloud      |                   | 0:00:02.456 |
> | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration           | FAILED | undercloud   | undercloud      |                   | 0:00:00.882 |
> | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment                 | FAILED | undercloud   | undercloud      |                   | 0:00:00.880 |
> | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks                          | FAILED | undercloud   | undercloud      |                   | 0:00:01.934 |
> | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans                        | FAILED | undercloud   | undercloud      |                   | 0:00:01.931 |
> | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding                     | PASSED | all          | undercloud      |                   | 0:00:00.366 |
> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+
>
> Also this step of passing the inventory file is not mentioned anywhere in
> the document. Is there anything I am missing?
>
> Regards
> Anirudh Gupta
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mbultel at redhat.com  Tue Oct 12 15:02:07 2021
From: mbultel at redhat.com (Mathieu Bultel)
Date: Tue, 12 Oct 2021 17:02:07 +0200
Subject: [TripleO] Issue in running Pre-Introspection
In-Reply-To: 
References: 
Message-ID: 

Hi,

Which release are you using?
You have to provide a valid inventory file via the openstack CLI in order
to allow the Validation Framework to know which hosts and IPs to use.
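As a sketch, a minimal static inventory of that shape could look like the following (the IP and user are placeholders for a typical undercloud, not values from this thread):

```
[Undercloud]
undercloud ansible_host=192.168.24.1 ansible_user=stack ansible_become=true
```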
Mathieu On Fri, Oct 1, 2021 at 5:17 PM Anirudh Gupta wrote: > Hi Team,, > > Upon further debugging, I found that pre-introspection internally calls > the ansible playbook located at path /usr/share/ansible/validation-playbooks > File "dhcp-introspection.yaml" has hosts mentioned as undercloud. > > - hosts: *undercloud* > become: true > vars: > ... > ... > > > But the artifacts created for dhcp-introspection at > location /home/stack/validations/artifacts/_dhcp-introspection.yaml_2021-10-01T11 > has file *hosts *present which has *localhost* written into it as a > result of which when command gets executed it gives the error *"Could not > match supplied host pattern, ignoring: undercloud:"* > > Can someone suggest how is this artifacts written in tripleo and the way > we can change hosts file entry to undercloud so that it can work > > Similar is the case with other tasks > like undercloud-tokenflush, ctlplane-ip-range etc > > Regards > Anirudh Gupta > > On Wed, Sep 29, 2021 at 4:47 PM Anirudh Gupta wrote: > >> Hi Team, >> >> I tried installing Undercloud using the below link: >> >> >> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud >> >> I am getting the following error: >> >> (undercloud) [stack at undercloud ~]$ openstack tripleo validator run >> --group pre-introspection >> Selected log directory '/home/stack/validations' does not exist. >> Attempting to create it. >> >> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >> | UUID | Validations >> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | >> >> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >> | 7029c1f6-5ab4-465d-82d7-3f29058012ce | check-cpu >> | PASSED | localhost | localhost | | 0:00:02.531 | >> | db059017-30f1-4b97-925e-3f55b586d492 | check-disk-space >> | PASSED | localhost | localhost | | 0:00:04.432 | >> | e23dd9a1-90d3-4797-ae0a-b43e55ab6179 | check-ram >> | PASSED | localhost | localhost | | 0:00:01.324 | >> | 598ca02d-258a-44ad-b78d-3877321cdfe6 | check-selinux-mode >> | PASSED | localhost | localhost | | 0:00:01.591 | >> | c4435b4c-b432-4a1e-8a99-00638034a884 | *check-network-gateway >> | FAILED* | undercloud | *No host matched* | | >> | >> | cb1eed23-ef2f-4acd-a43a-86fb09bf0372 | *undercloud-disk-space >> | FAILED* | undercloud | *No host matched* | | >> | >> | abde5329-9289-4b24-bf16-c4d82b03e67a | *undercloud-neutron-sanity-check >> | FAILED* | undercloud | *No host matched* | | >> | >> | d0e5fdca-ece6-4a37-b759-ed1fac31a10f | *ctlplane-ip-range >> | FAILED* | undercloud | No host matched | | >> | >> | 91511807-225c-4852-bb52-6d0003c51d49 | *dhcp-introspection >> | FAILED* | undercloud | No host matched | | >> | >> | e96f7704-d2fb-465d-972b-47e2f057449c |* undercloud-tokenflush >> | FAILED *| undercloud | No host matched | | >> | >> >> >> As per the validation link, >> >> https://docs.openstack.org/tripleo-validations/wallaby/validations-pre-introspection-details.html >> >> check-network-gateway >> >> If gateway in undercloud.conf is different from local_ip, verify that >> the gateway exists and is reachable >> >> Observation - In my case IP specified in local_ip and gateway, both are >> pingable, but still this error is being observed >> >> >> ctlplane-ip-range? 
>> Check the number of IP addresses available for the overcloud nodes.
>>
>> Verify that the number of IP addresses defined in dhcp_start and dhcp_end fields
>> in undercloud.conf is not too low.
>>
>>    -
>>
>>    ctlplane_iprange_min_size: 20
>>
>> Observation - In my case I have defined more than 20 IPs
>>
>>
>> Similarly for the disk related issue, I have dedicated 100 GB space in /var
>> and /
>>
>> Filesystem           Size  Used Avail Use% Mounted on
>> devtmpfs              12G     0   12G   0% /dev
>> tmpfs                 12G   84K   12G   1% /dev/shm
>> tmpfs                 12G  8.7M   12G   1% /run
>> tmpfs                 12G     0   12G   0% /sys/fs/cgroup
>> /dev/mapper/cl-root  100G  2.5G   98G   3% /
>> /dev/mapper/cl-home   47G  365M   47G   1% /home
>> /dev/mapper/cl-var   103G  1.1G  102G   2% /var
>> /dev/vda1            947M  200M  747M  22% /boot
>> tmpfs                2.4G     0  2.4G   0% /run/user/0
>> tmpfs                2.4G     0  2.4G   0% /run/user/1000
>>
>> Despite setting all the parameters, still I am not able to pass
>> pre-introspection checks. *"NO Host Matched" *is found in the table.
>>
>> Regards
>> Anirudh Gupta
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jing.c.zhang at nokia.com  Tue Oct 12 14:23:07 2021
From: jing.c.zhang at nokia.com (Zhang, Jing C. (Nokia - CA/Ottawa))
Date: Tue, 12 Oct 2021 14:23:07 +0000
Subject: [Octavia] Can not create LB on SRIOV network
In-Reply-To: 
References: 
Message-ID: 

Hi Michael,

The nova log does not have any other error besides "port-bind-failure...check neutron log"...as if you manually attempt to attach a VM to an SRIOV provider network without using the direct port type.

Jing

-----Original Message-----
From: Michael Johnson 
Sent: Monday, October 11, 2021 2:00 PM
To: Sean Mooney 
Cc: Zhang, Jing C. (Nokia - CA/Ottawa) ; openstack-discuss at lists.openstack.org
Subject: Re: [Octavia] Can not create LB on SRIOV network

Ah, so that is probably the issue. Nova doesn't support the interface attach for SRIOV in Train. We do currently require that the port be hot plugged after boot.

I would still be interested in seeing the log messages, just to confirm that is the issue or if we have other work to do.

The vnic_type=direct should not be an issue as the port is being passed into Octavia pre-created. I think it was already mentioned that the port was successful when used during boot via the --nic option.

Thanks for the pointer Sean.

Michael

On Mon, Oct 11, 2021 at 10:22 AM Sean Mooney wrote:
>
> On Mon, Oct 11, 2021 at 6:12 PM Michael Johnson wrote:
> >
> > Interesting, thank you for trying that out.
> >
> > We call the nova "interface_attach" and pass in the port_id you
> > provided on the load balancer create command line.
> >
> > In the worker log, above the "tree" log lines, is there another
> > ERROR log line that includes the exception returned from nova?
> until very recently nova did not support interface attach for sriov interfaces.
> https://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/sriov-interface-attach-detach.html
> today we do allow it but we do not guarantee it will work.
>
> if there are not enough pci slots in the vm or there are not enough VFs
> on the host that are attached to the correct physnet the attach will
> fail.
> the most common reason the attach fails is either that numa affinity cannot
> be achieved or there is an issue in the guest/qemu: the guest kernel needs
> to respond to the hotplug event when qemu tries to add the device; if it
> does not, it will fail.
>
> keeping all of that in mind, for sriov attach to work octavia will have
> to create the port with vnic_type=direct or one of the other valid
> options like macvtap or direct physical.
> you cannot attach an sriov device that can be used with octavia using
> flavor extra specs.
>
> >
> > Also, I would be interested to see what nova logged as to why it was
> > unable to attach the port. That may be in the main nova logs, or
> > possibly on the compute host nova logs.
> >
> > Michael
> >
> > On Thu, Oct 7, 2021 at 5:36 PM Zhang, Jing C. (Nokia - CA/Ottawa)
> > wrote:
> > >
> > > Hi Michael,
> > >
> > > I made a mistake when creating the VM manually, I should use the --nic option, not the --network option. After correcting that, I can create a VM with the extra-flavor:
> > >
> > > $ openstack server create --flavor octavia-flavor --image Centos7
> > > --nic port-id=test-port --security-group demo-secgroup --key-name
> > > demo-key test-vm
> > >
> > > $ nova list --all --fields name,status,host,networks | grep
> > > test-vm
> > > | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149
> > >
> > > A 2nd VF interface is seen inside the VM:
> > >
> > > [centos at test-vm ~]$ ip a
> > > ...
> > > 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000
> > >    link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff
> > >
> > > This MAC is not seen by neutron though:
> > >
> > > $ openstack port list | grep 0a:b2:d4:85:a2:e6
> > >
> > > [empty]
> > >
> > > =====================
> > > However, when I tried to create a LB with the same VM flavor, it failed at the same place as before.
> > >
> > > Looking at worker.log, it seems the error is similar to using the --network option to create the VM manually. But you are the expert.
> > >
> > > "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52"
> > >
> > > Here is the full list of command lines:
> > >
> > > $ openstack flavor list | grep octavia-flavor
> > > | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True |
> > >
> > > openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}'
> > > openstack loadbalancer flavor create --name of1 --flavorprofile
> > > ofp1 --enable
> > > openstack loadbalancer create --name lb1 --flavor
> > > of1 --vip-port-id test-port --vip-subnet-id ext-subnet1
> > >
> > > |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52.
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last):
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker     result = task.execute(**arguments)
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker     loadbalancer, loadbalancer.vip, amphora, subnet)
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker     interface = self._plug_amphora_vip(amphora, subnet)
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker   File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker     raise base.PlugVIPException(message)
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52.
> > > 2021-10-08 00:19:26.497 71 ERROR
> > > octavia.controller.worker.v1.controller_worker
> > >
> > > -----Original Message-----
> > > From: Zhang, Jing C. (Nokia - CA/Ottawa)
> > > Sent: Thursday, October 7, 2021 6:18 PM
> > > To: Michael Johnson 
> > > Cc: openstack-discuss at lists.openstack.org
> > > Subject: RE: [Octavia] Can not create LB on SRIOV network
> > >
> > > Hi Michael,
> > >
> > > Thank you so much for the information.
> > >
> > > I tried the extra-flavor workaround; I can not use it to create a VM in the Train release. I suspect this old extra-flavor is too old, but I did not dig further.
> > >
> > > However, both the Train and latest nova specs still show the above extra-flavor with the old whitelist format:
> > > https://docs.openstack.org/nova/train/admin/pci-passthrough.html
> > > https://docs.openstack.org/nova/latest/admin/pci-passthrough.html
> > >
> > > =========================
> > > Here is the detail:
> > >
> > > Env: NIC is intel 82599, creating a VM with an SRIOV direct port works well.
> > >
> > > Nova.conf
> > >
> > > passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"}
> > > passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"}
> > >
> > > Sriov_agent.ini
> > >
> > > [sriov_nic]
> > > physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1
> > >
> > > (1) Added the alias in nova.conf for nova-compute and nova-api, and restarted the two nova components:
> > >
> > > alias = { "vendor_id":"8086", "product_id":"10ed",
> > > "device_type":"type-VF", "name":"vf", "numa_policy": "required" }
> > >
> > > (2) Used the extra-spec in the nova flavor
> > >
> > > openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1"
> > >
> > > (3) Failed to create a VM with this flavor; the sriov agent log does not show
> > > a port event, and creating the LB also failed with
> > > PortBindingFailed
> > >
> > > (4) Tried multiple formats to add the whitelist for PF and VF in
> > > nova.conf for nova-compute, and retried, still failed
> > >
> > > passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF
> > > passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF
> > > The sriov agent log does not show a port event for any of them.
> > >
> > > -----Original Message-----
> > > From: Michael Johnson 
> > > Sent: Wednesday, October 6, 2021 4:48 PM
> > > To: Zhang, Jing C. (Nokia - CA/Ottawa) 
> > > Cc: openstack-discuss at lists.openstack.org
> > > Subject: Re: [Octavia] Can not create LB on SRIOV network
> > >
> > > Hi Jing,
> > >
> > > To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1].
> > >
> > > It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV.
> > >
> > > You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port.
> > > This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members.
> > >
> > > I have not tried this and would be interested to hear if it works for you.
> > >
> > > If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG.
> > >
> > > Michael
> > >
> > > [1] https://wiki.openstack.org/wiki/Octavia/Roadmap
> > > [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html
> > > [3] https://docs.openstack.org/octavia/latest/admin/flavors.html
> > > [4] https://etherpad.opendev.org/p/yoga-ptg-octavia
> > >
> > > On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote:
> > > >
> > > > I can not create an Octavia LB on an SRIOV network in Train. I went to the Octavia story board, did a search but was unable to find the story for SRIOV.
> > > >
> > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer.
> > > >
> > > > Thank you so much
> > > >
> > > > Jing
> > > >
> > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV
> > > > Interface Config Guide (Openstack)
> > > >
> > > > Hi,
> > > > In the Openstack Train release, creating an Octavia LB on an SRIOV network fails.
> > > > I come here to search if there is already a plan to add this support, and see this story.
> > > > This story gives the impression that the capability is already supported; it is a matter of adding a user guide.
> > > > So, my question is, in which Openstack release is creating a LB on an SRIOV network supported?
> > > > Thank you

From johnsomor at gmail.com  Tue Oct 12 15:32:43 2021
From: johnsomor at gmail.com (Michael Johnson)
Date: Tue, 12 Oct 2021 08:32:43 -0700
Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail
In-Reply-To: 
References: <7b85e6646792469aaa7e513ecfda8551@verisign.com> 
Message-ID: 

I don't have a good answer for you on that as it pre-dates my history
with Designate a bit. I suspect it has to do with the removal of the
pool-manager and the restructuring of the controller code.

Maybe someone else on the discuss list has more insight.

Michael

On Tue, Oct 12, 2021 at 5:47 AM Braden, Albert wrote:
>
> Thank you Michael, this is very helpful. Do you have any insight into why we don't experience this in Queens clusters? We aren't running a lock manager there either, and I haven't been able to duplicate the problem there.

From tonyliu0592 at hotmail.com  Tue Oct 12 16:04:55 2021
From: tonyliu0592 at hotmail.com (Tony Liu)
Date: Tue, 12 Oct 2021 16:04:55 +0000
Subject: [neutron] OVN and dynamic routing
In-Reply-To: 
References: 
Message-ID: 

I wonder, since there is already VXLAN-based L2 across multiple racks
(which means you are not looking for a pure L3 solution),
while keeping the tenant network multi-subnet on L3 for EW traffic,
why not have the external network also on L2 and stretched across multiple
racks for NS traffic, assuming you are using distributed FIP?

Thanks!
Tony
________________________________________
From: Francois 
Sent: October 12, 2021 07:03 AM
To: openstack-discuss
Subject: [neutron] OVN and dynamic routing

Hello Neutron!
I am looking into running stacks with OVN on a leaf-spine network, and
have some floating IPs routed between racks.
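As a reference for the "bgp speaker, peers, etc." step mentioned in this thread, the neutron-dynamic-routing setup usually boils down to a few CLI calls along these lines (the AS numbers, peer address and network name are placeholders, not values from this thread):

```
$ openstack bgp speaker create --local-as 64999 --ip-version 4 bgpspeaker
$ openstack bgp peer create --peer-ip 10.64.0.1 --remote-as 64998 tor1
$ openstack bgp speaker add peer bgpspeaker tor1
$ openstack bgp speaker add network bgpspeaker provider
```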
From openstack at nemebean.com  Tue Oct 12 16:18:11 2021
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 12 Oct 2021 11:18:11 -0500
Subject: [KEYSTONE][POLICIES] - Overrides that don't work?
In-Reply-To: 
References: 
Message-ID: 

Probably. I'm not an expert on writing Keystone policies so I can't
promise anything. :-)

However, I'm fairly confident that if you get a properly scoped token it
will get you past your current error. Anything beyond that would be a
barely educated guess on my part.

On 10/11/21 12:18 PM, Gaël THEROND wrote:
> Hi Ben! Thanks a lot for the answer!
>
> OK, I'll take a look at that, but if I understand correctly, a user with
> the project-admin role attached with a domain scope should be able to
> add users to a group once the policy is updated, right?
>
> Once again thanks a lot for your answer!
>
> On Mon, Oct 11, 2021 at 17:25, Ben Nemec  wrote:
>
>     I don't believe it's possible to override the scope of a policy
>     rule. In
>     this case it sounds like the user should request a domain-scoped token
>     to perform this operation. For details on how to do that, see
>     https://docs.openstack.org/keystone/wallaby/admin/tokens-overview.html#authorization-scopes
>
>     On 10/6/21 7:52 AM, Gaël THEROND wrote:
>     > Hi team,
>     >
>     > I'm having a weird behavior with my Openstack platform that makes me
>     > think I may have misunderstood some mechanisms on the way policies are
>     > working and especially the overriding.
>     >
>     > So, long story short, I have a few services that get custom policies,
>     > such as glance, that behave as expected; Keystone's don't.
>     >
>     > All in all, here is what I'm understanding of the mechanism:
>     >
>     > This is the keystone policy that I'm looking to override:
>     > https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/
>     >
>     > This policy default can be found here:
>     > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197
>     >
>     > Here is the policy that I'm testing:
>     > https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/
>     >
>     > I know, this policy isn't taking care of the admin role but it's
>     > not the point.
>     >
>     > From my understanding, any user with the project-manager role should be
>     > able to add any available user to any available group as long as the
>     > project-manager domain is the same as the target.
>     >
>     > However, when I'm doing that, keystone complains that I'm not authorized
>     > to do so because the user token scope is 'PROJECT' where it should be
>     > 'SYSTEM' or 'DOMAIN'.
>     >
>     > Now, I wouldn't be surprised by that message being thrown out with the
>     > default policy as it's stated in the code with the following:
>     > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197
>     >
>     > So the question is, if the custom policy doesn't override the default
>     > scope_types, how am I supposed to make it work?
>     >
>     > I hope it was clear enough, but if not, feel free to ask me for more
>     > information.
>     >
>     > PS: I've tried to assign this role with a domain scope to my user and
>     > I still have the same issue.
>     >
>     > Thanks a lot everyone!
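To illustrate the domain-scoped token suggestion above, a generic sketch with placeholder credentials (none of these values come from the thread): setting a domain scope, and no project scope, makes keystone issue a domain-scoped token.

```
$ export OS_USERNAME=project-manager-user OS_PASSWORD=secret
$ export OS_USER_DOMAIN_NAME=mydomain
$ export OS_DOMAIN_NAME=mydomain     # requests a domain scope
$ unset OS_PROJECT_NAME OS_PROJECT_DOMAIN_NAME
$ openstack token issue
```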
From ignaziocassano at gmail.com  Tue Oct 12 16:38:24 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 12 Oct 2021 18:38:24 +0200
Subject: [kolla-ansible][neutron] configuration question
Message-ID: 

Hello Everyone,
I need to know if it is possible to configure kolla neutron ovs with more
than one bridge mapping, for example:

bridge_mappings = physnet1:br-ex,physnet2:br-ex2

I figured out that in the standard configuration the ansible playbook
creates br-ex and adds the interface given in the variable
"neutron_external_interface" under br-ex.

What can I do if I want more than one bridge?

How can the kolla ansible playbook help in this case?

I could use multiple bridges in the /etc/kolla/config neutron configuration
files, but I do not know how the ansible playbook can do the job, because I
do not see any variable that can help me in /etc/kolla/globals.yml
Thanks

Ignazio

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com  Tue Oct 12 17:40:51 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 12 Oct 2021 19:40:51 +0200
Subject: [kolla-ansible][neutron] configuration question
In-Reply-To: 
References: 
Message-ID: 

Reading this bug:
https://bugs.launchpad.net/kolla-ansible/+bug/1626259
it seems to be only about documentation, so it must work. Right?
Ignazio

On Tue, Oct 12, 2021 at 18:38 Ignazio Cassano  wrote:

> Hello Everyone,
> I need to know if it is possible to configure kolla neutron ovs with more
> than one bridge mapping, for example:
>
> bridge_mappings = physnet1:br-ex,physnet2:br-ex2
>
> I figured out that in the standard configuration the ansible playbook
> creates br-ex and adds the interface given in the variable
> "neutron_external_interface" under br-ex.
>
> What can I do if I want more than one bridge?
>
> How can the kolla ansible playbook help in this case?
>
> I could use multiple bridges in the /etc/kolla/config neutron
> configuration files, but I do not know how the ansible playbook can do the
> job, because I do not see any variable that can help me in
> /etc/kolla/globals.yml
> Thanks
>
> Ignazio
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
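On the multiple-bridge question above: kolla-ansible accepts comma-separated lists for the external interface and bridge name variables, which is what the referenced bug was about documenting. A sketch, assuming two spare NICs named eth1 and eth2 (the interface names are placeholders):

```
# /etc/kolla/globals.yml
neutron_external_interface: "eth1,eth2"
neutron_bridge_name: "br-ex,br-ex2"
```

The matching physnet-to-bridge mapping can then be adjusted, if needed, through a config override under /etc/kolla/config/neutron/.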
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/spine_leaf_networking/index On Tue, 12 Oct 2021 at 18:04, Tony Liu wrote: > > I wonder, since there is already VXLAN based L2 across multiple racks > (which means you are not looking for pure L3 solution), > while keep tenant network multi-subnet on L3 for EW traffic, > why not have external network also on L2 and stretched on multiple racks > for NS traffic, assuming you are using distributed FIP? > > Thanks! > Tony > ________________________________________ > From: Francois > Sent: October 12, 2021 07:03 AM > To: openstack-discuss > Subject: [neutron] OVN and dynamic routing > > Hello Neutron! > I am looking into running stacks with OVN on a leaf-spine network, and > have some floating IPs routed between racks. > > Basically each rack is assigned its own set of subnets. > Some VLANs are stretched across all racks: the provisioning VLAN used > by tripleo to deploy the stack, and the VLANs for the controllers API > IPs. However, each tenant subnet is local to a rack: for example each > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > own rack. Traffic between 2 racks is sent to a spine, and leaves and > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > is encapsulated as VXLAN, and routes between vteps are exchanged with > BGP. > > I am looking into supporting floating IPs in there: I expect floating > IPs to be able to move between racks, as such I am looking into > publishing the route for a FIP towards an hypervisor, through BGP. > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. > > It seems there are several ideas to achieve this (it was discussed > [before][1] in ovs conference) > - using [neutron-dynamic-routing][2] - that seems to have some gaps > for OVN. It uses os-ken to talk to switches and exchange routes > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > [RFE][4] for integration in tripleo > > There is btw also a [BGPVPN][5] project (it does not match my usecase > as far as I tried to understand it) that also has some code that talks > BGP to switches, already integrated in tripleo. > > For my tests, I was able to use the neutron-dynamic-routing project > (almost) as documented, with a few changes: > - for traffic going from VMs to outside the stack, the hypervisor was > trying to resolve the "gateway of fIPs" with ARP request which does > not make any sense. I created a dummy port with the mac address of the > virtual router of the switches: > ``` > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > - Fixed IP Addresses: > - ip_address: 10.64.254.1 > subnet_id: 8f37 > ID: 4028 > MAC Address: 00:1c:73:00:00:11 > Name: lagw > Status: DOWN > ``` > this prevent the hypervisor to send ARP requests to a non existent gateway > - for traffic coming back, we start the neutron-bgp-dragent agent on > the controllers. We create the right bgp speaker, peers, etc. > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > selects fips and join with ports owned by a "floatingip_agent_gateway" > which does not exist on OVN. 
We can define ourselves some ports so > that the dragent is able to find the tenant IP of a host: > ``` > openstack port create --network provider --device-owner > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > ip-address=10.64.245.102 ag2 > ``` > - when creating a floating IP and assigning a port to it, Neutron > reads changes from OVN SB and fills the binding information into the > port: > ``` > $ openstack port show -c binding_host_id `openstack floating ip show > 10.64.254.177 -f value -c port_id` > +-----------------+----------------------------------------+ > | Field | Value | > +-----------------+----------------------------------------+ > | binding_host_id | cpu35d.cloud | > +-----------------+----------------------------------------+ > ``` > this allows the dragent to publish the route for the fip > ``` > $ openstack bgp speaker list advertised routes bgpspeaker > +------------------+---------------+ > | Destination | Nexthop | > +------------------+---------------+ > | 10.64.254.177/32 | 10.64.245.102 | > +------------------+---------------+ > ``` > - traffic reaches the hypervisor but (for reason I don't understand) I > had to add a rule > ``` > $ ip rule > 0: from all lookup local > 32765: from all iif vlan1234 lookup ovn > 32766: from all lookup main > 32767: from all lookup default > $ ip route show table ovn > 10.64.254.177 dev vlan1234 scope link > ``` > so that the traffic coming for the fip is not immediately discarded by > the hypervisor (it's not an ideal solution but it is a workaround that > makes my one fIP work!) > > So all in all it seems it would be possible to use the > neutron-dynamic-routing agent, with some minor modifications (eg: to > also publish the fip of the OVN L3 gateway router). > > I am wondering whether I have overlooked anything, and if such kind of > deployment (OVN + neutron dynamic routing or similar) is already in > use somewhere. Does it make sense to have a RFE for better integration > between OVN and neutron-dynamic-routing? > > Thanks > Francois > > > > > [1]: https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > [2]: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html > [3]: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ > [4]: https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html > [5]: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml > From piotrmisiak1984 at gmail.com Tue Oct 12 19:40:26 2021 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Tue, 12 Oct 2021 21:40:26 +0200 Subject: [neutron] OVN and dynamic routing In-Reply-To: References: Message-ID: You dont need a stretched provisioning network when you setup a DHCP relay :) IMO the L2 external network in Neutron is a major issue in OpenStack scaling. I?d love to see a BGP support in OVN and OVN neutron plugin. On Tue, 12 Oct 2021 at 21:28 Francois wrote: > (yes we are using distributed fips) Well we don't want stretched > VLANs. However... if we followed the doc we would end up with 3 > controllers in the same rack which would not be resilient. Since we > have just 3 controllers plugged on specific, identified ports, we > afford to have a stretched VLAN on these few ports only. 
For the > provisioning network, I am taking a shortcut since this network should > basically only be needed once in a while for stack upgrades and > nothing interesting (like mac addresses moving) happens there. The > data plane traffic, that needs scalability and resiliency, is not > going through these VLANs. I think stretched VLANs on leaf spine > networks are forbidden in general for these reasons (large L2 > networks? STP reducing the bandwidth? broadcast storm? larger failure > domain? I don't know specifically, I would need help from a network > engineer to explain the reason). > > > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/spine_leaf_networking/index > > > On Tue, 12 Oct 2021 at 18:04, Tony Liu wrote: > > > > I wonder, since there is already VXLAN based L2 across multiple racks > > (which means you are not looking for pure L3 solution), > > while keep tenant network multi-subnet on L3 for EW traffic, > > why not have external network also on L2 and stretched on multiple racks > > for NS traffic, assuming you are using distributed FIP? > > > > Thanks! > > Tony > > ________________________________________ > > From: Francois > > Sent: October 12, 2021 07:03 AM > > To: openstack-discuss > > Subject: [neutron] OVN and dynamic routing > > > > Hello Neutron! > > I am looking into running stacks with OVN on a leaf-spine network, and > > have some floating IPs routed between racks. > > > > Basically each rack is assigned its own set of subnets. > > Some VLANs are stretched across all racks: the provisioning VLAN used > > by tripleo to deploy the stack, and the VLANs for the controllers API > > IPs. However, each tenant subnet is local to a rack: for example each > > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > > own rack. Traffic between 2 racks is sent to a spine, and leaves and > > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > > is encapsulated as VXLAN, and routes between vteps are exchanged with > > BGP. > > > > I am looking into supporting floating IPs in there: I expect floating > > IPs to be able to move between racks, as such I am looking into > > publishing the route for a FIP towards an hypervisor, through BGP. > > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. > > > > It seems there are several ideas to achieve this (it was discussed > > [before][1] in ovs conference) > > - using [neutron-dynamic-routing][2] - that seems to have some gaps > > for OVN. It uses os-ken to talk to switches and exchange routes > > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > > [RFE][4] for integration in tripleo > > > > There is btw also a [BGPVPN][5] project (it does not match my usecase > > as far as I tried to understand it) that also has some code that talks > > BGP to switches, already integrated in tripleo. > > > > For my tests, I was able to use the neutron-dynamic-routing project > > (almost) as documented, with a few changes: > > - for traffic going from VMs to outside the stack, the hypervisor was > > trying to resolve the "gateway of fIPs" with ARP request which does > > not make any sense. 
I created a dummy port with the mac address of the > > virtual router of the switches: > > ``` > > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > > - Fixed IP Addresses: > > - ip_address: 10.64.254.1 > > subnet_id: 8f37 > > ID: 4028 > > MAC Address: 00:1c:73:00:00:11 > > Name: lagw > > Status: DOWN > > ``` > > this prevent the hypervisor to send ARP requests to a non existent > gateway > > - for traffic coming back, we start the neutron-bgp-dragent agent on > > the controllers. We create the right bgp speaker, peers, etc. > > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > > selects fips and join with ports owned by a "floatingip_agent_gateway" > > which does not exist on OVN. We can define ourselves some ports so > > that the dragent is able to find the tenant IP of a host: > > ``` > > openstack port create --network provider --device-owner > > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > > ip-address=10.64.245.102 ag2 > > ``` > > - when creating a floating IP and assigning a port to it, Neutron > > reads changes from OVN SB and fills the binding information into the > > port: > > ``` > > $ openstack port show -c binding_host_id `openstack floating ip show > > 10.64.254.177 -f value -c port_id` > > +-----------------+----------------------------------------+ > > | Field | Value | > > +-----------------+----------------------------------------+ > > | binding_host_id | cpu35d.cloud | > > +-----------------+----------------------------------------+ > > ``` > > this allows the dragent to publish the route for the fip > > ``` > > $ openstack bgp speaker list advertised routes bgpspeaker > > +------------------+---------------+ > > | Destination | Nexthop | > > +------------------+---------------+ > > | 10.64.254.177/32 | 10.64.245.102 | > > +------------------+---------------+ > > ``` > > - traffic reaches the hypervisor but (for reason I don't understand) I > > had to add a rule > > ``` > > $ ip rule > > 0: from all lookup local > > 32765: from all iif vlan1234 lookup ovn > > 32766: from all lookup main > > 32767: from all lookup default > > $ ip route show table ovn > > 10.64.254.177 dev vlan1234 scope link > > ``` > > so that the traffic coming for the fip is not immediately discarded by > > the hypervisor (it's not an ideal solution but it is a workaround that > > makes my one fIP work!) > > > > So all in all it seems it would be possible to use the > > neutron-dynamic-routing agent, with some minor modifications (eg: to > > also publish the fip of the OVN L3 gateway router). > > > > I am wondering whether I have overlooked anything, and if such kind of > > deployment (OVN + neutron dynamic routing or similar) is already in > > use somewhere. Does it make sense to have a RFE for better integration > > between OVN and neutron-dynamic-routing? > > > > Thanks > > Francois > > > > > > > > > > [1]: > https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > > [2]: > https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html > > [3]: > https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ > > [4]: > https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html > > [5]: > https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsneddon at redhat.com Tue Oct 12 19:46:50 2021 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 12 Oct 2021 12:46:50 -0700 Subject: [neutron] OVN and dynamic routing In-Reply-To: References: Message-ID: <16ee3932-e4d5-076b-4f56-89d35bf4bd8a@redhat.com> On 10/12/21 07:03, Francois wrote: > Hello Neutron! > I am looking into running stacks with OVN on a leaf-spine network, and > have some floating IPs routed between racks. > > Basically each rack is assigned its own set of subnets. > Some VLANs are stretched across all racks: the provisioning VLAN used > by tripleo to deploy the stack, and the VLANs for the controllers API > IPs. However, each tenant subnet is local to a rack: for example each > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > own rack. Traffic between 2 racks is sent to a spine, and leaves and > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > is encapsulated as VXLAN, and routes between vteps are exchanged with > BGP. > There has been a lot of work put into TripleO to allow you to provision hosts across L3 boundaries using DHCP relay. You can create a routed provisioning network using "helper-address" or vendor-specific commands on your top-of-rack switches, and a different subnet and DHCP address pool per rack. > I am looking into supporting floating IPs in there: I expect floating > IPs to be able to move between racks, as such I am looking into > publishing the route for a FIP towards an hypervisor, through BGP. > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. This is becoming a very common architecture, and that is why there are several projects working to achieve the same goal with slightly different implementations. > > It seems there are several ideas to achieve this (it was discussed > [before][1] in ovs conference) > - using [neutron-dynamic-routing][2] - that seems to have some gaps > for OVN. It uses os-ken to talk to switches and exchange routes > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > [RFE][4] for integration in tripleo > > There is btw also a [BGPVPN][5] project (it does not match my usecase > as far as I tried to understand it) that also has some code that talks > BGP to switches, already integrated in tripleo. > > For my tests, I was able to use the neutron-dynamic-routing project > (almost) as documented, with a few changes: > - for traffic going from VMs to outside the stack, the hypervisor was > trying to resolve the "gateway of fIPs" with ARP request which does > not make any sense. I created a dummy port with the mac address of the > virtual router of the switches: > ``` > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > - Fixed IP Addresses: > - ip_address: 10.64.254.1 > subnet_id: 8f37 > ID: 4028 > MAC Address: 00:1c:73:00:00:11 > Name: lagw > Status: DOWN > ``` > this prevent the hypervisor to send ARP requests to a non existent gateway > - for traffic coming back, we start the neutron-bgp-dragent agent on > the controllers. We create the right bgp speaker, peers, etc. > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > selects fips and join with ports owned by a "floatingip_agent_gateway" > which does not exist on OVN. 
We can define ourselves some ports so > that the dragent is able to find the tenant IP of a host: > ``` > openstack port create --network provider --device-owner > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > ip-address=10.64.245.102 ag2 > ``` > - when creating a floating IP and assigning a port to it, Neutron > reads changes from OVN SB and fills the binding information into the > port: > ``` > $ openstack port show -c binding_host_id `openstack floating ip show > 10.64.254.177 -f value -c port_id` > +-----------------+----------------------------------------+ > | Field | Value | > +-----------------+----------------------------------------+ > | binding_host_id | cpu35d.cloud | > +-----------------+----------------------------------------+ > ``` > this allows the dragent to publish the route for the fip > ``` > $ openstack bgp speaker list advertised routes bgpspeaker > +------------------+---------------+ > | Destination | Nexthop | > +------------------+---------------+ > | 10.64.254.177/32 | 10.64.245.102 | > +------------------+---------------+ > ``` > - traffic reaches the hypervisor but (for reason I don't understand) I > had to add a rule > ``` > $ ip rule > 0: from all lookup local > 32765: from all iif vlan1234 lookup ovn > 32766: from all lookup main > 32767: from all lookup default > $ ip route show table ovn > 10.64.254.177 dev vlan1234 scope link > ``` > so that the traffic coming for the fip is not immediately discarded by > the hypervisor (it's not an ideal solution but it is a workaround that > makes my one fIP work!) > > So all in all it seems it would be possible to use the > neutron-dynamic-routing agent, with some minor modifications (eg: to > also publish the fip of the OVN L3 gateway router). > > I am wondering whether I have overlooked anything, and if such kind of > deployment (OVN + neutron dynamic routing or similar) is already in > use somewhere. Does it make sense to have a RFE for better integration > between OVN and neutron-dynamic-routing? I have been helping to contribute to integrating FRR with OVN in order to advertise FIPs and provider network IPs into BGP. The OVN BGP Agent is very new, and I'm pretty sure that nobody is using it in production yet. However the initial implementation is fairly simple and hopefully it will mature quickly. As you discovered, the solution that uses neutron-bgp-dragent and os-ken is not compatible with OVN, that is why ovs-bgp-agent is being developed. You should be able to try the ovs-bgp-agent with FRR and properly configured routing switches, it functions for the basic use case. The OVN BGP Agent will ensure that FIP and provider network IPs are present in the kernel as a /32 or /128 host route, which is then advertised into the BGP fabric using the FRR BGP daemon. If the default route is received from BGP it will be installed into the kernel by the FRR zebra daemon which syncs kernel routes with the FRR BGP routing table. The OVN BGP agent installs flows for the Neutron network gateways that hand off traffic to the kernel for routing. Since the kernel routing table is used, the agent isn't compatible with DPDK fast datapath yet. We don't have good documentation for the OVN BGP integration yet. I've only recently been able to make it my primary priority, and some of the other engineers which have done the initial proof of concept are moving on to other projects. 
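In the meantime, the mechanism above can be sketched in a few lines. This is only an illustration of the idea (the FIP exposed as a local /32 host route that FRR then redistributes into BGP), not the agent's actual implementation; the interface name, neighbor address and AS numbers below are assumptions, not taken from a real deployment:
```
# On the compute node hosting FIP 10.64.254.177, expose the FIP as a
# local /32 host route (the agent uses a dummy device for this):
ip link add bgp-nic type dummy
ip link set bgp-nic up
ip addr add 10.64.254.177/32 dev bgp-nic

# frr.conf fragment: advertise locally present routes into the fabric
# and learn the default route back from the ToR (AS numbers made up):
router bgp 64999
 neighbor 10.64.245.1 remote-as 65000
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
```
When a FIP moves to another chassis, the /32 is withdrawn on the old host and announced on the new one, which is what lets it cross rack boundaries.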
There will be some discussions at the upcoming OpenStack PTG about this work, but I am hopeful that the missing pieces for your use case will come about in the Yoga cycle. > > Thanks > Francois > > > > > [1]: https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > [2]: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html > [3]: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ > [4]: https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html > [5]: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml > -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter From tonyliu0592 at hotmail.com Tue Oct 12 20:03:47 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Tue, 12 Oct 2021 20:03:47 +0000 Subject: [neutron] OVN and dynamic routing In-Reply-To: References: Message-ID: Not sure if this helps. http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019681.html https://docs.openstack.org/neutron/latest/admin/config-bgp-floating-ip-over-l2-segmented-network.html Thanks! Tony ________________________________________ From: Francois Sent: October 12, 2021 12:26 PM To: Tony Liu Cc: openstack-discuss Subject: Re: [neutron] OVN and dynamic routing (yes we are using distributed fips) Well we don't want stretched VLANs. However... if we followed the doc we would end up with 3 controllers in the same rack which would not be resilient. Since we have just 3 controllers plugged on specific, identified ports, we afford to have a stretched VLAN on these few ports only. For the provisioning network, I am taking a shortcut since this network should basically only be needed once in a while for stack upgrades and nothing interesting (like mac addresses moving) happens there. The data plane traffic, that needs scalability and resiliency, is not going through these VLANs. I think stretched VLANs on leaf spine networks are forbidden in general for these reasons (large L2 networks? STP reducing the bandwidth? broadcast storm? larger failure domain? I don't know specifically, I would need help from a network engineer to explain the reason). https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/spine_leaf_networking/index On Tue, 12 Oct 2021 at 18:04, Tony Liu wrote: > > I wonder, since there is already VXLAN based L2 across multiple racks > (which means you are not looking for pure L3 solution), > while keep tenant network multi-subnet on L3 for EW traffic, > why not have external network also on L2 and stretched on multiple racks > for NS traffic, assuming you are using distributed FIP? > > Thanks! > Tony > ________________________________________ > From: Francois > Sent: October 12, 2021 07:03 AM > To: openstack-discuss > Subject: [neutron] OVN and dynamic routing > > Hello Neutron! > I am looking into running stacks with OVN on a leaf-spine network, and > have some floating IPs routed between racks. > > Basically each rack is assigned its own set of subnets. > Some VLANs are stretched across all racks: the provisioning VLAN used > by tripleo to deploy the stack, and the VLANs for the controllers API > IPs. However, each tenant subnet is local to a rack: for example each > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > own rack. 
Traffic between 2 racks is sent to a spine, and leaves and > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > is encapsulated as VXLAN, and routes between vteps are exchanged with > BGP. > > I am looking into supporting floating IPs in there: I expect floating > IPs to be able to move between racks, as such I am looking into > publishing the route for a FIP towards an hypervisor, through BGP. > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. > > It seems there are several ideas to achieve this (it was discussed > [before][1] in ovs conference) > - using [neutron-dynamic-routing][2] - that seems to have some gaps > for OVN. It uses os-ken to talk to switches and exchange routes > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > [RFE][4] for integration in tripleo > > There is btw also a [BGPVPN][5] project (it does not match my usecase > as far as I tried to understand it) that also has some code that talks > BGP to switches, already integrated in tripleo. > > For my tests, I was able to use the neutron-dynamic-routing project > (almost) as documented, with a few changes: > - for traffic going from VMs to outside the stack, the hypervisor was > trying to resolve the "gateway of fIPs" with ARP request which does > not make any sense. I created a dummy port with the mac address of the > virtual router of the switches: > ``` > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > - Fixed IP Addresses: > - ip_address: 10.64.254.1 > subnet_id: 8f37 > ID: 4028 > MAC Address: 00:1c:73:00:00:11 > Name: lagw > Status: DOWN > ``` > this prevent the hypervisor to send ARP requests to a non existent gateway > - for traffic coming back, we start the neutron-bgp-dragent agent on > the controllers. We create the right bgp speaker, peers, etc. > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > selects fips and join with ports owned by a "floatingip_agent_gateway" > which does not exist on OVN. 
We can define ourselves some ports so > that the dragent is able to find the tenant IP of a host: > ``` > openstack port create --network provider --device-owner > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > ip-address=10.64.245.102 ag2 > ``` > - when creating a floating IP and assigning a port to it, Neutron > reads changes from OVN SB and fills the binding information into the > port: > ``` > $ openstack port show -c binding_host_id `openstack floating ip show > 10.64.254.177 -f value -c port_id` > +-----------------+----------------------------------------+ > | Field | Value | > +-----------------+----------------------------------------+ > | binding_host_id | cpu35d.cloud | > +-----------------+----------------------------------------+ > ``` > this allows the dragent to publish the route for the fip > ``` > $ openstack bgp speaker list advertised routes bgpspeaker > +------------------+---------------+ > | Destination | Nexthop | > +------------------+---------------+ > | 10.64.254.177/32 | 10.64.245.102 | > +------------------+---------------+ > ``` > - traffic reaches the hypervisor but (for reason I don't understand) I > had to add a rule > ``` > $ ip rule > 0: from all lookup local > 32765: from all iif vlan1234 lookup ovn > 32766: from all lookup main > 32767: from all lookup default > $ ip route show table ovn > 10.64.254.177 dev vlan1234 scope link > ``` > so that the traffic coming for the fip is not immediately discarded by > the hypervisor (it's not an ideal solution but it is a workaround that > makes my one fIP work!) > > So all in all it seems it would be possible to use the > neutron-dynamic-routing agent, with some minor modifications (eg: to > also publish the fip of the OVN L3 gateway router). > > I am wondering whether I have overlooked anything, and if such kind of > deployment (OVN + neutron dynamic routing or similar) is already in > use somewhere. Does it make sense to have a RFE for better integration > between OVN and neutron-dynamic-routing? > > Thanks > Francois > > > > > [1]: https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > [2]: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html > [3]: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ > [4]: https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html > [5]: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml > From gmann at ghanshyammann.com Tue Oct 12 23:07:33 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Oct 2021 18:07:33 -0500 Subject: [all] RBAC related discussion in Yoga PTG Message-ID: <17c76c2d8d8.b4647d6b920595.8260462059922238034@ghanshyammann.com> Hello Everyone, As you might know, we are not so far from the Yoga PTG. I have created the below etherpad to collect the RBAC related discussion happening in various project sessions. - https://etherpad.opendev.org/p/policy-popup-yoga-ptg We have not schedule any separate sessions for this instead thought of attending the related discussion in project PTG itself. Please do the below two steps before PTG: 1. Add the common topics (for QA, Horizon etc) you would like to discuss/know. 2. Add any related rbac sessions you have planned in your project PTG. - I have added a few of them but few need the exact schedule/time so that we can plan to attend it. Please check and add the time for your project sessions. 
-gmann
From franck.vedel at univ-grenoble-alpes.fr Wed Oct 13 07:57:52 2021
From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL)
Date: Wed, 13 Oct 2021 09:57:52 +0200
Subject: Problem with image from snapshot
Message-ID: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr>
Hello, and first of all sorry for my English... thanks Google. Something is wrong with what I want to do. I use Wallaby and it works very well (apart from VPNaaS: I wasted too much time on it this summer, without success, and the bug does not seem to be fixed). Here is what I want to do, and what does not work as I want: - With an admin account, I launch a Win10 instance from the image I created. The instance works, but it takes about 10 minutes to get Win10 up and running. I wanted to take a snapshot of this instance and then create a new image from this snapshot, so that users would use this new image. I create the snapshot and set the "--public" parameter on the new image. I try to create a new instance from this snapshot with the admin account: it works. I create a new user, who has his own project and sees all the images. I try to create an instance with this new image and I get the message: Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) Is it a permissions problem? Is it possible to do it the way I do? Otherwise, how should it be done? Thanks if you have ideas for helping me Franck VEDEL
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From rigault.francois at gmail.com Wed Oct 13 08:03:51 2021
From: rigault.francois at gmail.com (Francois)
Date: Wed, 13 Oct 2021 10:03:51 +0200
Subject: [neutron] OVN and dynamic routing
In-Reply-To: 
References: 
Message-ID: 
...forgot to add the mailing list in the reply On Wed, 13 Oct 2021 at 10:01, Francois wrote: > > On Tue, 12 Oct 2021 at 21:46, Dan Sneddon wrote: > > > > On 10/12/21 07:03, Francois wrote: > > > Hello Neutron! > > > I am looking into running stacks with OVN on a leaf-spine network, and > > > have some floating IPs routed between racks. > > > > > > Basically each rack is assigned its own set of subnets. > > > Some VLANs are stretched across all racks: the provisioning VLAN used > > > by tripleo to deploy the stack, and the VLANs for the controllers API > > > IPs. However, each tenant subnet is local to a rack: for example each > > > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > > > own rack. Traffic between 2 racks is sent to a spine, and leaves and > > > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > > > is encapsulated as VXLAN, and routes between vteps are exchanged with > > > BGP. > > > > > > > There has been a lot of work put into TripleO to allow you to provision > > hosts across L3 boundaries using DHCP relay. You can create a routed > > provisioning network using "helper-address" or vendor-specific commands > > on your top-of-rack switches, and a different subnet and DHCP address > > pool per rack. > Yes I saw that in the doc. I was not planning on using this for reasons I mentioned in another reply (this provisioning network is ""useless most of the time"" since there is almost no provisioning happening :D ) If anything, I would love to work on Ironic DHCP-less deployments, which were almost working last time I tried, and I saw the Ironic team contributing fixes since then.
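For reference, the routed provisioning described in Dan's reply above is configured through per-leaf subnet sections in undercloud.conf. A minimal sketch, assuming two leaf subnets with the ToR switches relaying DHCP to the undercloud — the subnet names and addresses are illustrative, not taken from this thread:
```
# undercloud.conf fragment (illustrative values only)
[DEFAULT]
subnets = leaf0,leaf1
local_subnet = leaf0

[leaf0]
cidr = 192.168.10.0/24
dhcp_start = 192.168.10.10
dhcp_end = 192.168.10.90
inspection_iprange = 192.168.10.100,192.168.10.120
gateway = 192.168.10.1
masquerade = false

[leaf1]
cidr = 192.168.11.0/24
dhcp_start = 192.168.11.10
dhcp_end = 192.168.11.90
inspection_iprange = 192.168.11.100,192.168.11.120
gateway = 192.168.11.1
masquerade = false
```
Each rack's nodes then boot from their rack-local subnet, so the provisioning VLAN does not need to be stretched across racks.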
> > > > > > I am looking into supporting floating IPs in there: I expect floating > > > IPs to be able to move between racks, as such I am looking into > > > publishing the route for a FIP towards an hypervisor, through BGP. > > > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. > > > > This is becoming a very common architecture, and that is why there are > > several projects working to achieve the same goal with slightly > > different implementations. > > > > > > > > It seems there are several ideas to achieve this (it was discussed > > > [before][1] in ovs conference) > > > - using [neutron-dynamic-routing][2] - that seems to have some gaps > > > for OVN. It uses os-ken to talk to switches and exchange routes > > > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > > > [RFE][4] for integration in tripleo > > > > > > There is btw also a [BGPVPN][5] project (it does not match my usecase > > > as far as I tried to understand it) that also has some code that talks > > > BGP to switches, already integrated in tripleo. > > > > > > For my tests, I was able to use the neutron-dynamic-routing project > > > (almost) as documented, with a few changes: > > > - for traffic going from VMs to outside the stack, the hypervisor was > > > trying to resolve the "gateway of fIPs" with ARP request which does > > > not make any sense. I created a dummy port with the mac address of the > > > virtual router of the switches: > > > ``` > > > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > > > - Fixed IP Addresses: > > > - ip_address: 10.64.254.1 > > > subnet_id: 8f37 > > > ID: 4028 > > > MAC Address: 00:1c:73:00:00:11 > > > Name: lagw > > > Status: DOWN > > > ``` > > > this prevent the hypervisor to send ARP requests to a non existent gateway > > > - for traffic coming back, we start the neutron-bgp-dragent agent on > > > the controllers. We create the right bgp speaker, peers, etc. > > > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > > > selects fips and join with ports owned by a "floatingip_agent_gateway" > > > which does not exist on OVN. 
We can define ourselves some ports so > > > that the dragent is able to find the tenant IP of a host: > > > ``` > > > openstack port create --network provider --device-owner > > > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > > > ip-address=10.64.245.102 ag2 > > > ``` > > > - when creating a floating IP and assigning a port to it, Neutron > > > reads changes from OVN SB and fills the binding information into the > > > port: > > > ``` > > > $ openstack port show -c binding_host_id `openstack floating ip show > > > 10.64.254.177 -f value -c port_id` > > > +-----------------+----------------------------------------+ > > > | Field | Value | > > > +-----------------+----------------------------------------+ > > > | binding_host_id | cpu35d.cloud | > > > +-----------------+----------------------------------------+ > > > ``` > > > this allows the dragent to publish the route for the fip > > > ``` > > > $ openstack bgp speaker list advertised routes bgpspeaker > > > +------------------+---------------+ > > > | Destination | Nexthop | > > > +------------------+---------------+ > > > | 10.64.254.177/32 | 10.64.245.102 | > > > +------------------+---------------+ > > > ``` > > > - traffic reaches the hypervisor but (for reason I don't understand) I > > > had to add a rule > > > ``` > > > $ ip rule > > > 0: from all lookup local > > > 32765: from all iif vlan1234 lookup ovn > > > 32766: from all lookup main > > > 32767: from all lookup default > > > $ ip route show table ovn > > > 10.64.254.177 dev vlan1234 scope link > > > ``` > > > so that the traffic coming for the fip is not immediately discarded by > > > the hypervisor (it's not an ideal solution but it is a workaround that > > > makes my one fIP work!) > > > > > > So all in all it seems it would be possible to use the > > > neutron-dynamic-routing agent, with some minor modifications (eg: to > > > also publish the fip of the OVN L3 gateway router). > > > > > > I am wondering whether I have overlooked anything, and if such kind of > > > deployment (OVN + neutron dynamic routing or similar) is already in > > > use somewhere. Does it make sense to have a RFE for better integration > > > between OVN and neutron-dynamic-routing? > > > > I have been helping to contribute to integrating FRR with OVN in order > > to advertise FIPs and provider network IPs into BGP. The OVN BGP Agent > > is very new, and I'm pretty sure that nobody is using it in production > > yet. However the initial implementation is fairly simple and hopefully > > it will mature quickly. > > > > As you discovered, the solution that uses neutron-bgp-dragent and os-ken > > is not compatible with OVN > Pretty much the contrary, it basically worked. There are a few differences but the gap seems very tiny (unless I overlooked something and I'm fundamentally wrong) I don't understand why a new project would be needed to make it work for OVN. > > >, that is why ovs-bgp-agent is being > > developed. You should be able to try the ovs-bgp-agent with FRR and > > properly configured routing switches, it functions for the basic use case. > > > > The OVN BGP Agent will ensure that FIP and provider network IPs are > > present in the kernel as a /32 or /128 host route, which is then > > advertised into the BGP fabric using the FRR BGP daemon. If the default > > route is received from BGP it will be installed into the kernel by the > > FRR zebra daemon which syncs kernel routes with the FRR BGP routing > > table. 
The OVN BGP agent installs flows for the Neutron network gateways > > that hand off traffic to the kernel for routing. Since the kernel > > routing table is used, the agent isn't compatible with DPDK fast > > datapath yet. > > > > We don't have good documentation for the OVN BGP integration yet. I've > > only recently been able to make it my primary priority, and some of the > > other engineers which have done the initial proof of concept are moving > > on to other projects. There will be some discussions at the upcoming > > OpenStack PTG about this work, but I am hopeful that the missing pieces > > for your use case will come about in the Yoga cycle. > I did not try to run the OVN BGP agent but I saw your blog posts and I think it's enough to get started with. I still don't get why an extra OVN BGP agent would be needed. One thing I was wondering from the blog posts (and your reply here) is whether every single compute would need connectivity to the physical switches to publish the routes - as the dragent runs on the controller node you only need to configure connectivity between the controllers and the physical switches while in the FRR case you need to open much more. > Would the developments in the Yoga cycle be focused on the OVN BGP agent only, and so there is no interest in improving the neutron-dynamic-routing project ? > Thanks for your insightful comments :) > > > > > > > > > Thanks > > > Francois From mark at stackhpc.com Wed Oct 13 08:09:05 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 13 Oct 2021 09:09:05 +0100 Subject: [kolla-ansible][neutron] configuration question In-Reply-To: References: Message-ID: On Tue, 12 Oct 2021 at 18:47, Ignazio Cassano wrote: > > Reading at this bug: > https://bugs.launchpad.net/kolla-ansible/+bug/1626259 > It seems only for documentation, so it must work. > Right? > Ignazio As mentioned in the above bug: neutron_bridge_name: "br-ex,br-ex2" neutron_external_interface: "eth1,eth2" Mark > > Il giorno mar 12 ott 2021 alle ore 18:38 Ignazio Cassano ha scritto: >> >> Hello Everyone, >> I need to know if it possible configure kolla neutron ovs with more than one bridge mappings, for example: >> >> bridge_mappings = physnet1:br-ex,physnet2:br-ex2 >> >> I figure out that in standard configuration ansible playbook create br-ex and add >> >> the interface with variable "neutron_external_interface" under br-ex. >> >> What can I do if I need to do if I wand more than one bridge ? >> >> How kolla ansible playbook can help in this case ? >> >> I could use multiple bridges in /etc/kolla/config neutron configuration files, but I do not know how ansible playbook can do the job. >> >> because I do not see any variable can help me in /etc/kolla/globals.yml >> Thanks >> >> Ignazio From ignaziocassano at gmail.com Wed Oct 13 08:09:56 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 13 Oct 2021 10:09:56 +0200 Subject: [kolla-ansible][neutron] configuration question In-Reply-To: References: Message-ID: Many thanks Ignazio Il giorno mer 13 ott 2021 alle ore 10:09 Mark Goddard ha scritto: > On Tue, 12 Oct 2021 at 18:47, Ignazio Cassano > wrote: > > > > Reading at this bug: > > https://bugs.launchpad.net/kolla-ansible/+bug/1626259 > > It seems only for documentation, so it must work. > > Right? 
> > Ignazio > > As mentioned in the above bug: > > neutron_bridge_name: "br-ex,br-ex2" > neutron_external_interface: "eth1,eth2" > > Mark > > > > > Il giorno mar 12 ott 2021 alle ore 18:38 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > >> > >> Hello Everyone, > >> I need to know if it possible configure kolla neutron ovs with more > than one bridge mappings, for example: > >> > >> bridge_mappings = physnet1:br-ex,physnet2:br-ex2 > >> > >> I figure out that in standard configuration ansible playbook create > br-ex and add > >> > >> the interface with variable "neutron_external_interface" under br-ex. > >> > >> What can I do if I need to do if I wand more than one bridge ? > >> > >> How kolla ansible playbook can help in this case ? > >> > >> I could use multiple bridges in /etc/kolla/config neutron configuration > files, but I do not know how ansible playbook can do the job. > >> > >> because I do not see any variable can help me in /etc/kolla/globals.yml > >> Thanks > >> > >> Ignazio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From gael.therond at bitswalk.com Wed Oct 13 12:19:32 2021
From: gael.therond at bitswalk.com (Gaël THEROND)
Date: Wed, 13 Oct 2021 14:19:32 +0200
Subject: [KEYSTONE][POLICIES] - Overrides that don't work?
In-Reply-To: 
References: 
Message-ID: 
All right, I'll test that out a bit more using a native Keystone user type, as for now I'm dealing with ADFS/SSO based users that can't use the CLI because ECP isn't available and so rely on Application Credentials that are project scoped ^^
Le mar. 12 oct. 2021 à 18:18, Ben Nemec a écrit : > Probably. I'm not an expert on writing Keystone policies so I can't > promise anything. :-) > > However, I'm fairly confident that if you get a properly scoped token it > will get you past your current error. Anything beyond that would be a > barely educated guess on my part. > > On 10/11/21 12:18 PM, Gaël THEROND wrote: > > Hi ben! Thanks a lot for the answer! > > > > Ok I'll get a look at that, but if I correctly understand, a user with a > > role of project-admin attached to him as scoped to domain should be > > able to add users to a group once the policy is updated, right? > > > > Once again thanks a lot for your answer! > > > > Le lun. 11 oct. 2021 à 17:25, Ben Nemec > > a écrit : > > > > I don't believe it's possible to override the scope of a policy > > rule. In > > this case it sounds like the user should request a domain-scoped token > > to perform this operation. For details on how to do that, see > > https://docs.openstack.org/keystone/wallaby/admin/tokens-overview.html#authorization-scopes > > > > On 10/6/21 7:52 AM, Gaël THEROND wrote: > > > Hi team, > > > > > > I'm having a weird behavior with my Openstack platform that makes me > > > think I may have misunderstood some mechanisms on the way > > policies are > > > working and especially the overriding. > > > > > > So, long story short, I've few services that get custom policies > > such as > > > glance that behave as expected, Keystone's one aren't. 
> > > > > > All in all, here is what I'm understanding of the mechanism: > > > > > > This is the keystone policy that I'm looking to override: > > > https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/ > > > > > > > > > > > > > This policy default can be found in here: > > > > > > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > > > > > > > > < > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > >> > > > > > > Here is the policy that I'm testing: > > > https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/ > > > > > > > > > > > > > I know, this policy isn't taking care of the admin role but it's > > not the > > > point. > > > > > > From my understanding, any user with the project-manager role > > should be > > > able to add any available user on any available group as long as > the > > > project-manager domain is the same as the target. > > > > > > However, when I'm doing that, keystone complains that I'm not > > authorized > > > to do so because the user token scope is 'PROJECT' where it > > should be > > > 'SYSTEM' or 'DOMAIN'. > > > > > > Now, I wouldn't be surprised of that message being thrown out > > with the > > > default policy as it's stated on the code with the following: > > > > > > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > > > > > > > > < > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > >> > > > > > > So the question is, if the custom policy doesn't override the > > default > > > scope_types how am I supposed to make it work? > > > > > > I hope it was clear enough, but if not, feel free to ask me for > more > > > information. > > > > > > PS: I've tried to assign this role with a domain scope to my user > > and > > > I've still the same issue. > > > > > > Thanks a lot everyone! > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Oct 13 12:47:22 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 13 Oct 2021 09:47:22 -0300 Subject: [cinder] Bug deputy report for week of 10-13-2021 Message-ID: This is a bug report from 10-06-2021-15-09 to 10-13-2021. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.launchpad.net/cinder/+bug/1946736 'Powerflex driver: update supported storage versions'. - https://bugs.launchpad.net/cinder/+bug/1946350 '[gate-failure] nova-live-migration evacuation failure due to slow lvchange -a command in c-vol during volume attachment update'. - https://bugs.launchpad.net/cinder/+bug/1946340 '[gate-failure] Unable to stack on fedora34 due to cinder pulling in oslo.vmware and suds-jurko that uses use_2to3 that is invalid with setuptools >=58.0.0'. - https://bugs.launchpad.net/cinder/+bug/1946263 'NetApp ONTAP Failing migrating volume from/to FlexGroup pool'. 
Low - https://bugs.launchpad.net/cinder/+bug/1946618 'Add same volume to the group-update does not show proper error'. Wishlist - https://bugs.launchpad.net/cinder/+bug/1946645 '[doc] Install and configure a storage node in cinder'. Cheers -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpeacock at redhat.com Wed Oct 13 13:10:17 2021 From: dpeacock at redhat.com (David Peacock) Date: Wed, 13 Oct 2021 09:10:17 -0400 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command In-Reply-To: References: Message-ID: Sounds like progress, thanks for the update. For clarification, which version are you attempting to deploy? Upstream master? Thanks, David On Wed, Oct 13, 2021 at 3:57 AM Anirudh Gupta wrote: > Hi David, > > Thanks for your response. > In order to run pre-introspection, I debugged and created an inventory > file of my own having the following content > > [Undercloud] > undercloud > > With this and also with the file you mentioned, I was able to run > pre-introspection successfully. > > (undercloud) [stack at undercloud ~]$ openstack tripleo validator run > --group pre-introspection -i > tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml > > +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ > | UUID | Validations | > Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | > > +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ > | 6cdc7c84-d278-430a-b6fc-3893e42310d8 | check-cpu | > PASSED | localhost | localhost | | 0:00:01.116 | > | ac0d54a5-51c3-4f52-9dba-2a9b26583591 | check-disk-space | > PASSED | localhost | localhost | | 0:00:03.546 | > | 3af6fefc-47d0-40b1-bd5b-88e03e0f61ef | check-ram | > PASSED | localhost | localhost | | 0:00:01.069 | > | e8d17007-6c46-4959-8bfc-dc59dd77ba65 | check-selinux-mode | > PASSED | localhost | localhost | | 0:00:01.395 | > | 28df7ed3-8cea-4a4d-af34-14c8eec406ea | check-network-gateway | > PASSED | undercloud | undercloud | | 0:00:02.347 | > | efa6b4ab-de40-42a0-815e-238e5b81995c | undercloud-disk-space | > PASSED | undercloud | undercloud | | 0:00:03.657 | > | 89293cce-5f30-4626-b326-5cfeff48ab0c | undercloud-neutron-sanity-check | > PASSED | undercloud | undercloud | | 0:00:07.715 | > | 0da9986f-8fc6-46f7-8936-c8b838c12c7b | ctlplane-ip-range | > PASSED | undercloud | undercloud | | 0:00:01.973 | > | 89f286ee-cd83-4d05-8d99-bffd03df142b | dhcp-introspection | > PASSED | undercloud | undercloud | | 0:00:06.364 | > | c5256e61-f787-4a1b-9e1a-1eff0c0b2bb6 | undercloud-tokenflush | > PASSED | undercloud | undercloud | | 0:00:01.209 | > > +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ > > > But passing this file while pre-deployment, it is still failing. 
> (undercloud) [stack at undercloud undercloud]$ openstack tripleo validator > run --group pre-deployment -i tripleo-ansible-inventory.yaml > > +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ > | UUID | Validations > | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration > | > > +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ > | 6deebd06-cf12-4083-a4f2-a31306a719b3 | 512e > | PASSED | localhost | localhost | | > 0:00:00.511 | > | a2b80c05-40c0-4dd6-9d8d-03be0f5278ba | dns > | PASSED | localhost | localhost | | > 0:00:00.428 | > | bd3c32b3-6a0e-424c-9d2e-2898c5bb50ef | service-status > | PASSED | all | undercloud | | > 0:00:05.923 | > | 7342190b-2ad9-4639-91c7-582ae4b141c6 | validate-selinux > | PASSED | all | undercloud | | > 0:00:02.299 | > | 665c4d42-e058-4e9d-9ee1-30e29b3a75c8 | package-version > | FAILED | all | undercloud | | > 0:03:34.295 | > | e0001906-5a8c-4f9b-9ad7-7b5b4d4b8d22 | ceph-ansible-installed > | PASSED | undercloud | undercloud | | > 0:00:02.723 | > | beb5bf3d-3ee8-4fd6-8daa-0cf13023c1f3 | ceph-dependencies-installed > | PASSED | allovercloud | undercloud | | > 0:00:02.610 | > | d872e781-4cd2-4509-ad51-74d7f3b3ebbf | tls-everywhere-pre-deployment > | FAILED | undercloud | undercloud | | > 0:00:36.546 | > | bc7e8940-d61a-4349-a5be-a41312b8bd2f | undercloud-debug > | FAILED | undercloud | undercloud | | > 0:00:01.702 | > | 8de4f037-ac24-4700-b449-405e723a7e50 | > collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud > | | 0:00:00.936 | > | 1aadf9f7-a200-499a-826f-06c2ad3f1ab7 | undercloud-heat-purge-deleted > | PASSED | undercloud | undercloud | | > 0:00:02.232 | > | db5204af-a054-4eae-9325-c2f592997b59 | undercloud-process-count > | PASSED | undercloud | undercloud | | > 0:00:07.770 | > | 7fdb9935-a30d-4356-8524-23065da894e4 | default-node-count > | FAILED | undercloud | undercloud | | > 0:00:00.942 | > | 0868a984-7de0-42f0-8d6b-abb19c72c98b | dhcp-provisioning > | FAILED | undercloud | undercloud | | > 0:00:01.668 | > | 7796624f-5b13-4d66-8dce-8998f2370625 | ironic-boot-configuration > | FAILED | undercloud | undercloud | | > 0:00:00.935 | > | e087bbae-6371-4e2e-9445-0fcc1f936b96 | network-environment > | FAILED | undercloud | undercloud | | > 0:00:00.936 | > | db93613d-9cab-4954-949f-d7b2578c20c5 | node-disks > | FAILED | undercloud | undercloud | | > 0:00:01.741 | > | 66bed170-ffb1-4466-b065-9f6012abdd6e | switch-vlans > | FAILED | undercloud | undercloud | | > 0:00:01.795 | > | 4911cd84-26cf-4c43-ba5a-645c5c5f20b4 | system-encoding > | PASSED | all | undercloud | | > 0:00:00.393 | > > +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ > > > As per the response from Alex, This could probably because these > validations calls might be broken and and are not tested in CI > > I am moving forward with the deployment ignoring these errors as suggested > > Regards > Anirudh Gupta > > > On Tue, Oct 12, 2021 at 8:02 PM David Peacock wrote: > >> Hi Anirudh, >> >> You're hitting a known bug that we're in the process of propagating a fix >> for; sorry for this. :-) >> >> As per a patch we have under review, use the inventory file located under >> ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml. 
>> To generate an inventory file, use the playbook in "tripleo-ansible: >> cli-config-download.yaml". >> >> https://review.opendev.org/c/openstack/tripleo-validations/+/813535 >> >> Let us know if this doesn't put you on the right track. >> >> Thanks, >> David >> >> On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta wrote: >> >>> Hi Team, >>> >>> I am installing Tripleo using the below link >>> >>> >>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html >>> >>> In the Introspect section, When I executed the command >>> openstack tripleo validator run --group pre-introspection >>> >>> I got the following error: >>> >>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>> | UUID | Validations >>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | >>> >>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>> | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu >>> | PASSED | localhost | localhost | | 0:00:01.261 | >>> | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space >>> | PASSED | localhost | localhost | | 0:00:04.480 | >>> | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram >>> | PASSED | localhost | localhost | | 0:00:02.173 | >>> | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode >>> | PASSED | localhost | localhost | | 0:00:01.546 | >>> | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway >>> | FAILED | undercloud | No host matched | | | >>> | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space >>> | FAILED | undercloud | No host matched | | | >>> | 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check >>> | FAILED | undercloud | No host matched | | | >>> | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range >>> | FAILED | undercloud | No host matched | | | >>> | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection >>> | FAILED | undercloud | No host matched | | | >>> | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush >>> | FAILED | undercloud | No host matched | | | >>> >>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>> >>> >>> Then I created the following inventory file: >>> [Undercloud] >>> undercloud >>> >>> Passed this command while running the pre-introspection command. >>> It then executed successfully. 
>>> >>> >>> But with Pre-deployment, it is still failing even after passing the >>> inventory >>> >>> >>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>> | UUID | Validations >>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | >>> Duration | >>> >>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>> | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e >>> | PASSED | localhost | localhost | | >>> 0:00:00.504 | >>> | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns >>> | PASSED | localhost | localhost | | >>> 0:00:00.481 | >>> | 93611c13-49a2-4cae-ad87-099546459481 | service-status >>> | PASSED | all | undercloud | | >>> 0:00:06.942 | >>> | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux >>> | PASSED | all | undercloud | | >>> 0:00:02.433 | >>> | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version >>> | FAILED | all | undercloud | | >>> 0:00:03.576 | >>> | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed >>> | PASSED | undercloud | undercloud | | >>> 0:00:02.850 | >>> | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed >>> | FAILED | allovercloud | No host matched | | >>> | >>> | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment >>> | FAILED | undercloud | undercloud | | >>> 0:00:31.559 | >>> | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug >>> | FAILED | undercloud | undercloud | | >>> 0:00:02.057 | >>> | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | >>> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >>> | | 0:00:00.884 | >>> | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted >>> | FAILED | undercloud | undercloud | | >>> 0:00:02.138 | >>> | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count >>> | PASSED | undercloud | undercloud | | >>> 0:00:06.164 | >>> | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count >>> | FAILED | undercloud | undercloud | | >>> 0:00:00.934 | >>> | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning >>> | FAILED | undercloud | undercloud | | >>> 0:00:02.456 | >>> | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration >>> | FAILED | undercloud | undercloud | | >>> 0:00:00.882 | >>> | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment >>> | FAILED | undercloud | undercloud | | >>> 0:00:00.880 | >>> | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks >>> | FAILED | undercloud | undercloud | | >>> 0:00:01.934 | >>> | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans >>> | FAILED | undercloud | undercloud | | >>> 0:00:01.931 | >>> | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding >>> | PASSED | all | undercloud | | >>> 0:00:00.366 | >>> >>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>> >>> Also this step of passing the inventory file is not mentioned anywhere >>> in the document. Is there anything I am missing? >>> >>> Regards >>> Anirudh Gupta >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From peter.matulis at canonical.com Wed Oct 13 14:46:08 2021
From: peter.matulis at canonical.com (Peter Matulis)
Date: Wed, 13 Oct 2021 10:46:08 -0400
Subject: [docs] has_project_guide key
Message-ID: 
Does the has_project_guide key in the www/project-data/.yaml file have any meaning? One of my projects has been dragging this key along from release to release and I do not see it documented [1]. I want to avoid unintended results if that key is removed. Thanks. [1]: https://docs.openstack.org/doc-contrib-guide/doc-tools/template-generator.html Peter
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From DHilsbos at performair.com Wed Oct 13 15:16:28 2021
From: DHilsbos at performair.com (DHilsbos at performair.com)
Date: Wed, 13 Oct 2021 15:16:28 +0000
Subject: RE: Problem with image from snapshot
In-Reply-To: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr>
References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr>
Message-ID: <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local>
Franck; I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance before making an image from it, yes? Regarding OpenStack, could you tell us which Glance and Cinder drivers you use? Have you done other volume-to-image conversions before? Have you verified that the image finishes creating before trying to create a VM from it? I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image; thus, depending on your storage technology, the image might just be a snapshot. I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thank you, Dominic L. Hilsbos, MBA Vice President - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com
From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr]
Sent: Wednesday, October 13, 2021 12:58 AM
To: openstack-discuss
Subject: Problem with image from snapshot
Hello, and first of all sorry for my English... thanks Google. Something is wrong with what I want to do. I use Wallaby and it works very well (apart from VPNaaS: I wasted too much time on it this summer, without success, and the bug does not seem to be fixed). Here is what I want to do, and what does not work as I want: - With an admin account, I launch a Win10 instance from the image I created. The instance works, but it takes about 10 minutes to get Win10 up and running. I wanted to take a snapshot of this instance and then create a new image from this snapshot, so that users would use this new image. I create the snapshot and set the "--public" parameter on the new image. I try to create a new instance from this snapshot with the admin account: it works. I create a new user, who has his own project and sees all the images. I try to create an instance with this new image and I get the message: Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) Is it a permissions problem? Is it possible to do it the way I do? Otherwise, how should it be done? 
Thanks if you have ideas for helping me Franck VEDEL From dpeacock at redhat.com Wed Oct 13 15:44:19 2021 From: dpeacock at redhat.com (David Peacock) Date: Wed, 13 Oct 2021 11:44:19 -0400 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command In-Reply-To: References: Message-ID: Thank you. Good to know. :-) On Wed, Oct 13, 2021 at 11:26 AM Anirudh Gupta wrote: > Hi David > > I am trying this on Openstack Wallaby Release. > > Regards > Anirudh Gupta > > On Wed, 13 Oct, 2021, 6:40 pm David Peacock, wrote: > >> Sounds like progress, thanks for the update. >> >> For clarification, which version are you attempting to deploy? Upstream >> master? >> >> Thanks, >> David >> >> On Wed, Oct 13, 2021 at 3:57 AM Anirudh Gupta >> wrote: >> >>> Hi David, >>> >>> Thanks for your response. >>> In order to run pre-introspection, I debugged and created an inventory >>> file of my own having the following content >>> >>> [Undercloud] >>> undercloud >>> >>> With this and also with the file you mentioned, I was able to run >>> pre-introspection successfully. >>> >>> (undercloud) [stack at undercloud ~]$ openstack tripleo validator run >>> --group pre-introspection -i >>> tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml >>> >>> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ >>> | UUID | Validations >>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | >>> >>> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ >>> | 6cdc7c84-d278-430a-b6fc-3893e42310d8 | check-cpu >>> | PASSED | localhost | localhost | | 0:00:01.116 | >>> | ac0d54a5-51c3-4f52-9dba-2a9b26583591 | check-disk-space >>> | PASSED | localhost | localhost | | 0:00:03.546 | >>> | 3af6fefc-47d0-40b1-bd5b-88e03e0f61ef | check-ram >>> | PASSED | localhost | localhost | | 0:00:01.069 | >>> | e8d17007-6c46-4959-8bfc-dc59dd77ba65 | check-selinux-mode >>> | PASSED | localhost | localhost | | 0:00:01.395 | >>> | 28df7ed3-8cea-4a4d-af34-14c8eec406ea | check-network-gateway >>> | PASSED | undercloud | undercloud | | 0:00:02.347 | >>> | efa6b4ab-de40-42a0-815e-238e5b81995c | undercloud-disk-space >>> | PASSED | undercloud | undercloud | | 0:00:03.657 | >>> | 89293cce-5f30-4626-b326-5cfeff48ab0c | undercloud-neutron-sanity-check >>> | PASSED | undercloud | undercloud | | 0:00:07.715 | >>> | 0da9986f-8fc6-46f7-8936-c8b838c12c7b | ctlplane-ip-range >>> | PASSED | undercloud | undercloud | | 0:00:01.973 | >>> | 89f286ee-cd83-4d05-8d99-bffd03df142b | dhcp-introspection >>> | PASSED | undercloud | undercloud | | 0:00:06.364 | >>> | c5256e61-f787-4a1b-9e1a-1eff0c0b2bb6 | undercloud-tokenflush >>> | PASSED | undercloud | undercloud | | 0:00:01.209 | >>> >>> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ >>> >>> >>> But passing this file while pre-deployment, it is still failing. 
>>> (undercloud) [stack at undercloud undercloud]$ openstack tripleo validator
>>> run --group pre-deployment -i tripleo-ansible-inventory.yaml
>>>
>>> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+
>>> | UUID                                 | Validations                         | Status | Host_Group   | Status_by_Host | Unreachable_Hosts | Duration    |
>>> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+
>>> | 6deebd06-cf12-4083-a4f2-a31306a719b3 | 512e                                | PASSED | localhost    | localhost      |                   | 0:00:00.511 |
>>> | a2b80c05-40c0-4dd6-9d8d-03be0f5278ba | dns                                 | PASSED | localhost    | localhost      |                   | 0:00:00.428 |
>>> | bd3c32b3-6a0e-424c-9d2e-2898c5bb50ef | service-status                      | PASSED | all          | undercloud     |                   | 0:00:05.923 |
>>> | 7342190b-2ad9-4639-91c7-582ae4b141c6 | validate-selinux                    | PASSED | all          | undercloud     |                   | 0:00:02.299 |
>>> | 665c4d42-e058-4e9d-9ee1-30e29b3a75c8 | package-version                     | FAILED | all          | undercloud     |                   | 0:03:34.295 |
>>> | e0001906-5a8c-4f9b-9ad7-7b5b4d4b8d22 | ceph-ansible-installed              | PASSED | undercloud   | undercloud     |                   | 0:00:02.723 |
>>> | beb5bf3d-3ee8-4fd6-8daa-0cf13023c1f3 | ceph-dependencies-installed         | PASSED | allovercloud | undercloud     |                   | 0:00:02.610 |
>>> | d872e781-4cd2-4509-ad51-74d7f3b3ebbf | tls-everywhere-pre-deployment       | FAILED | undercloud   | undercloud     |                   | 0:00:36.546 |
>>> | bc7e8940-d61a-4349-a5be-a41312b8bd2f | undercloud-debug                    | FAILED | undercloud   | undercloud     |                   | 0:00:01.702 |
>>> | 8de4f037-ac24-4700-b449-405e723a7e50 | collect-flavors-and-verify-profiles | FAILED | undercloud   | undercloud     |                   | 0:00:00.936 |
>>> | 1aadf9f7-a200-499a-826f-06c2ad3f1ab7 | undercloud-heat-purge-deleted       | PASSED | undercloud   | undercloud     |                   | 0:00:02.232 |
>>> | db5204af-a054-4eae-9325-c2f592997b59 | undercloud-process-count            | PASSED | undercloud   | undercloud     |                   | 0:00:07.770 |
>>> | 7fdb9935-a30d-4356-8524-23065da894e4 | default-node-count                  | FAILED | undercloud   | undercloud     |                   | 0:00:00.942 |
>>> | 0868a984-7de0-42f0-8d6b-abb19c72c98b | dhcp-provisioning                   | FAILED | undercloud   | undercloud     |                   | 0:00:01.668 |
>>> | 7796624f-5b13-4d66-8dce-8998f2370625 | ironic-boot-configuration           | FAILED | undercloud   | undercloud     |                   | 0:00:00.935 |
>>> | e087bbae-6371-4e2e-9445-0fcc1f936b96 | network-environment                 | FAILED | undercloud   | undercloud     |                   | 0:00:00.936 |
>>> | db93613d-9cab-4954-949f-d7b2578c20c5 | node-disks                          | FAILED | undercloud   | undercloud     |                   | 0:00:01.741 |
>>> | 66bed170-ffb1-4466-b065-9f6012abdd6e | switch-vlans                        | FAILED | undercloud   | undercloud     |                   | 0:00:01.795 |
>>> | 4911cd84-26cf-4c43-ba5a-645c5c5f20b4 | system-encoding                     | PASSED | all          | undercloud     |                   | 0:00:00.393 |
>>> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+
>>>
>>> As per the response from Alex, this is probably because these
>>> validation calls might be broken and are not tested in CI.
>>>
>>> I am moving forward with the deployment, ignoring these errors as
>>> suggested.
>>>
>>> Regards
>>> Anirudh Gupta
>>>
>>> On Tue, Oct 12, 2021 at 8:02 PM David Peacock
>>> wrote:
>>>
>>>> Hi Anirudh,
>>>>
>>>> You're hitting a known bug that we're in the process of propagating a
>>>> fix for; sorry for this.
:-) >>>> >>>> As per a patch we have under review, use the inventory file located >>>> under ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml. >>>> To generate an inventory file, use the playbook in "tripleo-ansible: >>>> cli-config-download.yaml". >>>> >>>> https://review.opendev.org/c/openstack/tripleo-validations/+/813535 >>>> >>>> Let us know if this doesn't put you on the right track. >>>> >>>> Thanks, >>>> David >>>> >>>> On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta >>>> wrote: >>>> >>>>> Hi Team, >>>>> >>>>> I am installing Tripleo using the below link >>>>> >>>>> >>>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html >>>>> >>>>> In the Introspect section, When I executed the command >>>>> openstack tripleo validator run --group pre-introspection >>>>> >>>>> I got the following error: >>>>> >>>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>>> | UUID | Validations >>>>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration >>>>> | >>>>> >>>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>>> | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu >>>>> | PASSED | localhost | localhost | | 0:00:01.261 >>>>> | >>>>> | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space >>>>> | PASSED | localhost | localhost | | >>>>> 0:00:04.480 | >>>>> | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram >>>>> | PASSED | localhost | localhost | | 0:00:02.173 >>>>> | >>>>> | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode >>>>> | PASSED | localhost | localhost | | >>>>> 0:00:01.546 | >>>>> | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> | 2f0239db-d530-48eb-b606-f82179e72e50 | >>>>> undercloud-neutron-sanity-check | FAILED | undercloud | No host matched | >>>>> | | >>>>> | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> >>>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>>> >>>>> >>>>> Then I created the following inventory file: >>>>> [Undercloud] >>>>> undercloud >>>>> >>>>> Passed this command while running the pre-introspection command. >>>>> It then executed successfully. 
>>>>> >>>>> >>>>> But with Pre-deployment, it is still failing even after passing the >>>>> inventory >>>>> >>>>> >>>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>>> | UUID | Validations >>>>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | >>>>> Duration | >>>>> >>>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>>> | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e >>>>> | PASSED | localhost | localhost | | >>>>> 0:00:00.504 | >>>>> | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns >>>>> | PASSED | localhost | localhost | | >>>>> 0:00:00.481 | >>>>> | 93611c13-49a2-4cae-ad87-099546459481 | service-status >>>>> | PASSED | all | undercloud | | >>>>> 0:00:06.942 | >>>>> | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux >>>>> | PASSED | all | undercloud | | >>>>> 0:00:02.433 | >>>>> | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version >>>>> | FAILED | all | undercloud | | >>>>> 0:00:03.576 | >>>>> | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed >>>>> | PASSED | undercloud | undercloud | | >>>>> 0:00:02.850 | >>>>> | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed >>>>> | FAILED | allovercloud | No host matched | | >>>>> | >>>>> | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:31.559 | >>>>> | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:02.057 | >>>>> | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | >>>>> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >>>>> | | 0:00:00.884 | >>>>> | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:02.138 | >>>>> | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count >>>>> | PASSED | undercloud | undercloud | | >>>>> 0:00:06.164 | >>>>> | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:00.934 | >>>>> | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:02.456 | >>>>> | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:00.882 | >>>>> | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:00.880 | >>>>> | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:01.934 | >>>>> | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:01.931 | >>>>> | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding >>>>> | PASSED | all | undercloud | | >>>>> 0:00:00.366 | >>>>> >>>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>>> >>>>> Also this step of passing the inventory file is not mentioned anywhere >>>>> in the document. Is there anything I am missing? >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbauza at redhat.com Wed Oct 13 15:49:47 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 13 Oct 2021 17:49:47 +0200 Subject: [nova][placement] No meeting next week Message-ID: As agreed during yesterday's meeting [1], Tuesday Oct 19th's meeting is *CANCELLED* as all of us will be attending the virtual PTG. I'm more than happy tho to see all of you next week thru video ! I'll add the PTG connection details in https://etherpad.opendev.org/p/nova-yoga-ptg -Sylvain [1] https://meetings.opendev.org/meetings/nova/2021/nova.2021-10-12-16.00.log.html#l-189 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Wed Oct 13 04:55:32 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 13 Oct 2021 10:25:32 +0530 Subject: [TripleO] Issue in running Pre-Introspection In-Reply-To: References: Message-ID: Hi Mathieu, Thanks for your reply. I am using Openstack Wallaby Release. The document I was referring to had not specified the usage of any inventory file. https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html Although I figured this out and passed the inventory file as follows [Undercloud] undercloud After passing this all the errors were removed Regards Anirudh Gupta On Tue, Oct 12, 2021 at 8:32 PM Mathieu Bultel wrote: > Hi, > > Which release are you using ? > You have to provide a valid inventory file via the openstack CLI in order > to allow the VF to know which hosts & ips is. > > Mathieu > > On Fri, Oct 1, 2021 at 5:17 PM Anirudh Gupta wrote: > >> Hi Team,, >> >> Upon further debugging, I found that pre-introspection internally calls >> the ansible playbook located at path /usr/share/ansible/validation-playbooks >> File "dhcp-introspection.yaml" has hosts mentioned as undercloud. >> >> - hosts: *undercloud* >> become: true >> vars: >> ... >> ... >> >> >> But the artifacts created for dhcp-introspection at >> location /home/stack/validations/artifacts/_dhcp-introspection.yaml_2021-10-01T11 >> has file *hosts *present which has *localhost* written into it as a >> result of which when command gets executed it gives the error *"Could >> not match supplied host pattern, ignoring: undercloud:"* >> >> Can someone suggest how is this artifacts written in tripleo and the way >> we can change hosts file entry to undercloud so that it can work >> >> Similar is the case with other tasks >> like undercloud-tokenflush, ctlplane-ip-range etc >> >> Regards >> Anirudh Gupta >> >> On Wed, Sep 29, 2021 at 4:47 PM Anirudh Gupta >> wrote: >> >>> Hi Team, >>> >>> I tried installing Undercloud using the below link: >>> >>> >>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud >>> >>> I am getting the following error: >>> >>> (undercloud) [stack at undercloud ~]$ openstack tripleo validator run >>> --group pre-introspection >>> Selected log directory '/home/stack/validations' does not exist. >>> Attempting to create it. 
>>>
>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
>>> | UUID                                 | Validations                     | Status | Host_Group | Status_by_Host  | Unreachable_Hosts | Duration    |
>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
>>> | 7029c1f6-5ab4-465d-82d7-3f29058012ce | check-cpu                       | PASSED | localhost  | localhost       |                   | 0:00:02.531 |
>>> | db059017-30f1-4b97-925e-3f55b586d492 | check-disk-space                | PASSED | localhost  | localhost       |                   | 0:00:04.432 |
>>> | e23dd9a1-90d3-4797-ae0a-b43e55ab6179 | check-ram                       | PASSED | localhost  | localhost       |                   | 0:00:01.324 |
>>> | 598ca02d-258a-44ad-b78d-3877321cdfe6 | check-selinux-mode              | PASSED | localhost  | localhost       |                   | 0:00:01.591 |
>>> | c4435b4c-b432-4a1e-8a99-00638034a884 | check-network-gateway           | FAILED | undercloud | No host matched |                   |             |
>>> | cb1eed23-ef2f-4acd-a43a-86fb09bf0372 | undercloud-disk-space           | FAILED | undercloud | No host matched |                   |             |
>>> | abde5329-9289-4b24-bf16-c4d82b03e67a | undercloud-neutron-sanity-check | FAILED | undercloud | No host matched |                   |             |
>>> | d0e5fdca-ece6-4a37-b759-ed1fac31a10f | ctlplane-ip-range               | FAILED | undercloud | No host matched |                   |             |
>>> | 91511807-225c-4852-bb52-6d0003c51d49 | dhcp-introspection              | FAILED | undercloud | No host matched |                   |             |
>>> | e96f7704-d2fb-465d-972b-47e2f057449c | undercloud-tokenflush           | FAILED | undercloud | No host matched |                   |             |
>>>
>>> As per the validation link,
>>> https://docs.openstack.org/tripleo-validations/wallaby/validations-pre-introspection-details.html
>>>
>>> check-network-gateway
>>>
>>> If gateway in undercloud.conf is different from local_ip, verify that
>>> the gateway exists and is reachable
>>>
>>> Observation - In my case the IPs specified in local_ip and gateway are
>>> both pingable, but still this error is being observed
>>>
>>> ctlplane-ip-range
>>>
>>> Check the number of IP addresses available for the overcloud nodes.
>>> Verify that the number of IP addresses defined in the dhcp_start and
>>> dhcp_end fields in undercloud.conf is not too low.
>>>
>>> - ctlplane_iprange_min_size: 20
>>>
>>> Observation - In my case I have defined more than 20 IPs
>>>
>>> Similarly, for the disk-related issue, I have dedicated 100 GB of space
>>> in /var and /
>>>
>>> Filesystem          Size  Used  Avail  Use%  Mounted on
>>> devtmpfs            12G   0     12G    0%    /dev
>>> tmpfs               12G   84K   12G    1%    /dev/shm
>>> tmpfs               12G   8.7M  12G    1%    /run
>>> tmpfs               12G   0     12G    0%    /sys/fs/cgroup
>>> /dev/mapper/cl-root 100G  2.5G  98G    3%    /
>>> /dev/mapper/cl-home 47G   365M  47G    1%    /home
>>> /dev/mapper/cl-var  103G  1.1G  102G   2%    /var
>>> /dev/vda1           947M  200M  747M   22%   /boot
>>> tmpfs               2.4G  0     2.4G   0%    /run/user/0
>>> tmpfs               2.4G  0     2.4G   0%    /run/user/1000
>>>
>>> Despite setting all the parameters, still I am not able to pass the
>>> pre-introspection checks. "No host matched" is found in the table.
>>>
>>> Regards
>>> Anirudh Gupta
>>>
>> -------------- next part --------------
An HTML attachment was scrubbed...
URL: From anyrude10 at gmail.com  Wed Oct 13 05:41:00 2021
From: anyrude10 at gmail.com (Anirudh Gupta)
Date: Wed, 13 Oct 2021 11:11:00 +0530
Subject: [TripleO] Timeout while introspecting Overcloud Node
In-Reply-To:
References:
Message-ID:

Hi Team,

To further update, this issue is regularly seen at my setup.
I have created 2 different undercloud machines in order to confirm this.
This issue got resolved when I rebooted the undercloud node once after its
installation.

Regards
Anirudh Gupta

On Tue, Oct 5, 2021 at 6:54 PM Anirudh Gupta wrote:

> Hi Team,
>
> We were trying to provision Overcloud Nodes using the TripleO Wallaby
> release.
> For this, on the Undercloud machine (CentOS 8.4), we downloaded the
> ironic-python and overcloud images from the following link:
>
> https://images.rdoproject.org/centos8/wallaby/rdo_trunk/current-tripleo/
>
> After untarring, we executed the command
>
> *openstack overcloud image upload*
>
> This command placed the images in the /var/lib/ironic/images folder
> successfully.
>
> Then we uploaded our instackenv.json file and executed the command
>
> *openstack overcloud node introspect --all-manageable*
>
> On the overcloud node, we are getting a Timeout error while getting the
> agent.kernel and agent.ramdisk images:
>
> *http://10.0.1.10/8088/agent.kernel......Connection timed out
> (http://ipxe.org/4c0a6092 )*
> *http://10.0.1.10/8088/agent.kernel......Connection timed out
> (http://ipxe.org/4c0a6092 )*
>
> However, from another test machine, when I tried *wget
> http://10.0.1.10/8088/agent.kernel * - it successfully worked
>
> A screenshot is attached for reference.
>
> Can someone please help in resolving this issue?
>
> Regards
> Anirudh Gupta
>
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: From anyrude10 at gmail.com  Wed Oct 13 07:57:33 2021
From: anyrude10 at gmail.com (Anirudh Gupta)
Date: Wed, 13 Oct 2021 13:27:33 +0530
Subject: [tripleo] Unable to execute pre-introspection and pre-deployment
 command
In-Reply-To:
References:
Message-ID:

Hi David,

Thanks for your response.
In order to run pre-introspection, I debugged and created an inventory
file of my own having the following content

[Undercloud]
undercloud

With this and also with the file you mentioned, I was able to run
pre-introspection successfully.
(undercloud) [stack at undercloud ~]$ openstack tripleo validator run --group pre-introspection -i tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ | 6cdc7c84-d278-430a-b6fc-3893e42310d8 | check-cpu | PASSED | localhost | localhost | | 0:00:01.116 | | ac0d54a5-51c3-4f52-9dba-2a9b26583591 | check-disk-space | PASSED | localhost | localhost | | 0:00:03.546 | | 3af6fefc-47d0-40b1-bd5b-88e03e0f61ef | check-ram | PASSED | localhost | localhost | | 0:00:01.069 | | e8d17007-6c46-4959-8bfc-dc59dd77ba65 | check-selinux-mode | PASSED | localhost | localhost | | 0:00:01.395 | | 28df7ed3-8cea-4a4d-af34-14c8eec406ea | check-network-gateway | PASSED | undercloud | undercloud | | 0:00:02.347 | | efa6b4ab-de40-42a0-815e-238e5b81995c | undercloud-disk-space | PASSED | undercloud | undercloud | | 0:00:03.657 | | 89293cce-5f30-4626-b326-5cfeff48ab0c | undercloud-neutron-sanity-check | PASSED | undercloud | undercloud | | 0:00:07.715 | | 0da9986f-8fc6-46f7-8936-c8b838c12c7b | ctlplane-ip-range | PASSED | undercloud | undercloud | | 0:00:01.973 | | 89f286ee-cd83-4d05-8d99-bffd03df142b | dhcp-introspection | PASSED | undercloud | undercloud | | 0:00:06.364 | | c5256e61-f787-4a1b-9e1a-1eff0c0b2bb6 | undercloud-tokenflush | PASSED | undercloud | undercloud | | 0:00:01.209 | +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ But passing this file while pre-deployment, it is still failing. 
(undercloud) [stack at undercloud undercloud]$ openstack tripleo validator
run --group pre-deployment -i tripleo-ansible-inventory.yaml

+--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+
| UUID                                 | Validations                         | Status | Host_Group   | Status_by_Host | Unreachable_Hosts | Duration    |
+--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+
| 6deebd06-cf12-4083-a4f2-a31306a719b3 | 512e                                | PASSED | localhost    | localhost      |                   | 0:00:00.511 |
| a2b80c05-40c0-4dd6-9d8d-03be0f5278ba | dns                                 | PASSED | localhost    | localhost      |                   | 0:00:00.428 |
| bd3c32b3-6a0e-424c-9d2e-2898c5bb50ef | service-status                      | PASSED | all          | undercloud     |                   | 0:00:05.923 |
| 7342190b-2ad9-4639-91c7-582ae4b141c6 | validate-selinux                    | PASSED | all          | undercloud     |                   | 0:00:02.299 |
| 665c4d42-e058-4e9d-9ee1-30e29b3a75c8 | package-version                     | FAILED | all          | undercloud     |                   | 0:03:34.295 |
| e0001906-5a8c-4f9b-9ad7-7b5b4d4b8d22 | ceph-ansible-installed              | PASSED | undercloud   | undercloud     |                   | 0:00:02.723 |
| beb5bf3d-3ee8-4fd6-8daa-0cf13023c1f3 | ceph-dependencies-installed         | PASSED | allovercloud | undercloud     |                   | 0:00:02.610 |
| d872e781-4cd2-4509-ad51-74d7f3b3ebbf | tls-everywhere-pre-deployment       | FAILED | undercloud   | undercloud     |                   | 0:00:36.546 |
| bc7e8940-d61a-4349-a5be-a41312b8bd2f | undercloud-debug                    | FAILED | undercloud   | undercloud     |                   | 0:00:01.702 |
| 8de4f037-ac24-4700-b449-405e723a7e50 | collect-flavors-and-verify-profiles | FAILED | undercloud   | undercloud     |                   | 0:00:00.936 |
| 1aadf9f7-a200-499a-826f-06c2ad3f1ab7 | undercloud-heat-purge-deleted       | PASSED | undercloud   | undercloud     |                   | 0:00:02.232 |
| db5204af-a054-4eae-9325-c2f592997b59 | undercloud-process-count            | PASSED | undercloud   | undercloud     |                   | 0:00:07.770 |
| 7fdb9935-a30d-4356-8524-23065da894e4 | default-node-count                  | FAILED | undercloud   | undercloud     |                   | 0:00:00.942 |
| 0868a984-7de0-42f0-8d6b-abb19c72c98b | dhcp-provisioning                   | FAILED | undercloud   | undercloud     |                   | 0:00:01.668 |
| 7796624f-5b13-4d66-8dce-8998f2370625 | ironic-boot-configuration           | FAILED | undercloud   | undercloud     |                   | 0:00:00.935 |
| e087bbae-6371-4e2e-9445-0fcc1f936b96 | network-environment                 | FAILED | undercloud   | undercloud     |                   | 0:00:00.936 |
| db93613d-9cab-4954-949f-d7b2578c20c5 | node-disks                          | FAILED | undercloud   | undercloud     |                   | 0:00:01.741 |
| 66bed170-ffb1-4466-b065-9f6012abdd6e | switch-vlans                        | FAILED | undercloud   | undercloud     |                   | 0:00:01.795 |
| 4911cd84-26cf-4c43-ba5a-645c5c5f20b4 | system-encoding                     | PASSED | all          | undercloud     |                   | 0:00:00.393 |
+--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+

As per the response from Alex, this is probably because these validation
calls might be broken and are not tested in CI.

I am moving forward with the deployment, ignoring these errors as
suggested.
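In short, the workaround: define the undercloud host in a small inventory
file and pass it to the validator with -i. A sketch is below; the playbook
path in the last command is an assumption based on the usual tripleo-ansible
install location, not something confirmed in this thread (see the quoted
message below).

$ cat inventory.yaml
[Undercloud]
undercloud

$ openstack tripleo validator run --group pre-deployment -i inventory.yaml

# alternatively, generate a full static inventory with the playbook David
# mentions (path assumed)
$ ansible-playbook /usr/share/ansible/tripleo-playbooks/cli-config-download.yaml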
Regards
Anirudh Gupta

On Tue, Oct 12, 2021 at 8:02 PM David Peacock wrote:

> Hi Anirudh,
>
> You're hitting a known bug that we're in the process of propagating a fix
> for; sorry for this. :-)
>
> As per a patch we have under review, use the inventory file located under
> the ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml.
> To generate an inventory file, use the playbook in "tripleo-ansible:
> cli-config-download.yaml".
>
> https://review.opendev.org/c/openstack/tripleo-validations/+/813535
>
> Let us know if this doesn't put you on the right track.
>
> Thanks,
> David
>
> On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta wrote:
>
>> Hi Team,
>>
>> I am installing Tripleo using the below link
>>
>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html
>>
>> In the Introspect section, when I executed the command
>> openstack tripleo validator run --group pre-introspection
>>
>> I got the following error:
>>
>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
>> | UUID                                 | Validations                     | Status | Host_Group | Status_by_Host  | Unreachable_Hosts | Duration    |
>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
>> | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu                       | PASSED | localhost  | localhost       |                   | 0:00:01.261 |
>> | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space                | PASSED | localhost  | localhost       |                   | 0:00:04.480 |
>> | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram                       | PASSED | localhost  | localhost       |                   | 0:00:02.173 |
>> | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode              | PASSED | localhost  | localhost       |                   | 0:00:01.546 |
>> | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway           | FAILED | undercloud | No host matched |                   |             |
>> | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space           | FAILED | undercloud | No host matched |                   |             |
>> | 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check | FAILED | undercloud | No host matched |                   |             |
>> | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range               | FAILED | undercloud | No host matched |                   |             |
>> | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection              | FAILED | undercloud | No host matched |                   |             |
>> | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush           | FAILED | undercloud | No host matched |                   |             |
>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+
>>
>> Then I created the following inventory file:
>> [Undercloud]
>> undercloud
>>
>> Passed this command while running the pre-introspection command.
>> It then executed successfully.
>> >> >> But with Pre-deployment, it is still failing even after passing the >> inventory >> >> >> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >> | UUID | Validations >> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | >> Duration | >> >> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >> | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e >> | PASSED | localhost | localhost | | >> 0:00:00.504 | >> | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns >> | PASSED | localhost | localhost | | >> 0:00:00.481 | >> | 93611c13-49a2-4cae-ad87-099546459481 | service-status >> | PASSED | all | undercloud | | >> 0:00:06.942 | >> | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux >> | PASSED | all | undercloud | | >> 0:00:02.433 | >> | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version >> | FAILED | all | undercloud | | >> 0:00:03.576 | >> | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed >> | PASSED | undercloud | undercloud | | >> 0:00:02.850 | >> | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed >> | FAILED | allovercloud | No host matched | | >> | >> | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment >> | FAILED | undercloud | undercloud | | >> 0:00:31.559 | >> | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug >> | FAILED | undercloud | undercloud | | >> 0:00:02.057 | >> | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | >> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >> | | 0:00:00.884 | >> | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted >> | FAILED | undercloud | undercloud | | >> 0:00:02.138 | >> | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count >> | PASSED | undercloud | undercloud | | >> 0:00:06.164 | >> | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count >> | FAILED | undercloud | undercloud | | >> 0:00:00.934 | >> | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning >> | FAILED | undercloud | undercloud | | >> 0:00:02.456 | >> | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration >> | FAILED | undercloud | undercloud | | >> 0:00:00.882 | >> | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment >> | FAILED | undercloud | undercloud | | >> 0:00:00.880 | >> | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks >> | FAILED | undercloud | undercloud | | >> 0:00:01.934 | >> | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans >> | FAILED | undercloud | undercloud | | >> 0:00:01.931 | >> | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding >> | PASSED | all | undercloud | | >> 0:00:00.366 | >> >> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >> >> Also this step of passing the inventory file is not mentioned anywhere in >> the document. Is there anything I am missing? >> >> Regards >> Anirudh Gupta >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anyrude10 at gmail.com Wed Oct 13 08:02:02 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 13 Oct 2021 13:32:02 +0530 Subject: [tripleo] Unable to deploy Overcloud Nodes Message-ID: Hi Team, As per the link below, While executing the command to deploy the overcloud nodes https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud I am executing the command - openstack overcloud deploy --templates On running this command, I am getting the following error *ERROR: (pymysql.err.OperationalError) (1045, "Access denied for user 'heat'@'10.255.255.4' (using password: YES)")(Background on this error at: http://sqlalche.me/e/e3q8 )* 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured while running the command: subprocess.CalledProcessError: Command '['sudo', 'podman', 'run', '--rm', '--user', 'heat', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', 'db_sync']' returned non-zero exit status 1. 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last): 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, self).run(parsed_args) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in run 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, self).run(parsed_args) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = self.take_action(parsed_args) or 0 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 1277, in take_action 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud self.setup_ephemeral_heat(parsed_args) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 767, in setup_ephemeral_heat 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud utils.launch_heat(self.heat_launcher, restore_db=restore_db) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 2706, in launch_heat 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud launcher.heat_db_sync(restore_db) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/heat_launcher.py", line 530, in heat_db_sync 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 
subprocess.check_call(cmd)
2021-10-13 05:46:28.390 183680 ERROR
tripleoclient.v1.overcloud_deploy.DeployOvercloud   File
"/usr/lib64/python3.6/subprocess.py", line 311, in check_call
2021-10-13 05:46:28.390 183680 ERROR
tripleoclient.v1.overcloud_deploy.DeployOvercloud     raise
CalledProcessError(retcode, cmd)
2021-10-13 05:46:28.390 183680 ERROR
tripleoclient.v1.overcloud_deploy.DeployOvercloud
subprocess.CalledProcessError: Command '['sudo', 'podman', 'run', '--rm',
'--user', 'heat', '--volume',
'/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z',
'--volume',
'/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z',
'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage',
'db_sync']' returned non-zero exit status 1.
2021-10-13 05:46:28.390 183680 ERROR
tripleoclient.v1.overcloud_deploy.DeployOvercloud
2021-10-13 05:46:28.400 183680 ERROR openstack [-] Command '['sudo',
'podman', 'run', '--rm', '--user', 'heat', '--volume',
'/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z',
'--volume',
'/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z',
'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage',
'db_sync']' returned non-zero exit status 1.: subprocess.CalledProcessError:
Command '['sudo', 'podman', 'run', '--rm', '--user', 'heat', '--volume',
'/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z',
'--volume',
'/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z',
'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage',
'db_sync']' returned non-zero exit status 1.
2021-10-13 05:46:28.401 183680 INFO osc_lib.shell [-] END return value: 1

Can someone please help in resolving this issue? Are there any parameters
or templates that need to be passed in order to make it work?

Regards
Anirudh Gupta
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From faisalsheikh.cyber at gmail.com  Wed Oct 13 11:19:48 2021
From: faisalsheikh.cyber at gmail.com (Faisal Sheikh)
Date: Wed, 13 Oct 2021 16:19:48 +0500
Subject: [wallaby][neutron][ovn] SSL connection to OVN-NB/SB OVSDB
Message-ID:

Hi,

I am using the OpenStack Wallaby release with OVN on Ubuntu 20.04. My
environment consists of 2 compute nodes and 1 controller node.
ovs-vswitchd (Open vSwitch) 2.15.0
Ubuntu Kernel Version: 5.4.0-88-generic
compute node1 172.16.30.1
compute node2 172.16.30.3
controller/Network node IP 172.16.30.46

I want to configure the OVN southbound and northbound databases to listen
on an SSL connection. I set a certificate, private key, and CA certificate
on both compute nodes and the controller node in
/etc/neutron/plugins/ml2/ml2_conf.ini and use the string ssl:IP:PORT to
connect to the southbound/northbound database, but I am unable to
establish a connection over SSL. It's not connecting to ovsdb-server on
6641/6642.
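The configuration pattern I am following is roughly the one below (a sketch
of the intended setup; the certificate/key paths are placeholders, and the
option names are the ones exposed by the neutron OVN mechanism driver):

# on the controller: have the NB/SB ovsdb-servers present SSL and listen on pssl ports
ovn-nbctl set-ssl /etc/ovn/privkey.pem /etc/ovn/cert.pem /etc/ovn/cacert.pem
ovn-nbctl set-connection pssl:6641
ovn-sbctl set-ssl /etc/ovn/privkey.pem /etc/ovn/cert.pem /etc/ovn/cacert.pem
ovn-sbctl set-connection pssl:6642

# in /etc/neutron/plugins/ml2/ml2_conf.ini, under [ovn]:
ovn_nb_connection = ssl:172.16.30.46:6641
ovn_sb_connection = ssl:172.16.30.46:6642
ovn_nb_private_key = /etc/neutron/ssl/privkey.pem
ovn_nb_certificate = /etc/neutron/ssl/cert.pem
ovn_nb_ca_cert = /etc/neutron/ssl/cacert.pem
ovn_sb_private_key = /etc/neutron/ssl/privkey.pem
ovn_sb_certificate = /etc/neutron/ssl/cert.pem
ovn_sb_ca_cert = /etc/neutron/ssl/cacert.pem

# on the compute nodes, ovn-controller needs its own SSL material and an ssl: remote
ovs-vsctl set-ssl /etc/ovn/privkey.pem /etc/ovn/cert.pem /etc/ovn/cacert.pem
ovs-vsctl set open . external_ids:ovn-remote=ssl:172.16.30.46:6642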
Error in the neutron logs is like below:

2021-10-12 17:15:27.728 50561 WARNING neutron.quota.resource_registry
[req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] security_group_rule
is already registered
2021-10-12 17:15:27.754 50561 WARNING keystonemiddleware.auth_token
[req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] AuthToken middleware
is set with keystone_authtoken.service_token_roles_required set to False.
This is backwards compatible but deprecated behaviour. Please set this to
True.
2021-10-12 17:15:27.761 50561 INFO oslo_service.service
[req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Starting 1 workers
2021-10-12 17:15:27.768 50561 INFO neutron.service
[req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Neutron service
started, listening on 0.0.0.0:9696
2021-10-12 17:15:27.776 50561 ERROR ovsdbapp.backend.ovs_idl.idlutils
[req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unable to open stream
to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1
2021-10-12 17:15:27.779 50561 CRITICAL neutron
[req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unhandled error:
neutron_lib.callbacks.exceptions.CallbackFailure: Callback
neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793
failed with "Could not retrieve schema from ssl:172.16.30.46:6641"
2021-10-12 17:15:27.779 50561 ERROR neutron Traceback (most recent call last):
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/bin/neutron-server", line 10, in
2021-10-12 17:15:27.779 50561 ERROR neutron     sys.exit(main())
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", line 19, in main
2021-10-12 17:15:27.779 50561 ERROR neutron     server.boot_server(wsgi_eventlet.eventlet_wsgi_server)
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, in boot_server
2021-10-12 17:15:27.779 50561 ERROR neutron     server_func()
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line 24, in eventlet_wsgi_server
2021-10-12 17:15:27.779 50561 ERROR neutron     neutron_api = service.serve_wsgi(service.NeutronApiService)
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in serve_wsgi
2021-10-12 17:15:27.779 50561 ERROR neutron     registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service)
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", line 60, in publish
2021-10-12 17:15:27.779 50561 ERROR neutron     _get_callback_manager().publish(resource, event, trigger, payload=payload)
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 149, in publish
2021-10-12 17:15:27.779 50561 ERROR neutron     return self.notify(resource, event, trigger, payload=payload)
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in _wrapped
2021-10-12 17:15:27.779 50561 ERROR neutron     raise db_exc.RetryRequest(e)
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__
2021-10-12 17:15:27.779 50561 ERROR neutron     self.force_reraise()
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise
2021-10-12 17:15:27.779 50561 ERROR neutron     raise self.value
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in _wrapped
2021-10-12 17:15:27.779 50561 ERROR neutron     return function(*args, **kwargs)
2021-10-12 17:15:27.779 50561 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 174, in notify
2021-10-12 17:15:27.779 50561 ERROR neutron     raise exceptions.CallbackFailure(errors=errors)
2021-10-12 17:15:27.779 50561 ERROR neutron
neutron_lib.callbacks.exceptions.CallbackFailure: Callback neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 failed with "Could not retrieve schema from ssl:172.16.30.46:6641" 2021-10-12 17:15:27.779 50561 ERROR neutron 2021-10-12 17:15:27.783 50572 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager [-] Error during notification for neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-373774 process, after_init: Exception: Could not retrieve schema from ssl:172.16.30.46:6641 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager Traceback (most recent call last): 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 197, in _notify_loop 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager callback(resource, event, trigger, **kwargs) 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 294, in post_fork_initialize 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager self._wait_for_pg_drop_event() 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 357, in _wait_for_pg_drop_event 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 136, in nb_schema_helper 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line 721, in __get__ 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager return self.func(owner) 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", line 102, in schema_helper 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 215, in get_schema_helper 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager return create_schema_helper(fetch_schema_json(connection, schema_name)) 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 204, in fetch_schema_json 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager raise Exception("Could not retrieve schema from %s" % connection) 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager Exception: Could not retrieve schema from ssl:172.16.30.46:6641 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager 2021-10-12 17:15:27.787 50572 INFO neutron.wsgi [-] (50572) wsgi starting up on http://0.0.0.0:9696 2021-10-12 17:15:27.924 50572 INFO 
oslo_service.service [-] Parent process has died unexpectedly, exiting 2021-10-12 17:15:27.925 50572 INFO neutron.wsgi [-] (50572) wsgi exited, is_accepting=True 2021-10-12 17:15:29.709 50573 INFO neutron.common.config [-] Logging enabled! 2021-10-12 17:15:29.710 50573 INFO neutron.common.config [-] /usr/bin/neutron-server version 18.0.0 2021-10-12 17:15:29.712 50573 INFO neutron.common.config [-] Logging enabled! 2021-10-12 17:15:29.713 50573 INFO neutron.common.config [-] /usr/bin/neutron-server version 18.0.0 2021-10-12 17:15:29.899 50573 INFO keyring.backend [-] Loading KWallet 2021-10-12 17:15:29.904 50573 INFO keyring.backend [-] Loading SecretService 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading Windows 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading chainer 2021-10-12 17:15:29.908 50573 INFO keyring.backend [-] Loading macOS 2021-10-12 17:15:29.927 50573 INFO neutron.manager [-] Loading core plugin: ml2 2021-10-12 17:15:30.355 50573 INFO neutron.plugins.ml2.managers [-] Configured type driver names: ['flat', 'geneve'] 2021-10-12 17:15:30.357 50573 INFO neutron.plugins.ml2.drivers.type_flat [-] Arbitrary flat physical_network names allowed 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] Loaded type driver names: ['flat', 'geneve'] 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] Registered types: dict_keys(['flat', 'geneve']) 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] Tenant network_types: ['geneve'] 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] Configured extension driver names: ['port_security', 'qos'] 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] Loaded extension driver names: ['port_security', 'qos'] 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] Registered extension drivers: ['port_security', 'qos'] 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] Configured mechanism driver names: ['ovn'] 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] Loaded mechanism driver names: ['ovn'] 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] Registered mechanism drivers: ['ovn'] 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] No mechanism drivers provide segment reachability information for agent scheduling. 
2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] Initializing driver for type 'flat' 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.drivers.type_flat [-] ML2 FlatTypeDriver initialization complete 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] Initializing driver for type 'geneve' 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.drivers.type_tunnel [-] geneve ID ranges: [(1, 65536)] 2021-10-12 17:15:32.555 50573 INFO neutron.plugins.ml2.managers [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing extension driver 'port_security' 2021-10-12 17:15:32.555 50573 INFO neutron.plugins.ml2.extensions.port_security [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] PortSecurityExtensionDriver initialization complete 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing extension driver 'qos' 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing mechanism driver 'ovn' 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting OVNMechanismDriver 2021-10-12 17:15:32.562 50573 WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Firewall driver configuration is ignored 2021-10-12 17:15:32.586 50573 INFO neutron.services.logapi.drivers.ovn.driver [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] OVN logging driver registered 2021-10-12 17:15:32.588 50573 INFO neutron.plugins.ml2.plugin [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Modular L2 Plugin initialization complete 2021-10-12 17:15:32.589 50573 INFO neutron.plugins.ml2.managers [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Got port-security extension from driver 'port_security' 2021-10-12 17:15:32.589 50573 INFO neutron.extensions.vlantransparent [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Disabled vlantransparent extension. 
2021-10-12 17:15:32.589 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: ovn-router 2021-10-12 17:15:32.597 50573 INFO neutron.services.ovn_l3.plugin [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting OVNL3RouterPlugin 2021-10-12 17:15:32.597 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: qos 2021-10-12 17:15:32.600 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: metering 2021-10-12 17:15:32.603 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: port_forwarding 2021-10-12 17:15:32.605 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading service plugin ovn-router, it is required by port_forwarding 2021-10-12 17:15:32.606 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: segments 2021-10-12 17:15:32.684 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: auto_allocate 2021-10-12 17:15:32.685 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: tag 2021-10-12 17:15:32.687 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: timestamp 2021-10-12 17:15:32.689 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: network_ip_availability 2021-10-12 17:15:32.691 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: flavors 2021-10-12 17:15:32.693 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: revisions 2021-10-12 17:15:32.695 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing extension manager. 
2021-10-12 17:15:32.696 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension address-group not supported by any of loaded plugins 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: address-scope 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension router-admin-state-down-before-update not supported by any of loaded plugins 2021-10-12 17:15:32.698 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: agent 2021-10-12 17:15:32.699 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension agent-resources-synced not supported by any of loaded plugins 2021-10-12 17:15:32.700 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: allowed-address-pairs 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: auto-allocated-topology 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: availability_zone 2021-10-12 17:15:32.702 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension availability_zone_filter not supported by any of loaded plugins 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension data-plane-status not supported by any of loaded plugins 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: default-subnetpools 2021-10-12 17:15:32.704 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dhcp_agent_scheduler not supported by any of loaded plugins 2021-10-12 17:15:32.705 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dns-integration not supported by any of loaded plugins 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dns-domain-ports not supported by any of loaded plugins 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dvr not supported by any of loaded plugins 2021-10-12 17:15:32.707 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension empty-string-filtering not supported by any of loaded plugins 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension expose-l3-conntrack-helper not supported by any of loaded plugins 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: expose-port-forwarding-in-fip 2021-10-12 17:15:32.709 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: external-net 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: extra_dhcp_opt 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: extraroute 2021-10-12 17:15:32.711 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - 
- -] Extension extraroute-atomic not supported by any of loaded plugins 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension filter-validation not supported by any of loaded plugins 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: floating-ip-port-forwarding-description 2021-10-12 17:15:32.713 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: fip-port-details 2021-10-12 17:15:32.714 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: flavors 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: floating-ip-port-forwarding 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension floatingip-pools not supported by any of loaded plugins 2021-10-12 17:15:32.716 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: ip_allocation 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension ip-substring-filtering not supported by any of loaded plugins 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: l2_adjacency 2021-10-12 17:15:32.718 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: router 2021-10-12 17:15:32.719 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-conntrack-helper not supported by any of loaded plugins 2021-10-12 17:15:32.720 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: ext-gw-mode 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-ha not supported by any of loaded plugins 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-flavors not supported by any of loaded plugins 2021-10-12 17:15:32.722 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-port-ip-change-not-allowed not supported by any of loaded plugins 2021-10-12 17:15:32.723 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3_agent_scheduler not supported by any of loaded plugins 2021-10-12 17:15:32.724 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension logging not supported by any of loaded plugins 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: metering 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: metering_source_and_destination_fields 2021-10-12 17:15:32.726 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: multi-provider 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: net-mtu 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: 
net-mtu-writable 2021-10-12 17:15:32.728 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: network_availability_zone 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: network-ip-availability 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension network-segment-range not supported by any of loaded plugins 2021-10-12 17:15:32.730 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: pagination 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: port-device-profile 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension port-mac-address-regenerate not supported by any of loaded plugins 2021-10-12 17:15:32.732 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: port-numa-affinity-policy 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: port-resource-request 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: binding 2021-10-12 17:15:32.734 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension binding-extended not supported by any of loaded plugins 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: port-security 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: project-id 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: provider 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos 2021-10-12 17:15:32.737 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-bw-limit-direction 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-bw-minimum-ingress 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-default 2021-10-12 17:15:32.739 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-fip 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension qos-gateway-ip not supported by any of loaded plugins 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-port-network-policy 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-rule-type-details 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-rules-alias 2021-10-12 17:15:32.742 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: quotas 2021-10-12 17:15:32.743 50573 INFO 
neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: quota_details 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: rbac-policies 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension rbac-address-group not supported by any of loaded plugins 2021-10-12 17:15:32.745 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: rbac-address-scope 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension rbac-security-groups not supported by any of loaded plugins 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension rbac-subnetpool not supported by any of loaded plugins 2021-10-12 17:15:32.747 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: revision-if-match 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-revisions 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: router_availability_zone 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension router-service-type not supported by any of loaded plugins 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: security-groups-normalized-cidr 2021-10-12 17:15:32.750 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension port-security-groups-filtering not supported by any of loaded plugins 2021-10-12 17:15:32.751 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: security-groups-remote-address-group 2021-10-12 17:15:32.756 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: security-group 2021-10-12 17:15:32.757 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: segment 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: segments-peer-subnet-host-routes 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: service-type 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: sorting 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-segment 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-description 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension stateful-security-group not supported by any of loaded plugins 2021-10-12 17:15:32.761 50573 WARNING neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Did not find expected name "Stdattrs_common" in /usr/lib/python3/dist-packages/neutron/extensions/stdattrs_common.py 2021-10-12 17:15:32.762 
50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension subnet-dns-publish-fixed-ip not supported by any of loaded plugins 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension subnet_onboard not supported by any of loaded plugins 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: subnet-segmentid-writable 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension subnet-service-types not supported by any of loaded plugins 2021-10-12 17:15:32.764 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: subnet_allocation 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension subnetpool-prefix-ops not supported by any of loaded plugins 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension tag-ports-during-bulk-creation not supported by any of loaded plugins 2021-10-12 17:15:32.766 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-tag 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-timestamp 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension trunk not supported by any of loaded plugins 2021-10-12 17:15:32.768 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension trunk-details not supported by any of loaded plugins 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension uplink-status-propagation not supported by any of loaded plugins 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension vlan-transparent not supported by any of loaded plugins 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:network 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:subnet 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:subnetpool 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:port 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:router 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:floatingip 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of CountableResource for resource:rbac_policy 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry 
[req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:security_group 2021-10-12 17:15:32.779 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:security_group_rule 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:router 2021-10-12 17:15:32.781 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] router is already registered 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:floatingip 2021-10-12 17:15:32.782 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] floatingip is already registered 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of CountableResource for resource:rbac_policy 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] rbac_policy is already registered 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:security_group 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] security_group is already registered 2021-10-12 17:15:32.784 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:security_group_rule 2021-10-12 17:15:32.784 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] security_group_rule is already registered 2021-10-12 17:15:32.810 50573 WARNING keystonemiddleware.auth_token [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True. 
2021-10-12 17:15:32.816 50573 INFO oslo_service.service [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting 1 workers
2021-10-12 17:15:32.824 50573 INFO neutron.service [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Neutron service started, listening on 0.0.0.0:9696
2021-10-12 17:15:32.831 50573 ERROR ovsdbapp.backend.ovs_idl.idlutils [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1
2021-10-12 17:15:32.834 50573 CRITICAL neutron [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unhandled error: neutron_lib.callbacks.exceptions.CallbackFailure: Callback neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 failed with "Could not retrieve schema from ssl:172.16.30.46:6641"
2021-10-12 17:15:32.834 50573 ERROR neutron Traceback (most recent call last):
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/bin/neutron-server", line 10, in <module>
2021-10-12 17:15:32.834 50573 ERROR neutron     sys.exit(main())
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", line 19, in main
2021-10-12 17:15:32.834 50573 ERROR neutron     server.boot_server(wsgi_eventlet.eventlet_wsgi_server)
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, in boot_server
2021-10-12 17:15:32.834 50573 ERROR neutron     server_func()
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line 24, in eventlet_wsgi_server
2021-10-12 17:15:32.834 50573 ERROR neutron     neutron_api = service.serve_wsgi(service.NeutronApiService)
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in serve_wsgi
2021-10-12 17:15:32.834 50573 ERROR neutron     registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service)
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", line 60, in publish
2021-10-12 17:15:32.834 50573 ERROR neutron     _get_callback_manager().publish(resource, event, trigger, payload=payload)
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 149, in publish
2021-10-12 17:15:32.834 50573 ERROR neutron     return self.notify(resource, event, trigger, payload=payload)
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in _wrapped
2021-10-12 17:15:32.834 50573 ERROR neutron     raise db_exc.RetryRequest(e)
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__
2021-10-12 17:15:32.834 50573 ERROR neutron     self.force_reraise()
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise
2021-10-12 17:15:32.834 50573 ERROR neutron     raise self.value
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in _wrapped
2021-10-12 17:15:32.834 50573 ERROR neutron     return function(*args, **kwargs)
2021-10-12 17:15:32.834 50573 ERROR neutron   File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 174, in notify
2021-10-12 17:15:32.834 50573 ERROR neutron     raise exceptions.CallbackFailure(errors=errors)
2021-10-12 17:15:32.834 50573 ERROR neutron neutron_lib.callbacks.exceptions.CallbackFailure: Callback neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 failed with "Could not retrieve schema from ssl:172.16.30.46:6641"
2021-10-12 17:15:32.834 50573 ERROR neutron
2021-10-12 17:15:32.838 50582 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager [-] Error during notification for neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-904522 process, after_init: Exception: Could not retrieve schema from ssl:172.16.30.46:6641
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager Traceback (most recent call last):
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 197, in _notify_loop
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager     callback(resource, event, trigger, **kwargs)
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 294, in post_fork_initialize
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager     self._wait_for_pg_drop_event()
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 357, in _wait_for_pg_drop_event
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager     ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self,
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 136, in nb_schema_helper
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager     return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line 721, in __get__
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager     return self.func(owner)
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", line 102, in schema_helper
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager     cls._schema_helper = idlutils.get_schema_helper(cls.connection_string,
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 215, in get_schema_helper
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager     return create_schema_helper(fetch_schema_json(connection, schema_name))
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 204, in fetch_schema_json
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager     raise Exception("Could not retrieve schema from %s" % connection)
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager Exception: Could not retrieve schema from ssl:172.16.30.46:6641
2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager
2021-10-12 17:15:32.842 50582 INFO neutron.wsgi [-] (50582) wsgi starting up on http://0.0.0.0:9696
2021-10-12 17:15:32.961 50582 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting
2021-10-12 17:15:32.963 50582 INFO neutron.wsgi [-] (50582) wsgi exited, is_accepting=True
2021-10-12 17:15:34.722 50583 INFO neutron.common.config [-] Logging enabled!

I would really appreciate any input in this regard.

Best regards,
Faisal Sheikh
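[For anyone hitting the same "Could not retrieve schema" failure: the error means the Neutron server could not complete a TLS connection to the OVN northbound database before forking its workers. One way to test the endpoint outside of Neutron is a probe along the following lines -- a minimal sketch; the /etc/neutron/ovn/ certificate paths are assumptions, and should be replaced with the values of ovn_nb_private_key, ovn_nb_certificate and ovn_nb_ca_cert from the [ovn] section of ml2_conf.ini.]

    # Check the raw TLS handshake first; a failure here points at the
    # certificates or at the OVN central node rather than at Neutron itself.
    openssl s_client -connect 172.16.30.46:6641 \
        -key /etc/neutron/ovn/key.pem \
        -cert /etc/neutron/ovn/cert.pem \
        -CAfile /etc/neutron/ovn/cacert.pem </dev/null

    # If the handshake succeeds, fetch the schema the same way ovsdbapp does:
    ovsdb-client get-schema ssl:172.16.30.46:6641 OVN_Northbound \
        --private-key=/etc/neutron/ovn/key.pem \
        --certificate=/etc/neutron/ovn/cert.pem \
        --ca-cert=/etc/neutron/ovn/cacert.pem

[If both commands succeed from the controller, the remaining suspects are the ovn_nb_connection URL itself and file permissions on the private key.]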
From anyrude10 at gmail.com  Wed Oct 13 15:26:18 2021
From: anyrude10 at gmail.com (Anirudh Gupta)
Date: Wed, 13 Oct 2021 20:56:18 +0530
Subject: [tripleo] Unable to execute pre-introspection and pre-deployment
 command
In-Reply-To: 
References: 
Message-ID: 

Hi David

I am trying this on Openstack Wallaby Release.

Regards
Anirudh Gupta

On Wed, 13 Oct, 2021, 6:40 pm David Peacock, wrote:

> Sounds like progress, thanks for the update.
>
> For clarification, which version are you attempting to deploy? Upstream
> master?
>
> Thanks,
> David
>
> On Wed, Oct 13, 2021 at 3:57 AM Anirudh Gupta wrote:
>
>> Hi David,
>>
>> Thanks for your response.
>> In order to run pre-introspection, I debugged and created an inventory
>> file of my own having the following content
>>
>> [Undercloud]
>> undercloud
>>
>> With this and also with the file you mentioned, I was able to run
>> pre-introspection successfully.
>>
>> (undercloud) [stack at undercloud ~]$ openstack tripleo validator run --group pre-introspection -i tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml
>> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+
>> | UUID                                 | Validations                     | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration    |
>> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+
>> | 6cdc7c84-d278-430a-b6fc-3893e42310d8 | check-cpu                       | PASSED | localhost  | localhost      |                   | 0:00:01.116 |
>> | ac0d54a5-51c3-4f52-9dba-2a9b26583591 | check-disk-space                | PASSED | localhost  | localhost      |                   | 0:00:03.546 |
>> | 3af6fefc-47d0-40b1-bd5b-88e03e0f61ef | check-ram                       | PASSED | localhost  | localhost      |                   | 0:00:01.069 |
>> | e8d17007-6c46-4959-8bfc-dc59dd77ba65 | check-selinux-mode              | PASSED | localhost  | localhost      |                   | 0:00:01.395 |
>> | 28df7ed3-8cea-4a4d-af34-14c8eec406ea | check-network-gateway           | PASSED | undercloud | undercloud     |                   | 0:00:02.347 |
>> | efa6b4ab-de40-42a0-815e-238e5b81995c | undercloud-disk-space           | PASSED | undercloud | undercloud     |                   | 0:00:03.657 |
>> | 89293cce-5f30-4626-b326-5cfeff48ab0c | undercloud-neutron-sanity-check | PASSED | undercloud | undercloud     |                   | 0:00:07.715 |
>> | 0da9986f-8fc6-46f7-8936-c8b838c12c7b | ctlplane-ip-range               | PASSED | undercloud | undercloud     |                   | 0:00:01.973 |
>> | 89f286ee-cd83-4d05-8d99-bffd03df142b | dhcp-introspection              | PASSED | undercloud | undercloud     |                   | 0:00:06.364 |
>> | c5256e61-f787-4a1b-9e1a-1eff0c0b2bb6 | undercloud-tokenflush           | PASSED | undercloud | undercloud     |                   | 0:00:01.209 |
>> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+
>>
>> But passing this file while pre-deployment, it is still failing.
>> (undercloud) [stack at undercloud undercloud]$ openstack tripleo validator run --group pre-deployment -i tripleo-ansible-inventory.yaml
>> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+
>> | UUID                                 | Validations                         | Status | Host_Group   | Status_by_Host | Unreachable_Hosts | Duration    |
>> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+
>> | 6deebd06-cf12-4083-a4f2-a31306a719b3 | 512e                                | PASSED | localhost    | localhost      |                   | 0:00:00.511 |
>> | a2b80c05-40c0-4dd6-9d8d-03be0f5278ba | dns                                 | PASSED | localhost    | localhost      |                   | 0:00:00.428 |
>> | bd3c32b3-6a0e-424c-9d2e-2898c5bb50ef | service-status                      | PASSED | all          | undercloud     |                   | 0:00:05.923 |
>> | 7342190b-2ad9-4639-91c7-582ae4b141c6 | validate-selinux                    | PASSED | all          | undercloud     |                   | 0:00:02.299 |
>> | 665c4d42-e058-4e9d-9ee1-30e29b3a75c8 | package-version                     | FAILED | all          | undercloud     |                   | 0:03:34.295 |
>> | e0001906-5a8c-4f9b-9ad7-7b5b4d4b8d22 | ceph-ansible-installed              | PASSED | undercloud   | undercloud     |                   | 0:00:02.723 |
>> | beb5bf3d-3ee8-4fd6-8daa-0cf13023c1f3 | ceph-dependencies-installed         | PASSED | allovercloud | undercloud     |                   | 0:00:02.610 |
>> | d872e781-4cd2-4509-ad51-74d7f3b3ebbf | tls-everywhere-pre-deployment       | FAILED | undercloud   | undercloud     |                   | 0:00:36.546 |
>> | bc7e8940-d61a-4349-a5be-a41312b8bd2f | undercloud-debug                    | FAILED | undercloud   | undercloud     |                   | 0:00:01.702 |
>> | 8de4f037-ac24-4700-b449-405e723a7e50 | collect-flavors-and-verify-profiles | FAILED | undercloud   | undercloud     |                   | 0:00:00.936 |
>> | 1aadf9f7-a200-499a-826f-06c2ad3f1ab7 | undercloud-heat-purge-deleted       | PASSED | undercloud   | undercloud     |                   | 0:00:02.232 |
>> | db5204af-a054-4eae-9325-c2f592997b59 | undercloud-process-count            | PASSED | undercloud   | undercloud     |                   | 0:00:07.770 |
>> | 7fdb9935-a30d-4356-8524-23065da894e4 | default-node-count                  | FAILED | undercloud   | undercloud     |                   | 0:00:00.942 |
>> | 0868a984-7de0-42f0-8d6b-abb19c72c98b | dhcp-provisioning                   | FAILED | undercloud   | undercloud     |                   | 0:00:01.668 |
>> | 7796624f-5b13-4d66-8dce-8998f2370625 | ironic-boot-configuration           | FAILED | undercloud   | undercloud     |                   | 0:00:00.935 |
>> | e087bbae-6371-4e2e-9445-0fcc1f936b96 | network-environment                 | FAILED | undercloud   | undercloud     |                   | 0:00:00.936 |
>> | db93613d-9cab-4954-949f-d7b2578c20c5 | node-disks                          | FAILED | undercloud   | undercloud     |                   | 0:00:01.741 |
>> | 66bed170-ffb1-4466-b065-9f6012abdd6e | switch-vlans                        | FAILED | undercloud   | undercloud     |                   | 0:00:01.795 |
>> | 4911cd84-26cf-4c43-ba5a-645c5c5f20b4 | system-encoding                     | PASSED | all          | undercloud     |                   | 0:00:00.393 |
>> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+
>>
>> As per the response from Alex, this could probably be because these
>> validation calls might be broken and are not tested in CI.
>>
>> I am moving forward with the deployment, ignoring these errors as suggested.
>>
>> Regards
>> Anirudh Gupta
>>
>>
>> On Tue, Oct 12, 2021 at 8:02 PM David Peacock wrote:
>>
>>> Hi Anirudh,
>>>
>>> You're hitting a known bug that we're in the process of propagating a
>>> fix for; sorry for this.
:-) >>> >>> As per a patch we have under review, use the inventory file located >>> under ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml. >>> To generate an inventory file, use the playbook in "tripleo-ansible: >>> cli-config-download.yaml". >>> >>> https://review.opendev.org/c/openstack/tripleo-validations/+/813535 >>> >>> Let us know if this doesn't put you on the right track. >>> >>> Thanks, >>> David >>> >>> On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta >>> wrote: >>> >>>> Hi Team, >>>> >>>> I am installing Tripleo using the below link >>>> >>>> >>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html >>>> >>>> In the Introspect section, When I executed the command >>>> openstack tripleo validator run --group pre-introspection >>>> >>>> I got the following error: >>>> >>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>> | UUID | Validations >>>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration >>>> | >>>> >>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>> | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu >>>> | PASSED | localhost | localhost | | 0:00:01.261 >>>> | >>>> | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space >>>> | PASSED | localhost | localhost | | 0:00:04.480 | >>>> | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram >>>> | PASSED | localhost | localhost | | 0:00:02.173 >>>> | >>>> | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode >>>> | PASSED | localhost | localhost | | 0:00:01.546 | >>>> | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway >>>> | FAILED | undercloud | No host matched | | >>>> | >>>> | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space >>>> | FAILED | undercloud | No host matched | | >>>> | >>>> | 2f0239db-d530-48eb-b606-f82179e72e50 | >>>> undercloud-neutron-sanity-check | FAILED | undercloud | No host matched | >>>> | | >>>> | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range >>>> | FAILED | undercloud | No host matched | | >>>> | >>>> | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection >>>> | FAILED | undercloud | No host matched | | | >>>> | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush >>>> | FAILED | undercloud | No host matched | | >>>> | >>>> >>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>> >>>> >>>> Then I created the following inventory file: >>>> [Undercloud] >>>> undercloud >>>> >>>> Passed this command while running the pre-introspection command. >>>> It then executed successfully. 
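[A side note on the inventory format, for anyone reproducing this: the minimal hosts file shown above can also carry connection variables if the validations have to reach the undercloud over SSH. A sketch -- the file name, IP, and user are assumptions to adapt:]

    [Undercloud]
    undercloud ansible_host=192.168.24.1 ansible_user=stack ansible_become=true

    $ openstack tripleo validator run --group pre-introspection -i ./inventory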
>>>> >>>> >>>> But with Pre-deployment, it is still failing even after passing the >>>> inventory >>>> >>>> >>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>> | UUID | Validations >>>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | >>>> Duration | >>>> >>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>> | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e >>>> | PASSED | localhost | localhost | | >>>> 0:00:00.504 | >>>> | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns >>>> | PASSED | localhost | localhost | | >>>> 0:00:00.481 | >>>> | 93611c13-49a2-4cae-ad87-099546459481 | service-status >>>> | PASSED | all | undercloud | | >>>> 0:00:06.942 | >>>> | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux >>>> | PASSED | all | undercloud | | >>>> 0:00:02.433 | >>>> | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version >>>> | FAILED | all | undercloud | | >>>> 0:00:03.576 | >>>> | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed >>>> | PASSED | undercloud | undercloud | | >>>> 0:00:02.850 | >>>> | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed >>>> | FAILED | allovercloud | No host matched | | >>>> | >>>> | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:31.559 | >>>> | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:02.057 | >>>> | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | >>>> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >>>> | | 0:00:00.884 | >>>> | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:02.138 | >>>> | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count >>>> | PASSED | undercloud | undercloud | | >>>> 0:00:06.164 | >>>> | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:00.934 | >>>> | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:02.456 | >>>> | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:00.882 | >>>> | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:00.880 | >>>> | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:01.934 | >>>> | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:01.931 | >>>> | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding >>>> | PASSED | all | undercloud | | >>>> 0:00:00.366 | >>>> >>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>> >>>> Also this step of passing the inventory file is not mentioned anywhere >>>> in the document. Is there anything I am missing? >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From helena at openstack.org  Wed Oct 13 15:41:52 2021
From: helena at openstack.org (helena at openstack.org)
Date: Wed, 13 Oct 2021 10:41:52 -0500 (CDT)
Subject: [tc] [ptl] 2021 User Survey Project Specific Feedback Responses
Message-ID: <1634139712.69166937@apps.rackspace.com>

An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 2021 User Survey Project Specific Feedback Responses.csv
Type: text/csv
Size: 259676 bytes
Desc: not available
URL: 

From rosmaita.fossdev at gmail.com  Wed Oct 13 16:02:43 2021
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Wed, 13 Oct 2021 12:02:43 -0400
Subject: [cinder] festival of XS reviews 15 October 2021
Message-ID: 

Hello Cinder community members,

This is a reminder that the most recent edition of the Cinder Festival of
XS Reviews will be held at the end of this week on Friday 15 October.

who: Everyone!
what: The Cinder Festival of XS Reviews
when: Friday 15 October 2021 from 1400-1600 UTC
where: https://meetpad.opendev.org/cinder-festival-of-reviews

This recurring meeting can be placed on your calendar by using this handy
ICS file:
http://eavesdrop.openstack.org/calendars/cinder-festival-of-reviews.ics

See you there!
brian

From kennelson11 at gmail.com  Wed Oct 13 16:05:56 2021
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Wed, 13 Oct 2021 09:05:56 -0700
Subject: Learn how users are using OpenStack at OpenInfra Live Keynotes
Message-ID: 

Hello Everyone,

You might have heard that the OpenInfra Foundation is hosting its largest
free virtual event of this year: OpenInfra Live: Keynotes.

OpenInfra Live: Keynotes: https://openinfra.dev/live/keynotes
Date and time: November 17-18 (1500-1700 UTC on each day)
Register for free: https://www.eventbrite.com/e/openinfra-live-keynotes-tickets-169507530587

This two day special episode is your best opportunity to meet the newest
players to the OpenInfra space and hear about how open source projects,
such as OpenStack and Kubernetes, are supporting OpenInfra use cases like
hybrid cloud. You will also have the chance to deep dive into the OpenStack
user survey results since the launch of the OpenStack User Survey in 2013.

Here is a preview of the OpenStack user survey results findings:

- Over 300 OpenStack deployments were logged this year, including a
significant number of new clouds: in the last 18 months, over 100 new
OpenStack clouds have been built, growing the total number of cores under
OpenStack management to more than 25,000,000.
- Hybrid cloud scenarios continue to be popular, but over half of User
Survey respondents indicated that the majority of their cloud
infrastructure runs on OpenStack. Upgrades continue to be a challenge that
the upstream community tackles with each additional release, but the User
Survey shows the majority of organizations are running within the last
seven releases.

The full report of the OpenStack user survey will be distributed during the
OpenInfra Live: Keynotes, so make sure you are registered for the event [2].
Can't make it to the event? Register anyway, and we will email you a link
to the recording after the event!

At OpenInfra Live Keynotes, you will also have the opportunity to

- interact with leaders of open source projects like OpenStack and
Kubernetes to hear how the projects are supporting OpenInfra use cases like
hybrid cloud
- gain insight into public cloud economics and the role open source
technologies play
- celebrate as we announce this year's Superuser Awards winner.
This will be the one time everyone will be coming together this year. Come
interact with the global OpenInfra community - Live!

-Kendall Nelson (diablo_rojo)

[1]: https://openinfra.dev/live/keynotes
[2]: https://www.eventbrite.com/e/openinfra-live-keynotes-tickets-169507530587
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From midhunlaln66 at gmail.com  Wed Oct 13 16:22:58 2021
From: midhunlaln66 at gmail.com (Midhunlal Nb)
Date: Wed, 13 Oct 2021 21:52:58 +0530
Subject: Open stack Ansible 23.1.0.dev35 VM launching errors
In-Reply-To: 
References: 
Message-ID: 

Hi Laurent,
I resolved my issues and I can now create new VMs. Thanks for your help.
Now I have some doubts about the different network types: VLAN, VXLAN, and
flat networks. How do these networks help in OpenStack, and what is the use
of each network type? Could you please provide a detailed answer or suggest
a document covering these networks?
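[A short answer, with illustrative commands: a flat network maps instances straight onto an untagged physical network; a vlan network carves the same physical network into isolated 802.1Q segments; a vxlan network is an overlay that tunnels tenant traffic between hypervisors and therefore needs no per-network switch configuration. A sketch of creating one of each as admin -- the physnet name "physnet1" and the segment ID 100 are assumptions that must match the provider mappings of your deployment:]

    # Flat: instances land directly on the untagged provider network.
    openstack network create --provider-network-type flat \
        --provider-physical-network physnet1 flat-net

    # VLAN: traffic is tagged with 802.1Q VLAN ID 100 on physnet1.
    openstack network create --provider-network-type vlan \
        --provider-physical-network physnet1 --provider-segment 100 vlan-net

    # VXLAN: an overlay network; no physical network argument is needed.
    openstack network create --provider-network-type vxlan tenant-net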
On Fri, Oct 8, 2021, 8:26 PM Laurent Dumont wrote:

> These are the nova-compute logs but I think it just catches the error from
> the neutron component. Any logs from neutron-server, ovs-agent,
> libvirt-agent?
>
> Can you share the "openstack network show NETWORK_ID_HERE" of the network
> you are attaching the VM to?
>
> On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb wrote:
>
>> Hi,
>> This is the log I am getting while launching a new vm
>>
>> Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds to destroy the instance on the hypervisor.
>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds to detach 1 volumes for instance.
>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to allocate network(s): nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Traceback (most recent call last):
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]   File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7235, in _create_guest_with_network
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]     post_xml_callback=post_xml_callback)
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]   File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]     next(self.gen)
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]   File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 479, in wait_for_instance_event
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]     actual_event = event.wait()
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]   File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", line 125, in wait
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]     result = hub.switch()
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]   File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 313, in switch
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]     return self.greenlet.switch()
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] eventlet.timeout.Timeout: 300 seconds
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] During handling of the above exception, another exception occurred:
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Traceback (most recent call last):
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]   File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 2397, in _build_and_run_instance
>> 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3]     accel_info=accel_info)
>> 2021-10-08 19:11:21.562 7324 ERROR
nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 4200, in spawn >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> cleanup_instance_disks=created_disks) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7258, in _create_guest_with_network >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> raise exception.VirtualInterfaceCreateException() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> nova.exception.VirtualInterfaceCreateException: Virtual Interface creation >> failed >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 >> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >> network(s), not rescheduling.: nova.exception.BuildAbortException: Build of >> instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate >> the network(s), not rescheduling. >> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 >> INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Successfully unplugged vif >> VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds >> to deallocate network for instance. >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume >> 07041181-318b-4fae-b71e-02ac7b11bca3 >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 >> ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call >> for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to >> the instance being registered to the remote host None.: >> nova.exception.BuildAbortException: Build of instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >> network(s), not rescheduling. 
>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 >> ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Delete attachment failed for attachment >> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be >> found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. >> (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: >> 404: cinderclient.exceptions.NotFound: Volume attachment could not be found >> with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP >> 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 >> WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due >> to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be >> found.: nova.exception.VolumeAttachmentNotFound: Volume attachment >> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. >> Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 >> INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Deleted allocation for instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 >> Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 >> INFO nova.compute.manager [-] [instance: >> 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) >> Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 >> WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 >> >> >> Thanks & Regards >> Midhunlal N B >> >> >> >> On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont >> wrote: >> >>> There are essentially two types of networks, vlan and vxlan, that can be >>> attached to a VM. Ideally, you want to look at the logs on the controllers >>> and the compute node. >>> >>> Openstack-ansible seems to send stuff here >>> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >>> . >>> >>> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >>> wrote: >>> >>>> Hi Laurent, >>>> Thank you very much for your reply.we configured our network as per >>>> official document .Please take a look at below details. >>>> --->Controller node configured with below interfaces >>>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>>> >>>> ---> Compute node >>>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>>> >>>> I don't have much more experience in openstack,I think here we used >>>> vlan network. >>>> >>>> Thanks & Regards >>>> Midhunlal N B >>>> +918921245637 >>>> >>>> >>>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont >>>> wrote: >>>> >>>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>>> agent logs on the compute. The error returned from the VM creation is not >>>>> useful most of the time. >>>>> >>>>> Was this a vxlan or vlan network? >>>>> >>>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>>> wrote: >>>>> >>>>>> Hi team, >>>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>>> --->I logged in to horizon and created a new network and launched a >>>>>> vm but I am getting an error. 
>>>>>> >>>>>> Error: Failed to perform requested operation on instance "hope", the >>>>>> instance has an error status: Please try again later [Error: Build of >>>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>>> the network(s), not rescheduling.]. >>>>>> >>>>>> -->Then I checked log >>>>>> >>>>>> | fault | {'code': 500, 'created': >>>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>>> last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 7235, in _create_guest_with_network\n >>>>>> post_xml_callback=post_xml_callback)\n File >>>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>>> next(self.gen)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>>> File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>>> line 125, in wait\n result = hub.switch()\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>>> line 313, in switch\n return >>>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>> (most recent call last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>>> File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 7258, in _create_guest_with_network\n raise >>>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>>> request_spec, accel_uuids)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2458, in _build_and_run_instance\n >>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>> network(s), not rescheduling.\n'} | >>>>>> >>>>>> Please help me with this error. >>>>>> >>>>>> >>>>>> Thanks & Regards >>>>>> Midhunlal N B >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From abraden at verisign.com  Wed Oct 13 17:25:25 2021
From: abraden at verisign.com (Braden, Albert)
Date: Wed, 13 Oct 2021 17:25:25 +0000
Subject: [Designate] After rebuilding Queens clusters on Train, race
 condition causes Designate record creation to fail
Message-ID: 

After enabling redis, and allowing TCP 6379 and 26379, I see it in
/etc/designate/designate.conf in the designate_producer container:

backend_url = redis://admin:@10.221.176.48:26379?sentinel=kolla&sentinel_fallback=10.221.176.173:26379&sentinel_fallback=10.221.177.38:26379&db=0&socket_timeout=60&retry_on_timeout=yes

And I can get to ports 6379 and 26379 with nc:

(designate-producer)[root at dva3-ctrl3 /]# nc 10.221.176.173 26379
/
-ERR unknown command `/`, with args beginning with:

But I still see the DB error when TF rebuilds a VM:

2021-10-13 15:35:23.941 26 ERROR oslo_messaging.notify.dispatcher designate.exceptions.DuplicateRecord: Duplicate Record

What am I missing?
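[One thing worth ruling out here: the Tooz lock is taken by designate-central and designate-worker as well, so the [coordination] section has to be present in the configuration that every Designate service reads, not only the producer's. A quick check along these lines -- the kolla-style container names are assumptions to adapt:]

    # Confirm Sentinel knows a master named "kolla" (must match the ?sentinel= parameter):
    redis-cli -h 10.221.176.48 -p 26379 sentinel get-master-addr-by-name kolla

    # Confirm every Designate service sees the same backend_url:
    for c in designate_central designate_worker designate_producer; do
        docker exec $c grep -A 1 '\[coordination\]' /etc/designate/designate.conf
    done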
-----Original Message-----
From: Michael Johnson
Sent: Tuesday, October 12, 2021 11:33 AM
To: Braden, Albert
Cc: openstack-discuss at lists.openstack.org
Subject: [EXTERNAL] Re: Re: Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail

Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe.

I don't have a good answer for you on that as it pre-dates my history
with Designate a bit. I suspect it has to do with the removal of the
pool-manager and the restructuring of the controller code.
Maybe someone else on the discuss list has more insight.

Michael

On Tue, Oct 12, 2021 at 5:47 AM Braden, Albert wrote:
>
> Thank you Michael, this is very helpful. Do you have any insight into why we don't experience this in Queens clusters? We aren't running a lock manager there either, and I haven't been able to duplicate the problem there.
>
> -----Original Message-----
> From: Michael Johnson
> Sent: Monday, October 11, 2021 4:24 PM
> To: Braden, Albert
> Cc: openstack-discuss at lists.openstack.org
> Subject: [EXTERNAL] Re: Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail
>
> Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe.
>
> You will need one of the Tooz supported distributed lock managers:
> Consul, Memcached, Redis, or zookeeper.
>
> Michael
>
> On Mon, Oct 11, 2021 at 11:57 AM Braden, Albert wrote:
> >
> > After investigating further, I realized that we're not running redis, and I think that means that redis_connection_string doesn't get set. Does this mean that we must run redis, or is there a workaround?
> >
> > -----Original Message-----
> > From: Braden, Albert
> > Sent: Monday, October 11, 2021 2:48 PM
> > To: 'johnsomor at gmail.com'
> > Cc: 'openstack-discuss at lists.openstack.org'
> > Subject: RE: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail
> >
> > I think so. I see this:
> >
> > ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }}
> >
> > ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes"
> >
> > Did anything change with the distributed lock manager between Queens and Train?
> >
> > -----Original Message-----
> > From: Michael Johnson
> > Sent: Monday, October 11, 2021 1:15 PM
> > To: Braden, Albert
> > Cc: openstack-discuss at lists.openstack.org
> > Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail
> >
> > Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe.
> >
> > Hi Albert,
> >
> > Have you configured your distributed lock manager for Designate?
> >
> > [coordination]
> > backend_url = 
> >
> > Michael
> >
> > On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote:
> > >
> > > Hello everyone. It's great to be back working on OpenStack again. I'm at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails!
> > >
> > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying.
> > >
> > > Before applying the change, we see the DNS record in the recordset:
> > >
> > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra
> > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE |
> > > $
> > >
> > > and we can pull it from the DNS server on the controllers:
> > >
> > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done
> > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89
> > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89
> > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89
> > >
> > > After applying the change, we don't see it:
> > >
> > > $ openstack recordset list dva3.vrsn.com.
--all |grep openstack-terra > > > > > > $ > > > > > > > > > > > > We see this in the logs: > > > > > > > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > > > > > > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > > > > > > > > > Is anyone else seeing this on Train? The same set of actions doesn't cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From zyklenfrei at gmail.com Wed Oct 13 17:43:29 2021 From: zyklenfrei at gmail.com (Manuel Holtgrewe) Date: Wed, 13 Oct 2021 19:43:29 +0200 Subject: [kayobe][kolla][ironic] kayobe overcloud provision fails because ironic compute hosts use their inspection DHCP pool IPs Message-ID: Dear list, I am experimenting with kayobe to deploy a test installation of OpenStack wallaby. You can find my configuration here: https://github.com/openstack/kayobe-config/compare/stable/wallaby...holtgrewe:my-wallaby?expand=1 I am following the kayobe documentation and have successfully set up a controller and a seed node. I am at the point where I have the nodes configured and they show up in bifrost baremetal node list. I can control them via IPMI/iDRAC/RedFish and boot them into the IPA image, and the nodes can be inspected and actually go into the "manageable" status. kayobe is capable of using the inspection results and assigning the root device, so far, so good. I don't know whether my network configuration is good. I want to pin the IPs of stack-1 to stack-4, and the names already resolve to the correct IP addresses throughout my network. Below are some more details. In summary, I have trouble because `kayobe overcloud provision` makes my 4 overcloud bare metal hosts boot into IPA with DHCP enabled, and they get the same IPs assigned that were given to them earlier in inspection. This means that the overcloud provision command cannot SSH into the nodes because it knows them by the wrong IPs. I must be really missing something here. What is it? Below are more details. Here is what kayobe pulled from the bifrost inspection (I believe).
# cat etc/kayobe/inventory/overcloud [controllers] stack-1 ipmi_address=172.16.66.41 bmc_type=idrac stack-2 ipmi_address=172.16.66.42 bmc_type=idrac stack-3 ipmi_address=172.16.66.43 bmc_type=idrac stack-4 ipmi_address=172.16.66.44 bmc_type=idrac The IPs are also fixed here # etc/kayobe/network-allocation.yml compute_net_ips: stack-1: 172.16.32.11 stack-2: 172.16.32.12 stack-3: 172.16.32.13 stack-4: 172.16.32.14 stack-seed: 172.16.32.6 However, I thought I had to provide allocation ranges for DHCP for getting introspection to work. Thus, I have the following # etc/kayobe/networks.yml compute_net_cidr: 172.16.32.0/19 compute_net_gateway: 172.16.32.1 compute_net_vip_address: 172.16.32.2 compute_net_allocation_pool_start: 172.16.32.101 compute_net_allocation_pool_end: 172.16.32.200 compute_net_inspection_allocation_pool_start: 172.16.32.201 compute_net_inspection_allocation_pool_end: 172.16.32.250 This leads to the following dnsmasq leases in the bifrost host. # cat /var/lib/dnsmasq/dnsmasq.leases 1634187260 REDACTED 172.16.32.215 * REDACTED 1634187271 REDACTED 172.16.32.243 * REDACTED 1634187257 REDACTED 172.16.32.207 * REDACTED 1634187258 REDACTED 172.16.32.218 * REDACTED What am I missing? Best wishes, Manuel From johnsomor at gmail.com Wed Oct 13 18:07:28 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 13 Oct 2021 11:07:28 -0700 Subject: [tc] [ptl] 2021 User Survey Project Specific Feedback Responses In-Reply-To: <1634139712.69166937@apps.rackspace.com> References: <1634139712.69166937@apps.rackspace.com> Message-ID: Helena, Thank you, this is super helpful and perfect timing for the PTG. Michael On Wed, Oct 13, 2021 at 9:16 AM helena at openstack.org wrote: > > Hi everyone, > > > > Ahead of the PTG next week, I wanted to share the responses we received from the TC, PTL, and SIG submitted questions in the OpenStack User Survey. > > > > If there are duplicate responses, it is because of multiple deployments submitted by the same person. > > > > If your team would like to change your question or responses for the 2022 User Survey or you have any questions about the 2021 responses, please email community at openinfra.dev. > > > > Cheers, > > Helena > > __________________________________ > Marketing & Community Associate > The Open Infrastructure Foundation > Helena at openinfra.dev From franck.vedel at univ-grenoble-alpes.fr Wed Oct 13 19:01:30 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Wed, 13 Oct 2021 21:01:30 +0200 Subject: Re: Problème with image from snapshot In-Reply-To: <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> Message-ID: Hi Dominic, and thanks a lot for your help. > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Yes yes, I did that, sysprep... generalize > Regarding OpenStack, could you tell us what glance and cinder drivers you use? I'm not sure... for cinder: LVM on an iSCSI bay > Have you done other volume to image before? No, and it's a good idea to test with a cirros instance. I will try tomorrow. > Have you verified that the image finishes creating before trying to create a VM from it? Yes > I'm not sure that snapshotting before creating an image is necessary.
It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I just tried with an instance off... same problem, same error message (Block Device Mapping is Invalid) > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thanks a lot !! Really !! Franck VEDEL Dép. Réseaux Informatiques & Télécoms IUT1 - Univ GRENOBLE Alpes 0476824462 Stages, Alternance, Emploi. http://www.rtgrenoble.fr > On 13 Oct 2021, at 17:16, wrote: > > Franck; > > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? > > Regarding OpenStack, could you tell us what glance and cinder drivers you use? > > Have you done other volume to image before? > > Have you verified that the image finishes creating before trying to create a VM from it? > > I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. > > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] > Sent: Wednesday, October 13, 2021 12:58 AM > To: openstack-discuss > Subject: Problème with image from snapshot > > Hello and first sorry for my English... thanks Google. > > Something is wrong with what I want to do: > I use Wallaby, it works very well (apart from VPNaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). > > Here is what I want to do and which does not work as I want: > - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. > I wanted to take a snapshot of this instance and then create a new image from this snapshot, and then have users use this new image. > I create the snapshot, I place the "--public" parameter on the new image. > I'm trying to create a new instance from this snapshot with the admin account: it works. > I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: > > Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) > > Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? > > Thanks if you have ideas for helping me > > > Franck VEDEL > -------------- next part -------------- An HTML attachment was scrubbed... URL:
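(A hedged sketch of the volume-to-image path discussed in this thread - the commands are illustrative only, every name is invented, and Cinder normally requires the source volume to be detached (status Available) before the upload:

openstack image create --volume win10-vol win10-base   # cinder upload-to-image
openstack image set --public win10-base                # visible to every project
openstack server create --image win10-base --flavor m1.medium --network demo-net win10-test

Because the resulting image carries its own copy of the volume data, rather than a block-device mapping that points at an admin-owned Cinder snapshot, booting it from another project should not trigger the "failed to get snapshot" error seen here.)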
From DHilsbos at performair.com Wed Oct 13 20:06:48 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Wed, 13 Oct 2021 20:06:48 +0000 Subject: RE: Problème with image from snapshot In-Reply-To: References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> Message-ID: <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> Franck; What version of OpenStack are you running? Are you the cluster administrator, or a user of the cluster? I'm running Victoria, all tips below assume that major version. Can you create an image backed volume outside of the instance creation process? Do you have access to the systems running the cluster, can you review logs on the controller computers? You're looking for the logs from the glance and cinder services. Glance's logs should be somewhere like /var/log/glance/. I only have api.log for glance. Cinder's should be somewhere like /var/log/cinder/. I have api.log, backup.log, scheduler.log, and volume.log. You should also check your glance and cinder configurations. They will be at /etc/glance/glance-api.conf and /etc/cinder/cinder.conf. In the glance configuration, you're looking for the enabled_backends line in the [DEFAULT] section. If I remember correctly, its values have the form :. The type is the interesting part. Cinder is a little more difficult. You're still going to be looking for an enabled_backends line, in the [DEFAULT] section, but its value is just a name (enabled_backends = ). You need to locate a configuration section which matches the name ([]). You'll then be looking for a volume_driver line. Based on your response, I suspect this will be: volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver. I believe the logs will be critical to diagnosing this issue. I suspect you'll find the error in the cinder volume.log, though it might also be in scheduler.log, or even in the glance.log. Thank you, Dominic L. Hilsbos, MBA Vice President - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] Sent: Wednesday, October 13, 2021 12:02 PM To: Dominic Hilsbos Cc: openstack-discuss at lists.openstack.org Subject: Re: Problème with image from snapshot Hi Dominic, and thanks a lot for your help. I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Yes yes, I did that, sysprep... generalize Regarding OpenStack, could you tell us what glance and cinder drivers you use? I'm not sure... for cinder: LVM on an iSCSI bay Have you done other volume to image before? No, and it's a good idea to test with a cirros instance. I will try tomorrow. Have you verified that the image finishes creating before trying to create a VM from it? Yes I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I just tried with an instance off... same problem, same error message (Block Device Mapping is Invalid) I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thanks a lot !! Really !!
Franck VEDEL Dép. Réseaux Informatiques & Télécoms IUT1 - Univ GRENOBLE Alpes 0476824462 Stages, Alternance, Emploi. http://www.rtgrenoble.fr On 13 Oct 2021, at 17:16, > > wrote: Franck; I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Regarding OpenStack, could you tell us what glance and cinder drivers you use? Have you done other volume to image before? Have you verified that the image finishes creating before trying to create a VM from it? I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thank you, Dominic L. Hilsbos, MBA Vice President - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] Sent: Wednesday, October 13, 2021 12:58 AM To: openstack-discuss Subject: Problème with image from snapshot Hello and first sorry for my English... thanks Google. Something is wrong with what I want to do: I use Wallaby, it works very well (apart from VPNaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). Here is what I want to do and which does not work as I want: - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. I wanted to take a snapshot of this instance and then create a new image from this snapshot, and then have users use this new image. I create the snapshot, I place the "--public" parameter on the new image. I'm trying to create a new instance from this snapshot with the admin account: it works. I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? Thanks if you have ideas for helping me Franck VEDEL -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 13 22:09:24 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 13 Oct 2021 17:09:24 -0500 Subject: [tc] [ptl] 2021 User Survey Project Specific Feedback Responses In-Reply-To: <1634139712.69166937@apps.rackspace.com> References: <1634139712.69166937@apps.rackspace.com> Message-ID: <17c7bb3f929.12b32d93a990032.2271857199416779661@ghanshyammann.com> ---- On Wed, 13 Oct 2021 10:41:52 -0500 wrote ---- > Hi everyone, > > Ahead of the PTG next week, I wanted to share the responses we received from the TC, PTL, and SIG submitted questions in the OpenStack User Survey. > > If there are duplicate responses, it is because of multiple deployments submitted by the same person. > > If your team would like to change your question or responses for the 2022 User Survey or you have any questions about the 2021 responses, please email community at openinfra.dev.
Thanks Helena for sharing it. I have added it to the TC PTG etherpad to discuss it at the PTG and plan the next step on the TC questions feedback. -gmann > > Cheers, > Helena > From gmann at ghanshyammann.com Wed Oct 13 22:10:57 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 13 Oct 2021 17:10:57 -0500 Subject: [all][tc] Technical Committee next weekly meeting on Oct 14th at 1500 UTC In-Reply-To: <17c6ffde3e7.116bfff55951568.8663051282986901805@ghanshyammann.com> References: <17c6ffde3e7.116bfff55951568.8663051282986901805@ghanshyammann.com> Message-ID: <17c7bb563eb.e90af475990047.376130421142220775@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting, scheduled at 1500 UTC in the #openstack-tc IRC channel. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Project Health checks framework ** https://etherpad.opendev.org/p/health_check ** https://review.opendev.org/c/openstack/governance/+/810037 * Stable team process change ** https://review.opendev.org/c/openstack/governance/+/810721 * Technical Writing (doc) SIG needs a chair and more maintainers ** Current Chair (only maintainer in this SIG) Stephen Finucane will not continue it in the next cycle (Yoga) ** http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025161.html * Place to maintain the externally hosted ELK, E-R, O-H services ** https://etherpad.opendev.org/p/elk-service-maintenance-plan * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 11 Oct 2021 10:34:42 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for Oct 14th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, Oct 13th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From gouthampravi at gmail.com Wed Oct 13 22:56:51 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 13 Oct 2021 15:56:51 -0700 Subject: [tc] [ptl] 2021 User Survey Project Specific Feedback Responses In-Reply-To: <1634139712.69166937@apps.rackspace.com> References: <1634139712.69166937@apps.rackspace.com> Message-ID: On Wed, Oct 13, 2021 at 9:19 AM helena at openstack.org wrote: > Hi everyone, > > > > Ahead of the PTG next week, I wanted to share the responses we received > from the TC, PTL, and SIG submitted questions in the OpenStack User Survey. > ++ Thank you Helena! :) > > > If there are duplicate responses, it is because of multiple deployments > submitted by the same person. > > > If your team would like to change your question or responses for the 2022 > User Survey or you have any questions about the 2021 responses, please > email community at openinfra.dev. > > > > Cheers, > > Helena > > __________________________________ > Marketing & Community Associate > The Open Infrastructure Foundation > Helena at openinfra.dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From midhunlaln66 at gmail.com Thu Oct 14 03:01:54 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Thu, 14 Oct 2021 08:31:54 +0530 Subject: Networks in openstack Message-ID: Hi Team, I have some doubts about the different network types: Vlan, Vxlan and flat networks. How do these networks help in OpenStack? What is the use of each network? Could you please provide a detailed answer or suggest any document regarding these networks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Thu Oct 14 06:28:50 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Thu, 14 Oct 2021 08:28:50 +0200 Subject: Re: Problème with image from snapshot In-Reply-To: <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> Message-ID: <79DEE6DE-47E1-4618-8B26-D4CC1C3EC0F2@univ-grenoble-alpes.fr> Yes, I'm the cluster admin. My cluster is based on CentOS Stream / Kolla-ansible / Wallaby. You're right, I need to check all the logs. (/var/log/kolla/cinder for example for me) Or check in containers... But before that, I'm not sure what I am trying to do is possible, and since I am not sure of my explanations (in English), it is difficult to make myself fully understood about the problem. Thank you very much for your help Franck VEDEL > On 13 Oct 2021, at 22:06, DHilsbos at performair.com wrote: > > Franck; > > What version of OpenStack are you running? Are you the cluster administrator, or a user of the cluster? > > I'm running Victoria, all tips below assume that major version. > > Can you create an image backed volume outside of the instance creation process? > > Do you have access to the systems running the cluster, can you review logs on the controller computers? You're looking for the logs from the glance and cinder services. Glance's logs should be somewhere like /var/log/glance/. I only have api.log for glance. Cinder's should be somewhere like /var/log/cinder/. I have api.log, backup.log, scheduler.log, and volume.log. > > You should also check your glance and cinder configurations. They will be at /etc/glance/glance-api.conf and /etc/cinder/cinder.conf. > In the glance configuration, you're looking for the enabled_backends line in the [DEFAULT] section. If I remember correctly, its values have the form :. The type is the interesting part. > Cinder is a little more difficult. You're still going to be looking for an enabled_backends line, in the [DEFAULT] section, but its value is just a name (enabled_backends = ). You need to locate a configuration section which matches the name ([]). You'll then be looking for a volume_driver line. Based on your response, I suspect this will be: volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver. > > I believe the logs will be critical to diagnosing this issue. I suspect you'll find the error in the cinder volume.log, though it might also be in scheduler.log, or even in the glance.log. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc.
> DHilsbos at PerformAir.com > www.PerformAir.com > > > From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] > Sent: Wednesday, October 13, 2021 12:02 PM > To: Dominic Hilsbos > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Problème with image from snapshot > > Hi Dominic, and thanks a lot for your help. > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? > Yes yes, I did that, sysprep... generalize > > Regarding OpenStack, could you tell us what glance and cinder drivers you use? > I'm not sure... for cinder: LVM on an iSCSI bay > > Have you done other volume to image before? > No, and it's a good idea to test with a cirros instance. I will try tomorrow. > > Have you verified that the image finishes creating before trying to create a VM from it? > Yes > > I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. > I just tried with an instance off... same problem, same error message (Block Device Mapping is Invalid) > > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. > Thanks a lot !! Really !! > > Franck VEDEL > Dép. Réseaux Informatiques & Télécoms > IUT1 - Univ GRENOBLE Alpes > 0476824462 > Stages, Alternance, Emploi. > http://www.rtgrenoble.fr > > > On 13 Oct 2021, at 17:16, > > wrote: > > Franck; > > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? > > Regarding OpenStack, could you tell us what glance and cinder drivers you use? > > Have you done other volume to image before? > > Have you verified that the image finishes creating before trying to create a VM from it? > > I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. > > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] > Sent: Wednesday, October 13, 2021 12:58 AM > To: openstack-discuss > Subject: Problème with image from snapshot > > Hello and first sorry for my English... thanks Google. > > Something is wrong with what I want to do: > I use Wallaby, it works very well (apart from VPNaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). > > Here is what I want to do and which does not work as I want: > - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. > I wanted to take a snapshot of this instance and then create a new image from this snapshot, and then have users use this new image.
> I create the snapshot, I place the "--public" parameter on the new image. > I'm trying to create a new instance from this snapshot with the admin account: it works. > I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: > > Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) > > Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? > > Thanks if you have ideas for helping me > > > Franck VEDEL > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Thu Oct 14 07:14:29 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Thu, 14 Oct 2021 12:44:29 +0530 Subject: [tripleo] Unable to deploy Overcloud Nodes In-Reply-To: References: Message-ID: On Wed, Oct 13, 2021 at 9:28 PM Anirudh Gupta wrote: > Hi Team, > > As per the link below, while executing the command to deploy the overcloud > nodes > > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud > > I am executing the command > > - openstack overcloud deploy --templates > > On running this command, I am getting the following error > > > *ERROR: (pymysql.err.OperationalError) (1045, "Access denied for user > 'heat'@'10.255.255.4' (using password: YES)")(Background on this error at: > http://sqlalche.me/e/e3q8 )* > Sounds like you already have an existing heat mysql database and heat user from a previous deployment, and probably installed heat in the undercloud. You need to upgrade the undercloud, which will remove the installed heat, drop the heat database and remove the heat user. > > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured > while running the command: subprocess.CalledProcessError: Command '['sudo', > 'podman', 'run', '--rm', '--user', 'heat', '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', > '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', > 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', > 'db_sync']' returned non-zero exit status 1.
> 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent > call last): > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, > self).run(parsed_args) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in > run > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, > self).run(parsed_args) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = > self.take_action(parsed_args) or 0 > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 1277, in take_action > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > self.setup_ephemeral_heat(parsed_args) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 767, in setup_ephemeral_heat > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > utils.launch_heat(self.heat_launcher, restore_db=restore_db) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 2706, in > launch_heat > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > launcher.heat_db_sync(restore_db) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/heat_launcher.py", line > 530, in heat_db_sync > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > subprocess.check_call(cmd) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib64/python3.6/subprocess.py", line 311, in check_call > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud raise > CalledProcessError(retcode, cmd) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > subprocess.CalledProcessError: Command '['sudo', 'podman', 'run', '--rm', > '--user', 'heat', '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', > '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', > 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', > 'db_sync']' returned non-zero exit status 1. 
> 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2021-10-13 05:46:28.400 183680 ERROR openstack [-] Command '['sudo', > 'podman', 'run', '--rm', '--user', 'heat', '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', > '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', > 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', > 'db_sync']' returned non-zero exit status 1.: > subprocess.CalledProcessError: Command '['sudo', 'podman', 'run', '--rm', > '--user', 'heat', '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', > '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', > 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', > 'db_sync']' returned non-zero exit status 1. > 2021-10-13 05:46:28.401 183680 INFO osc_lib.shell [-] END return value: 1 > > > Can someone please help in resolving this issue. > Are there any parameters or templates that need to be passed in order to > make it work? > > Regards > Anirudh Gupta > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL:
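(A hedged sketch of the upgrade step Rabi describes - run on the undercloud host as the stack user; this is an illustration, not a command taken from the thread, and the exact behaviour depends on your undercloud.conf and release:

openstack undercloud upgrade
# Per Rabi's explanation, the upgrade removes the permanently installed
# heat, drops the old heat database and its 'heat' DB user, so the next
# 'openstack overcloud deploy' can run its ephemeral heat-manage db_sync
# without the "Access denied for user 'heat'" failure.

)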
From skaplons at redhat.com Thu Oct 14 07:37:33 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 14 Oct 2021 09:37:33 +0200 Subject: Networks in openstack In-Reply-To: References: Message-ID: <2745324.atdPhlSkOF@p1> Hi, On Thursday, 14 October 2021 05:01:54 CEST Midhunlal Nb wrote: > Hi Team, > > I have some doubts about the different network types > > Vlan, Vxlan and flat networks. > > How do these networks help in OpenStack? What is the use of each network? > > Could you please provide a detailed answer or suggest any document > regarding these networks. Generally, vlan and flat networks are "provider" network types, while vxlan is a tunnel network type and doesn't require any configuration from your provider/DC. Provider networks can also be "external" networks, so they can provide access to the "internet" for your cloud. Tunnel networks are isolated and can't have direct access to the external world. See https://assafmuller.com/2018/07/23/tenant-provider-and-external-neutron-networks/ where Assaf explains the different types of networks pretty well. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From claus.r at mnet-mail.de Thu Oct 14 08:39:11 2021 From: claus.r at mnet-mail.de (claus.r) Date: Thu, 14 Oct 2021 10:39:11 +0200 Subject: Networks in openstack In-Reply-To: <2745324.atdPhlSkOF@p1> References: <2745324.atdPhlSkOF@p1> Message-ID: <7466d808-a265-2aa6-6d0c-a96cdc9f8c9e@mnet-mail.de> Is it possible to have vxlan also for an external network? On 14.10.21 at 09:37, Slawek Kaplonski wrote: > Hi, > > On Thursday, 14 October 2021 05:01:54 CEST Midhunlal Nb wrote: >> Hi Team, >> >> I have some doubts about the different network types >> >> Vlan, Vxlan and flat networks. >> >> How do these networks help in OpenStack? What is the use of each network? >> >> Could you please provide a detailed answer or suggest any document >> regarding these networks. > Generally, vlan and flat networks are "provider" network types, while vxlan is > a tunnel network type and doesn't require any configuration from your provider/DC. > Provider networks can also be "external" networks, so they can provide access to the > "internet" for your cloud. Tunnel networks are isolated and can't have direct > access to the external world. > > See https://assafmuller.com/2018/07/23/tenant-provider-and-external-neutron-networks/ where Assaf explains the different types of networks pretty well. > From mark at stackhpc.com Thu Oct 14 09:59:22 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 14 Oct 2021 10:59:22 +0100 Subject: [kayobe][kolla][ironic] kayobe overcloud provision fails because ironic compute hosts use their inspection DHCP pool IPs In-Reply-To: References: Message-ID: On Wed, 13 Oct 2021 at 18:49, Manuel Holtgrewe wrote: > > Dear list, > > I am experimenting with kayobe to deploy a test installation of > OpenStack wallaby. You can find my configuration here: > > https://github.com/openstack/kayobe-config/compare/stable/wallaby...holtgrewe:my-wallaby?expand=1 > > I am following the kayobe documentation and have successfully set up a > controller and a seed node. > > I am at the point where I have the nodes configured and they show up > in bifrost baremetal node list. I can control them via > IPMI/iDRAC/RedFish and boot them into the IPA image, and the nodes can > be inspected and actually go into the "manageable" status. kayobe is > capable of using the inspection results and assigning the root device, > so far, so good. > > I don't know whether my network configuration is good. I want to pin > the IPs of stack-1 to stack-4, and the names already resolve to the correct IP > addresses throughout my network. > > Below are some more details. In summary, I have trouble because > `kayobe overcloud provision` makes my 4 overcloud bare metal hosts boot > into IPA with DHCP enabled, and they get the same IPs assigned that were > given to them earlier in inspection. This means that the > overcloud provision command cannot SSH into the nodes because it knows > them by the wrong IPs. > > I must be really missing something here. What is it? Hi Manuel. Bifrost will assign IPs from its IP address pool to the machines during inspection and provisioning. IPA will use these addresses. Once provisioning is complete, the machines should boot up into a CentOS image, using the IPs you have allocated. These are statically configured via a configdrive, which is installed during provisioning. If the node stays running IPA, then something is going wrong with provisioning. Mark > > Below are more details. > > Here is what kayobe pulled from the bifrost inspection (I believe). > > # cat etc/kayobe/inventory/overcloud > [controllers] > stack-1 ipmi_address=172.16.66.41 bmc_type=idrac > stack-2 ipmi_address=172.16.66.42 bmc_type=idrac > stack-3 ipmi_address=172.16.66.43 bmc_type=idrac > stack-4 ipmi_address=172.16.66.44 bmc_type=idrac > > The IPs are also fixed here > > # etc/kayobe/network-allocation.yml > compute_net_ips: > stack-1: 172.16.32.11 > stack-2: 172.16.32.12 > stack-3: 172.16.32.13 > stack-4: 172.16.32.14 > stack-seed: 172.16.32.6 > > However, I thought I had to provide allocation ranges for DHCP for > getting introspection to work.
> > Thus, I have the following > > # etc/kayobe/networks.yml > compute_net_cidr: 172.16.32.0/19 > compute_net_gateway: 172.16.32.1 > compute_net_vip_address: 172.16.32.2 > compute_net_allocation_pool_start: 172.16.32.101 > compute_net_allocation_pool_end: 172.16.32.200 > compute_net_inspection_allocation_pool_start: 172.16.32.201 > compute_net_inspection_allocation_pool_end: 172.16.32.250 > > This leads to the following dnsmasq leases in the bifrost host. > > # cat /var/lib/dnsmasq/dnsmasq.leases > 1634187260 REDACTED 172.16.32.215 * REDACTED > 1634187271 REDACTED 172.16.32.243 * REDACTED > 1634187257 REDACTED 172.16.32.207 * REDACTED > 1634187258 REDACTED 172.16.32.218 * REDACTED > > What am I missing? > > Best wishes, > Manuel > From skaplons at redhat.com Thu Oct 14 10:52:00 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 14 Oct 2021 12:52:00 +0200 Subject: Networks in openstack In-Reply-To: <7466d808-a265-2aa6-6d0c-a96cdc9f8c9e@mnet-mail.de> References: <2745324.atdPhlSkOF@p1> <7466d808-a265-2aa6-6d0c-a96cdc9f8c9e@mnet-mail.de> Message-ID: <2698745.TLkxdtWsSY@p1> Hi, On Thursday, 14 October 2021 10:39:11 CEST claus.r wrote: > Is it possible to have vxlan also for an external network? From an API PoV you can set router:external = True for any type of network, so yes, it's doable. > > On 14.10.21 at 09:37, Slawek Kaplonski wrote: > > Hi, > > > > On Thursday, 14 October 2021 05:01:54 CEST Midhunlal Nb wrote: > >> Hi Team, > >> > >> I have some doubts about the different network types > >> > >> Vlan, Vxlan and flat networks. > >> > >> How do these networks help in OpenStack? What is the use of each network? > >> > >> Could you please provide a detailed answer or suggest any document > >> regarding these networks. > > > > Generally, vlan and flat networks are "provider" network types, while vxlan is > > a tunnel network type and doesn't require any configuration from your provider/DC. > > Provider networks can also be "external" networks, so they can provide access to > > the "internet" for your cloud. Tunnel networks are isolated and can't have > > direct access to the external world. > > > > See > > https://assafmuller.com/2018/07/23/tenant-provider-and-external-neutron-networks/ > > where Assaf explains the different types of networks pretty well. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL:
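(To make Slawek's point concrete, a hedged sketch - the names and subnet range below are placeholders, and both the provider attributes and the external flag require admin credentials:

openstack network create --external --provider-network-type vxlan demo-ext-net
openstack subnet create --network demo-ext-net --subnet-range 203.0.113.0/24 demo-ext-subnet

router:external = True is purely an API-level flag, so whether floating IP traffic on such a vxlan network can actually leave the cloud still depends on how the tunnel overlay is wired to the outside world.)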
From elod.illes at est.tech Thu Oct 14 13:42:28 2021 From: elod.illes at est.tech (Előd Illés) Date: Thu, 14 Oct 2021 15:42:28 +0200 Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope In-Reply-To: <25b21881-bd0b-f763-9bb5-a66340108455@nemebean.com> References: <3055264.zr5fvq113q@p1> <25b21881-bd0b-f763-9bb5-a66340108455@nemebean.com> Message-ID: <3ff9fe9f-e75f-62a9-1da5-8cdb140427a8@est.tech> What Ben wrote is correct. One comment for the topic: oslo projects have Pike open (and broken) as well, so together with stable/rocky and stable/queens, the stable/pike branches should also be marked as End of Life if no maintainers step up for these branches. Thanks, Előd On 2021. 10. 04. 22:59, Ben Nemec wrote: > > > On 10/4/21 2:00 PM, Slawek Kaplonski wrote: >> Hi, >> >> On Monday, 4 October 2021 20:46:29 CEST feilong wrote: >>> Hi Herve, >>> >>> Please correct me, does that mean we have to also EOL stable/queens and >>> stable/rocky for most of the other projects technically? Or it >>> should be >>> OK? Thanks. >> >> I don't think we have to. I think it's not that common that we are >> using new >> versions of oslo libs in those stable branches so IMHO if all works >> fine for >> some project and it has maintainers, it still can be in EM phase. >> Or is my understanding wrong here? > > The Oslo libs released for those versions will continue to work, so > you're right that it wouldn't be necessary to EOL all of the consumers > of Oslo. > > The danger would be if a critical bug were found in one of those old > releases and a fix needed to be released. However, at this point the > likelihood of finding such a serious bug seems pretty low, and in some > cases it may be possible to use a newer Oslo release with an older > service. > >> >>> >>> On 5/10/21 5:09 am, Herve Beraud wrote: >>>> Hi, >>>> >>>> On our last meeting of the oslo team we discussed the problem with >>>> broken stable >>>> branches (rocky and older) in oslo's projects [1]. >>>> >>>> Indeed, almost all these branches are broken. Előd Illés kindly >>>> generated a list of periodic-stable errors on Oslo's stable >>>> branches [2]. >>>> >>>> Given the lack of active maintainers on Oslo and given the current >>>> status of the CI in those branches, I propose to make them End Of >>>> Life. >>>> >>>> I will wait until the end of the month for anyone who would like to maybe >>>> step up >>>> as maintainer of those branches and who would at least try to fix CI >>>> of them. >>>> >>>> If no one volunteers for that, I'll EOL those branches for all >>>> the projects under the oslo umbrella. >>>> >>>> Let us know your thoughts. >>>> >>>> Thank you for your attention. >>>> >>>> [1] >>>> https://meetings.opendev.org/meetings/oslo/2021/oslo.2021-10-04-15.00.log.txt >>>> [2] >>>> http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023939.html >> >> > From elod.illes at est.tech Thu Oct 14 14:00:10 2021 From: elod.illes at est.tech (Előd Illés) Date: Thu, 14 Oct 2021 16:00:10 +0200 Subject: [stable][requirements][zuul] unpinned setuptools dependency on stable In-Reply-To: References: <6J4UZQ.VOBD0LVDTPUX1@est.tech> <827e99c6-99b2-54c8-a627-5153e3b84e6b@est.tech> Message-ID: <0861d9e7-0dc3-683f-ad65-120b156d03a0@est.tech> Hi, First, sorry for the slow response. I think pinning setuptools in requirements for stable branches is also a good idea (up till wallaby). I can accept that. Another thing is that the openstack projects that I've checked don't have issues in their CI regarding the unpinned setuptools. Mostly I saw/see the problem in unit test, static code check and similar tox targets. Anyway, if this issue is there for devstack for others then I think we can cap setuptools, too, in the requirements repository, if it is OK for everyone. My only concern is to cap it from the newest relevant stable branch where we need it. If I'm not mistaken most of the projects have fixed their related issue in Xena, so I guess Wallaby should be the first branch to cap setuptools. Thanks, Előd On 2021. 10. 04. 20:16, Neil Jerram wrote: > I can now confirm that > https://review.opendev.org/c/openstack/requirements/+/810859 > fixes > my CI use case.
(By temporarily using a fork of the requirements repo > that includes that change.) > > (Fix detail if needed here: > https://github.com/projectcalico/networking-calico/pull/64/commits/cbed6282405957f7d60b6e0790c91fb852afe84c > ) > > Best wishes. > Neil > > > On Mon, Oct 4, 2021 at 6:28 PM Neil Jerram > wrote: > > Is anyone helping to progress this? I just checked that > stable/ussuri devstack is still broken. > > Best wishes, > Neil > > > On Tue, Sep 28, 2021 at 9:20 AM Neil Jerram > wrote: > > But I don't think that solution works for devstack, does it? > Is there a way to pin setuptools in a stable/ussuri devstack > run, except by changing the stable branch of the requirements > project? > > > On Mon, Sep 27, 2021 at 7:50 PM Előd Illés > wrote: > > Hi again, > > as I see there is no objection yet about using gibi's > solution [1] (as I > already summarized the situation in my previous mail [2]) > for a fix for > similar cases, so with a general stable core hat on, I > *suggest* > everyone to use that solution to pin the setuptools in tox > for every > failing case (so as to avoid similar future errors as > well). > > [1] https://review.opendev.org/810461 > > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2021-September/025059.html > > > Előd > > > On 2021. 09. 27. 14:47, Balazs Gibizer wrote: > > > > > > On Fri, Sep 24 2021 at 10:21:33 PM +0200, Thomas Goirand > > > wrote: > >> Hi Gibi! > >> > >> Thanks for bringing this up. > >> > >> As a distro package maintainer, here's my view. > >> > >> On 9/22/21 2:11 PM, Balazs Gibizer wrote: > >>> Option 1: Bump the major version of the decorator > dependency on > >>> stable. > >> > >> Decorator 4.0.11 is even in Debian Stretch (currently > oldoldstable), for > >> which I don't even maintain OpenStack anymore (that's > OpenStack > >> Newton...). So I don't see how switching to decorator > 4.0.0 is a > >> problem, and I don't understand how OpenStack could be > using 3.4.0 which > >> is in Jessie (ie: 6 years old Debian release). > >> > >> PyPi says Decorator 3.4.0 is from 2012: > >> https://pypi.org/project/decorator/#history > >> > >> Do you have your release numbers correct? If so, then > switching to > >> Decorator 4.4.2 (available in Debian Bullseye (shipped > with Victoria) > >> and Ubuntu >=Focal) looks reasonable to me... > Sticking with 3.4.0 > >> feels a bit crazy (and I wasn't aware of it). > > > > Thanks for the info. So from the Debian perspective it is OK > to bump the > > decorator version on stable. As others noted in this > thread it seems > > to be more than just decorator that broke. :/ > > > >> > >>> Option 2: Pin the setuptools version during tox > installation > >> > >> Please don't do this for the master branch, we need > OpenStack to stay > >> current with setuptools (yeah, even if this means > breaking changes...). > > > > I've no intention to pin it on master. Master needs to > work with the > > latest and greatest. Also on master it is easier to fix > / replace the > > dependencies that become broken with new setuptools. > > > >> > >> For already released OpenStack: I don't mind much if > this is done (I > >> could backport fixes if something breaks). > > > > ack > > > >> > >>> Option 3: turn off lower-constraints testing > >> > >> I already expressed myself about this: this is > dangerous as distros rely > >> on it for setting lower bounds as low as possible (which is always > >> preferred from a distro point of view). > >> > >>> Option 4: utilize pyproject.toml[6] to specify > build-time requirements > >> > >> I don't know about pyproject.toml. > >> > >> Just my 2 cents, hoping it's useful, > > > > Thanks! > > > > Cheers, > > gibi > > > >> Cheers, > >> > >> Thomas Goirand (zigo) > >> > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
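(For illustration, a hedged sketch of the tox-side pin discussed above. The exact bound is an assumption on my part - based on virtualenv 20.8 being, as far as I know, the first release to bundle setuptools >= 58, which dropped use_2to3 - so verify against gibi's patch [1] before relying on it:

[tox]
# Assumption: virtualenv < 20.8 still bundles setuptools < 58, so every
# env tox creates keeps a setuptools that accepts use_2to3 packages.
requires =
  virtualenv<20.8

)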
> >> > >>> ?Option 4: utilize pyproject.toml[6] to specify > build-time requirements > >> > >> I don't know about pyproject.toml. > >> > >> Just my 2 cents, hoping it's useful, > > > > Thanks! > > > > Cheers, > > gibi > > > >> Cheers, > >> > >> Thomas Goirand (zigo) > >> > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.hoffmann at cloudandheat.com Thu Oct 14 15:26:40 2021 From: stefan.hoffmann at cloudandheat.com (Stefan Hoffmann) Date: Thu, 14 Oct 2021 17:26:40 +0200 Subject: [cinder][backup] backup big volumes leads to oom kill of cinder-backup Message-ID: <69539885434df49c08936c91190044caec00876e.camel@cloudandheat.com> Hi cinder team, we have the issue, that doing backups of big volumes (5TB) fails and cinder-backup get oom killed. Looks like cinder-backup is allocating memory but didn't release it correctly. Badly we are still using cinder queens. Is this a known issue and fixed in newer releases or should I create a bug report? We found a similar bug [1] with backup restore, that got fixed. I guess something like this is also needed for backup create. Thanks for you help Stefan [1] https://bugs.launchpad.net/cinder/+bug/1865011 -- Stefan Hoffmann DevOps-Engineer Cloud&Heat Technologies GmbH K?nigsbr?cker Stra?e 96 (Halle 15) | 01099 Dresden +49 351 479 367 36 stefan.hoffmann at cloudandheat.com | www.cloudandheat.com Die gr?ne Cloud f?r KI und ML. Think Green: Mach Deine Anwendung gr?ner. https://thinkgreen.cloudandheat.com/ Commercial Register: District Court Dresden Register Number: HRB 30549 VAT ID No.: DE281093504 Managing Director: Nicolas R?hrs Authorized signatory: Dr. Marius Feldmann Authorized signatory: Kristina R?benkamp -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 878 bytes Desc: This is a digitally signed message part URL: From skaplons at redhat.com Thu Oct 14 16:10:03 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 14 Oct 2021 18:10:03 +0200 Subject: [neutron] CI meeting Tuesday 19.10 Message-ID: <7286472.G0QQBjFxQf@p1> Hi, As we have PTG next week, let's cancel CI meeting. See You all at the PTG sessions and on the CI meeting on Tuesday 26th of October. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From DHilsbos at performair.com Thu Oct 14 16:16:09 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Thu, 14 Oct 2021 16:16:09 +0000 Subject: =?utf-8?B?UkU6IFByb2Jsw6htZSB3aXRoIGltYWdlIGZyb20gc25hcHNob3Q=?= In-Reply-To: <79DEE6DE-47E1-4618-8B26-D4CC1C3EC0F2@univ-grenoble-alpes.fr> References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> <79DEE6DE-47E1-4618-8B26-D4CC1C3EC0F2@univ-grenoble-alpes.fr> Message-ID: <0670B960225633449A24709C291A525251CB51C9@COM03.performair.local> Franck; I don't see an option to upload a volume from a snapshot in the Victoria dashboard (Horizon), so I'm going to assume that can't / shouldn't be done. Uploading a volume to an image should be possible, assuming the volume is Available (un-attached). Thank you, Dominic L. Hilsbos, MBA Vice President ? Information Technology Perform Air International Inc. 
DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] Sent: Wednesday, October 13, 2021 11:29 PM To: Dominic Hilsbos Cc: openstack-discuss at lists.openstack.org Subject: Re: Problème with image from snapshot Yes, I'm the cluster admin. My cluster is based on CentOS Stream / Kolla-ansible / Wallaby. You're right, I need to check all the logs. (/var/log/kolla/cinder for example for me) Or check in containers... But before that, I'm not sure what I am trying to do is possible, and since I am not sure of my explanations (in English), it is difficult to make myself fully understood about the problem. Thank you very much for your help Franck VEDEL On 13 Oct 2021, at 22:06, DHilsbos at performair.com wrote: Franck; What version of OpenStack are you running? Are you the cluster administrator, or a user of the cluster? I'm running Victoria, all tips below assume that major version. Can you create an image backed volume outside of the instance creation process? Do you have access to the systems running the cluster, can you review logs on the controller computers? You're looking for the logs from the glance and cinder services. Glance's logs should be somewhere like /var/log/glance/. I only have api.log for glance. Cinder's should be somewhere like /var/log/cinder/. I have api.log, backup.log, scheduler.log, and volume.log. You should also check your glance and cinder configurations. They will be at /etc/glance/glance-api.conf and /etc/cinder/cinder.conf. In the glance configuration, you're looking for the enabled_backends line in the [DEFAULT] section. If I remember correctly, its values have the form :. The type is the interesting part. Cinder is a little more difficult. You're still going to be looking for an enabled_backends line, in the [DEFAULT] section, but its value is just a name (enabled_backends = ). You need to locate a configuration section which matches the name ([]). You'll then be looking for a volume_driver line. Based on your response, I suspect this will be: volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver. I believe the logs will be critical to diagnosing this issue. I suspect you'll find the error in the cinder volume.log, though it might also be in scheduler.log, or even in the glance.log. Thank you, Dominic L. Hilsbos, MBA Vice President - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] Sent: Wednesday, October 13, 2021 12:02 PM To: Dominic Hilsbos Cc: openstack-discuss at lists.openstack.org Subject: Re: Problème with image from snapshot Hi Dominic, and thanks a lot for your help. I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Yes yes, I did that, sysprep... generalize Regarding OpenStack, could you tell us what glance and cinder drivers you use? I'm not sure... for cinder: LVM on an iSCSI bay Have you done other volume to image before? No, and it's a good idea to test with a cirros instance. I will try tomorrow. Have you verified that the image finishes creating before trying to create a VM from it? Yes I'm not sure that snapshotting before creating an image is necessary.
It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I just tried with an instance off... same problem, same error message (Block Device Mapping is Invalid) I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thanks a lot !! Really !! Franck VEDEL Dép. Réseaux Informatiques & Télécoms IUT1 - Univ GRENOBLE Alpes 0476824462 Stages, Alternance, Emploi. http://www.rtgrenoble.fr On 13 Oct 2021, at 17:16, wrote: Franck; I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Regarding OpenStack, could you tell us what glance and cinder drivers you use? Have you done other volume to image before? Have you verified that the image finishes creating before trying to create a VM from it? I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thank you, Dominic L. Hilsbos, MBA Vice President - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] Sent: Wednesday, October 13, 2021 12:58 AM To: openstack-discuss Subject: Problème with image from snapshot Hello and first sorry for my English... thanks Google. Something is wrong with what I want to do: I use Wallaby, it works very well (apart from VPNaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). Here is what I want to do and which does not work as I want: - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. I wanted to take a snapshot of this instance and then create a new image from this snapshot, and then have users use this new image. I create the snapshot, I place the "--public" parameter on the new image. I'm trying to create a new instance from this snapshot with the admin account: it works. I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? Thanks if you have ideas for helping me Franck VEDEL From katonalala at gmail.com Thu Oct 14 16:20:23 2021 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 14 Oct 2021 18:20:23 +0200 Subject: [neutron] Team meeting Message-ID: Hi Neutrinos, As we have the PTG next week, let's cancel the team meeting. Cheers Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed...
From katonalala at gmail.com Thu Oct 14 16:29:43 2021
From: katonalala at gmail.com (Lajos Katona)
Date: Thu, 14 Oct 2021 18:29:43 +0200
Subject: [neutron] Drivers meeting agenda - 15.10.2021
Message-ID:

Hi Neutron Drivers!

As we had no quorum last week to decide on https://bugs.launchpad.net/neutron/+bug/1946251, tomorrow we can check it again and vote.

The logs of the meeting from last week: https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-10-08-14.14.log.html
(Sorry, it is 4 days long, as I forgot to end the meeting...)

As we have PTG next week, let's cancel the drivers meeting for that Friday, and I will be on PTO the week after (29 October), so I can't chair that one.

See you online tomorrow.
Lajos Katona (lajoskatona)

From katonalala at gmail.com Thu Oct 14 16:34:34 2021
From: katonalala at gmail.com (Lajos Katona)
Date: Thu, 14 Oct 2021 18:34:34 +0200
Subject: [neutron][PTG] Schedule for yoga PTG
Message-ID:

Hi,

I made a first schedule for the PTG next week; please check it: https://etherpad.opendev.org/p/neutron-yoga-ptg
If you have anything to change or add, please raise your voice :-)
There are still some parts which can change, as we will have multiple cross-project sessions.

See you next week.
Lajos Katona (lajoskatona)

From gustavofaganello.santos at windriver.com Thu Oct 14 18:37:43 2021
From: gustavofaganello.santos at windriver.com (Gustavo Faganello Santos)
Date: Thu, 14 Oct 2021 15:37:43 -0300
Subject: [nova][dev] Reattaching mediated devices to instance coming back from suspended state
Message-ID:

Hello, everyone!

I'm working on a solution for Nova to reattach previously used mediated devices (vGPU instances, in my case) to VMs coming back from suspension, which seems to have been left on hold in the past [1] because of an old libvirt limitation, and I'm having a bit of a hard time doing so, since I'm not too familiar with the repo.

I have tried creating a function that does the opposite of the mdev detach function, but the get_all_devices method seems to return an empty list when looking for mdevs at the moment of resuming the VM. Looking at the instance's XML file, I noticed that the mdev property remains while the VM is suspended, but it disappears AFTER the whole resume function is executed. I'm failing to understand why the mdev list returns empty, even though the mdev property exists in the instance's XML, and also why the mdev is removed from the XML after the resume function is executed.

With that in mind, does anyone know if there's been any attempt to solve this issue since it was left on hold? If not, is there anything I should know while I attempt to do so?

Thanks in advance.
Gustavo

[1] https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L8007

From melwittt at gmail.com Thu Oct 14 19:02:13 2021
From: melwittt at gmail.com (melanie witt)
Date: Thu, 14 Oct 2021 12:02:13 -0700
Subject: [nova][dev] Reattaching mediated devices to instance coming back from suspended state
In-Reply-To:
References:
Message-ID: <2940f202-d632-c8f1-a0ed-d4473a9fc9c6@gmail.com>

On Thu Oct 14 2021 11:37:43 GMT-0700 (Pacific Daylight Time), Gustavo Faganello Santos wrote:
> Hello, everyone!
>
> I'm working on a solution for Nova to reattach previously used mediated devices (vGPU instances, in my case) to VMs coming back from suspension, which seems to have been left on hold in the past [1] because of an old libvirt limitation, and I'm having a bit of a hard time doing so, since I'm not too familiar with the repo.
>
> I have tried creating a function that does the opposite of the mdev detach function, but the get_all_devices method seems to return an empty list when looking for mdevs at the moment of resuming the VM. Looking at the instance's XML file, I noticed that the mdev property remains while the VM is suspended, but it disappears AFTER the whole resume function is executed. I'm failing to understand why the mdev list returns empty, even though the mdev property exists in the instance's XML, and also why the mdev is removed from the XML after the resume function is executed.
>
> With that in mind, does anyone know if there's been any attempt to solve this issue since it was left on hold? If not, is there anything I should know while I attempt to do so?

I'm not sure whether this will be helpful, but there is similar (or adjacent?) work currently in progress to handle the case of recreating mediated devices after a compute host reboot [2][3]. The launchpad bug contains some info on workarounds for this case, and the proposed patch pulls allocation information from the placement service to recreate the mdevs.

-melanie

[2] https://bugs.launchpad.net/nova/+bug/1900800
[3] https://review.opendev.org/c/openstack/nova/+/810220

> Thanks in advance.
> Gustavo
>
> [1] https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L8007

From franck.vedel at univ-grenoble-alpes.fr Thu Oct 14 19:43:56 2021
From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL)
Date: Thu, 14 Oct 2021 21:43:56 +0200
Subject: Re: Problème with image from snapshot
In-Reply-To: <0670B960225633449A24709C291A525251CB51C9@COM03.performair.local>
References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> <79DEE6DE-47E1-4618-8B26-D4CC1C3EC0F2@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB51C9@COM03.performair.local>
Message-ID: <241AA8C1-47B4-450C-9DD0-B49420A4B75F@univ-grenoble-alpes.fr>

Dominic,
maybe what I want to do is not possible. I'll check my logs... Thank you very much for your time and your help.

Franck

> On Oct 14, 2021, at 18:16, DHilsbos at performair.com wrote:
>
> Franck;
>
> I don't see an option to upload a volume from a snapshot in the Victoria dashboard (Horizon), so I'm going to assume that can't / shouldn't be done.
>
> Uploading a volume to an image should be possible, assuming the volume is Available (un-attached).
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President - Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com
>
> From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr]
> Sent: Wednesday, October 13, 2021 11:29 PM
> To: Dominic Hilsbos
> Cc: openstack-discuss at lists.openstack.org
> Subject: Re: Problème with image from snapshot
>
> Yes, I'm the cluster admin. My cluster is based on CentOS Stream / Kolla-ansible / Wallaby.
> You're right, I need to check all the logs (/var/log/kolla/cinder, for example, in my case), or check in the containers...
>
> But before that, I am not sure what I am trying to do is possible, and since I am not sure of my explanations (in English), it is difficult to make myself fully understood about the problem.
>
> Thank you very much for your help
>
> Franck VEDEL
>
> On Oct 13, 2021, at 22:06, DHilsbos at performair.com wrote:
>
> Franck;
>
> What version of OpenStack are you running? Are you the cluster administrator, or a user of the cluster?
>
> I'm running Victoria; all tips below assume that major version.
>
> Can you create an image-backed volume outside of the instance creation process?
>
> Do you have access to the systems running the cluster; can you review logs on the controller computers? You're looking for the logs from the glance and cinder services. Glance's logs should be somewhere like /var/log/glance/; I only have api.log for glance. Cinder's should be somewhere like /var/log/cinder/; I have api.log, backup.log, scheduler.log, and volume.log.
>
> You should also check your glance and cinder configurations. They will be at /etc/glance/glance-api.conf and /etc/cinder/cinder.conf.
> In the glance configuration, you're looking for the enabled_backends line in the [DEFAULT] section. If I remember correctly, its value has the form name:type. The type is the interesting part.
> Cinder is a little more difficult. You're still going to be looking for an enabled_backends line in the [DEFAULT] section, but its value is just a name (enabled_backends = name). You need to locate a configuration section which matches that name ([name]). You'll then be looking for a volume_driver line. Based on your response, I suspect this will be: volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver.
>
> I believe the logs will be critical to diagnosing this issue. I suspect you'll find the error in the cinder volume.log, though it might also be in scheduler.log, or even in the glance log.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President - Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com
>
> From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr]
> Sent: Wednesday, October 13, 2021 12:02 PM
> To: Dominic Hilsbos
> Cc: openstack-discuss at lists.openstack.org
> Subject: Re: Problème with image from snapshot
>
> Hi Dominic, and thanks a lot for your help.
> I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance before making an image from it, yes?
> Yes yes, I did that: sysprep, generalize.
>
> Regarding OpenStack, could you tell us what glance and cinder drivers you use?
> I'm not sure... for cinder: LVM on an iSCSI bay.
>
> Have you done other volume to image before?
> No, and it's a good idea to test with a cirros instance. I will try tomorrow.
>
> Have you verified that the image finishes creating before trying to create a VM from it?
> Yes
>
> I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image; thus, depending on your storage technology, the image might just be a snapshot.
> I just tried with an instance off... same problem, same error message (Block Device Mapping is Invalid).
>
> I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running.
> Thanks a lot!! Really!!
>
> Franck VEDEL
> Dép.
Réseaux Informatiques & Télécoms
> IUT1 - Univ GRENOBLE Alpes
> 0476824462
> Stages, Alternance, Emploi.
> http://www.rtgrenoble.fr
>
> On Oct 13, 2021, at 17:16, DHilsbos at performair.com wrote:
>
> Franck;
>
> I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance before making an image from it, yes?
>
> Regarding OpenStack, could you tell us what glance and cinder drivers you use?
>
> Have you done other volume to image before?
>
> Have you verified that the image finishes creating before trying to create a VM from it?
>
> I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image; thus, depending on your storage technology, the image might just be a snapshot.
>
> I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President - Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com
>
> From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr]
> Sent: Wednesday, October 13, 2021 12:58 AM
> To: openstack-discuss
> Subject: Problème with image from snapshot
>
> Hello, and first, sorry for my English... thanks, Google.
>
> Something is wrong with what I want to do:
> I use Wallaby; it works very well (apart from VPNaaS: I wasted too much time this summer trying to make it work, without success, and the bug does not seem to be fixed).
>
> Here is what I want to do, and it does not work as I want:
> - With an admin account, I launch a Win10 instance from the image I created. The instance is working, but it takes about 10 minutes to get Win10 up and running.
> I wanted to take a snapshot of this instance and then create a new image from this snapshot, and have users use this new image.
> I create the snapshot, and I place the "--public" parameter on the new image.
> I try to create a new instance from this snapshot with the admin account: it works.
> I create a new user, who has his own project and sees all the images. I try to create an instance with this new image and I get the message:
>
> Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb)
>
> Is it a rights problem? Is it possible to do as I do? Otherwise, how should we do it?
>
> Thanks if you have ideas for helping me
>
> Franck VEDEL

From gouthampravi at gmail.com Thu Oct 14 20:30:12 2021
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Thu, 14 Oct 2021 13:30:12 -0700
Subject: [manila][ptg] No IRC meetings on 21st and 28th Oct 2021
Message-ID:

Hi Zorillas,

Since we'll be at the PTG [1], we'll skip the IRC meeting [2] on the 21st of Oct; and since a number of us may be taking some time off the week after, we'll skip that weekly occurrence (28th Oct) as well. Please feel free to grab attention to any issue via this mailing list, or hop over to #openstack-manila on OFTC. Our next weekly IRC meeting will be on 4th Nov 2021.

Thanks, and see you at the PTG!
Goutham

[1] https://etherpad.opendev.org/p/yoga-ptg-manila-planning
[2] https://wiki.openstack.org/wiki/Manila/Meetings
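The glance and cinder settings that Dominic walks through in the snapshot thread above look roughly like the sketch below; the backend names are placeholders, and the exact sections should be verified against your own deployment:

# /etc/glance/glance-api.conf (backend name "mystore" is a placeholder)
[DEFAULT]
enabled_backends = mystore:rbd

# /etc/cinder/cinder.conf ("lvm-1" is a placeholder backend name)
[DEFAULT]
enabled_backends = lvm-1

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi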
From melwittt at gmail.com Thu Oct 14 20:58:44 2021
From: melwittt at gmail.com (melanie witt)
Date: Thu, 14 Oct 2021 13:58:44 -0700
Subject: Re: Problème with image from snapshot
In-Reply-To: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr>
References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr>
Message-ID:

On Wed Oct 13 2021 00:57:52 GMT-0700 (Pacific Daylight Time), Franck VEDEL wrote:
> Hello, and first, sorry for my English... thanks, Google.
>
> Something is wrong with what I want to do:
> I use Wallaby; it works very well (apart from VPNaaS: I wasted too much time this summer trying to make it work, without success, and the bug does not seem to be fixed).
>
> Here is what I want to do, and it does not work as I want:
> - With an admin account, I launch a Win10 instance from the image I created. The instance is working, but it takes about 10 minutes to get Win10 up and running.
> I wanted to take a snapshot of this instance and then create a new image from this snapshot, and have users use this new image.
> I create the snapshot, and I place the "--public" parameter on the new image.
> I try to create a new instance from this snapshot with the admin account: it works.
> I create a new user, who has his own project and sees all the images. I try to create an instance with this new image and I get the message:
>
> Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb)
>
> Is it a rights problem? Is it possible to do as I do? Otherwise, how should we do it?

According to this cinder doc [1], it looks like what you're trying to do is valid: create an image backed by a volume and boot instances from that image.

The problem I see, where the "failed to get snapshot" error is raised in nova for the non-admin user, looks to be a problem with policy access for the GET /snapshots/{snapshot_id} cinder API. Although the image is public, the volume behind it was created by some project, and by default the API will allow only the admin project or the project that created/owns the volume [2]:

volume:get_snapshot
    Default: rule:admin_or_owner
    Operations: GET /snapshots/{snapshot_id}

This is why it works when you boot an instance using the admin account. Currently, you would need to change the above rule in the cinder policy.yaml in order to allow a different project than the owner to GET the snapshot.

It's possible this is a bug in nova and that we should be using an elevated admin request context to call GET /snapshots/{snapshot_id} if the snapshot is for a volume-backed image.

Hopefully I haven't completely misunderstood what is going on here; if so, please ignore me. :)

HTH,
-melanie

[1] https://docs.openstack.org/cinder/wallaby/admin/blockstorage-volume-backed-image.html
[2] https://docs.openstack.org/cinder/wallaby/configuration/block-storage/policy.html#cinder

> Thanks if you have ideas for helping me
>
> Franck VEDEL

From Daniel.Pereira at windriver.com Thu Oct 14 21:27:50 2021
From: Daniel.Pereira at windriver.com (Pereira, Daniel Oliveira)
Date: Thu, 14 Oct 2021 21:27:50 +0000
Subject: [dev][cinder] Consultation about new cinder-backup features
In-Reply-To: <20211004102331.e3otr2k2mjzglg42@localhost>
References: <20210906132813.xsaxbsyyvf4ey4vm@localhost> <20211004102331.e3otr2k2mjzglg42@localhost>
Message-ID:

Hi all,

my team is evaluating the cinder-backup multi backends configuration spec.
It seems this spec fulfills our needs as it is, so we are considering working on its implementation, but we cannot at the moment commit to delivering this feature.

About the improvement on the NFS backup driver to allow backups on private NFS servers, we decided that we won't try to upstream this feature, based on the feedback that we received. We also won't bring these topics for discussion at the Cinder PTG meeting.

I would like to thank Gorka Eguileor, Brian Rosmaita and Arkady Kanevsky for their comments in this thread.

Regards,
Daniel Pereira.

From: Gorka Eguileor
Sent: Monday, October 4, 2021 7:23 AM
To: Pereira, Daniel Oliveira
Cc: openstack-discuss at lists.openstack.org
Subject: Re: [dev][cinder] Consultation about new cinder-backup features

[Please note: This e-mail is from an EXTERNAL e-mail address]

On 30/09, Daniel de Oliveira Pereira wrote:
> On 06/09/2021 10:28, Gorka Eguileor wrote:
> > [Please note: This e-mail is from an EXTERNAL e-mail address]
> >
> > On 27/08, Daniel de Oliveira Pereira wrote:
> >> Hello everyone,
> >>
> >> We have prototyped some new features on Cinder for our clients, and we think that they are nice features and good candidates to be part of upstream Cinder, so we would like to get feedback from the OpenStack community about these features and whether you would be willing to accept them in upstream OpenStack.
> >
> > Hi Daniel,
> >
> > Thank you very much for your willingness to give back!!!
> >
> >> Our team implemented the following features for the cinder-backup service:
> >>
> >>    1. A multi-backend backup driver, which allows OpenStack users to choose, via API/CLI/Horizon, which backup driver (Ceph or NFS, in our prototype) will be used during a backup operation to create a new volume backup.
> >
> > This is a feature that has been discussed before, and e0ne already did some of the prerequisites for it.
> >
> >>    2. An improved NFS backup driver, which allows OpenStack users to back up their volumes to private NFS servers, providing the NFS hostpath at runtime via API/CLI/Horizon, while creating the volume backup.
> >
> > What about the username and password?
>
> Hi Gorka,
>
> thanks for your feedback.
>
> Our prototype doesn't support authentication using username/password, since this is a feature for which NFS doesn't provide built-in support.
>
> > Can backups be restored from a remote location as well?
>
> Yes, if the location is the one where the backup was originally saved (same NFS hostpath), as the backup location is stored in the Cinder backups table during the backup creation. It doesn't support restoring the backup from an arbitrary remote NFS server.
>
> > This sounds like a very cool feature, but I'm not too comfortable with having it in Cinder.
> >
> > The idea is that Cinder provides an abstraction and doesn't let users know about implementation details.
> >
> > With that feature as it is, a user could request a backup to an off-site location, which could result in congestion on one of the outbound connections.
>
> I think this is a very good point, which we weren't taking into consideration in our prototype.
>
> > I can only think of this being acceptable for admin users, and in that case I think it would be best to use the multi-backup destination feature instead.
> >
> > After all, how many times do we have to back up to a different location? Maybe I'm missing a use case.
> Our clients have privacy and security concerns with the same NFS server being shared by OpenStack tenants to store volume backups, so they required cinder-backup to be able to back up volumes to private NFS servers.
>
> > If the community considers this a desired feature, I would encourage adding it with a policy that disables it by default.
>
> >> Considering that cinder was configured to use the multi-backend backup driver, this is how it works:
> >>
> >>    During a volume backup operation, the user provides a "location" parameter to indicate which backend will be used, and the backup hostpath, if applicable (for the NFS driver), to create the volume backup. For instance:
> >>
> >>    - Creating a backup using the Ceph backend:
> >>    $ openstack volume backup create --name --location ceph
> >>
> >>    - Creating a backup using the improved NFS backend:
> >>    $ openstack volume backup create --name --location nfs://my.nfs.server:/backups
> >>
> >>    If the user chooses the Ceph backend, the Ceph driver will be used to create the backup. If the user chooses the NFS backend, the improved NFS driver, previously mentioned, will be used to create the backup.
> >>
> >>    The backup location, if provided, is stored in the Cinder database, and can be seen by fetching the backup details:
> >>    $ openstack volume backup show
> >>
> >> Briefly, this is how the features were implemented:
> >>
> >>    - The Cinder API was updated to add an optional location parameter to the "create backup" method. Horizon, and the OpenStack and Cinder CLIs, were updated accordingly to handle the new parameter.
> >>    - The Cinder backup controller was updated to handle the backup location parameter, and a validator for the parameter was implemented using the oslo config library.
> >>    - The Cinder backup object model was updated to add a nullable location property, so that the backup location could be stored in the cinder database.
> >>    - A new backup driver base class, which extends BackupDriver and accepts a backup context object, was implemented to handle the backup configuration provided at runtime by the user. This new backup base class requires that the concrete drivers implement a method to validate the backup context (similar to BackupDriver.check_for_setup_error).
> >>    - The 2 new backup drivers, previously mentioned, were implemented using this new backup base class.
> >>    - In the BackupManager class, the "service" attribute, which in upstream OpenStack holds the backup driver class name, was re-implemented as a factory function that accepts a backup context object and returns an instance of a backup driver, according to the backup driver configured in the cinder.conf file and the backup context provided at runtime by the user.
> >>    - All the backup operations continue working as usual.
> >
> > When this feature was discussed upstream, we liked the idea of implementing this like we do multi-backends for the volume service, adding backup-types.
>
> I found this approved spec [1] (which, I believe, is a product of the work done by e0ne that you mentioned before), but I couldn't find any work items in progress related to it.
> Do you know the current status of this spec? Is it ready to be implemented, or is there more work to be done first? If we
If we > decide to work on its implementation, would be required to review, and > possibly update, the spec for the current development cycle? > > [1] > https://specs.openstack.org/openstack/cinder-specs/specs/victoria/backup-backends-configuration.html > Hi, I think all that would need to be done regarding the spec is to submit a patch to move it to the current release directory and fix the formatting issue of the tables from the "Data model impact" section. You'll be able to leverage Ivan's work [1] when implementing the multi-backup feature. Cheers, Gorka. [1]: https://review.opendev.org/c/openstack/cinder/+/630305 > > > > > In latest code backup creation operations have been modified to go > > through the scheduler, so that's a piece that is already implemented. > > > > > >> Could you please let us know your thoughts about these features and if > >> you would be open to adding them to upstream Cinder? If yes, we would be > >> willing to submit the specs and work on the upstream implementation, if > >> they are approved. > >> > >> Regards, > >> Daniel Pereira > >> > > > > I believe you will have the full community's support on the first idea > > (though probably not on the proposed implementation). > > > > I'm not so sure on the second one, iti will most likely depend on the > > use cases.? Many times the reasons why features are dismissed upstream > > is because there are no clear use cases that justify the addition of the > > code. > > > > Looking forward to continuing this conversation at the PTG, IRC, in a > > spec, or through here. > > > > Cheers, > > Gorka. > > > From gmann at ghanshyammann.com Thu Oct 14 22:52:03 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 14 Oct 2021 17:52:03 -0500 Subject: [tc] No TC weekly meeting next week due to meeting in PTG Message-ID: <17c81015efc.f692731b1056175.888775454375504790@ghanshyammann.com> Hello Everyone, As we will be meeting in PTG next week, we are cancelling the TC's next week (21st Oct) IRC meeting. -gmann From franck.vedel at univ-grenoble-alpes.fr Fri Oct 15 06:45:14 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Fri, 15 Oct 2021 08:45:14 +0200 Subject: =?utf-8?Q?Re=3A_Probl=C3=A8me_with_image_from_snapshot?= In-Reply-To: References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> Message-ID: Melanie, On the contrary, I believe that you have fully understood my problem, and your explanations are very clear. Thank you so much. I looked at the documentation, it is well explained, I understand what to do. I'm using kolla-ansible to deploy Wallaby, it's not going to be easy, because changing the default permissions for cinder doesn't look easy. Thanks again, you've saved me a lot of time, and it's going to help me with what I want to do with my students. Franck > Le 14 oct. 2021 ? 22:58, melanie witt a ?crit : > > According to this cinder doc [1], it looks like what you're trying to do is valid, to create an image backed by a volume and boot instances from that image. > > The problem I see where the "failed to get snapshot" error is raised in nova for the non-admin user, it looks to be a problem with policy access for the GET /snapshots/{snapshot_id} cinder API. 
> Although the image is public, the volume behind it was created by some project, and by default the API will allow only the admin project or the project that created/owns the volume [2]:
>
> volume:get_snapshot
>     Default: rule:admin_or_owner
>     Operations: GET /snapshots/{snapshot_id}
>
> This is why it works when you boot an instance using the admin account. Currently, you would need to change the above rule in the cinder policy.yaml in order to allow a different project than the owner to GET the snapshot.
>
> It's possible this is a bug in nova and that we should be using an elevated admin request context to call GET /snapshots/{snapshot_id} if the snapshot is for a volume-backed image.
>
> Hopefully I haven't completely misunderstood what is going on here; if so, please ignore me. :)

From skaplons at redhat.com Fri Oct 15 07:32:24 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 15 Oct 2021 09:32:24 +0200
Subject: Re: [neutron] Drivers meeting agenda - 15.10.2021
In-Reply-To:
References:
Message-ID: <9488604.VV5PYv0bhD@p1>

Hi,

On Thursday, 14 October 2021, 18:29:43 CEST, Lajos Katona wrote:
> Hi Neutron Drivers!
> As we had no quorum last week to decide on https://bugs.launchpad.net/neutron/+bug/1946251, tomorrow we can check it again and vote.
> The logs of the meeting from last week: https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-10-08-14.14.log.html
>
> (Sorry, it is 4 days long, as I forgot to end the meeting...)
>
> As we have PTG next week, let's cancel the drivers meeting for that Friday, and I will be on PTO the week after (29 October), so I can't chair that one.
>
> See you online tomorrow.
> Lajos Katona (lajoskatona)

I will not be able to attend today's meeting, but in general I'm +1 for this RFE as an idea. We can probably discuss the exact way to do it in the API during the spec's review.

--
Slawek Kaplonski
Principal Software Engineer
Red Hat

From ricolin at ricolky.com Fri Oct 15 08:14:06 2021
From: ricolin at ricolky.com (Rico Lin)
Date: Fri, 15 Oct 2021 16:14:06 +0800
Subject: [Multi-arch SIG][PTG] PTG plan
Message-ID:

Dear all,

Next week, the Multi-arch SIG will have its PTG sessions at:
10/19 Tuesday 07 - 08 UTC
10/19 Tuesday 14 - 15 UTC

The SIG has had low activity for months, and we need more volunteers to join. Please sign up for the PTG if you're interested. Also feel free to suggest topics on the PTG etherpad: https://etherpad.opendev.org/p/oct2021-ptg-multi-arch

Rico Lin
OIF Individual Board of directors, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack
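Tying together the policy discussion in the image-from-snapshot thread above: a minimal sketch of the cinder override melanie describes, assuming default configuration paths, would be a policy.yaml entry such as:

# /etc/cinder/policy.yaml -- an empty rule string means "always allowed";
# opening snapshot reads to every project has privacy implications, so
# scope this to your actual needs.
"volume:get_snapshot": ""

For a kolla-ansible deployment like Franck's, such an override would typically be placed under the node_custom_config directory (by default /etc/kolla/config/cinder/policy.yaml) and rolled out with a reconfigure; treat that exact path as an assumption to verify against your kolla-ansible version.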
From yasufum.o at gmail.com Fri Oct 15 09:17:15 2021
From: yasufum.o at gmail.com (Yasufumi Ogawa)
Date: Fri, 15 Oct 2021 18:17:15 +0900
Subject: [tacker][ptg] Yoga vPTG planning
In-Reply-To: <57d5eecf-c468-5bc5-169e-adcd502a0896@gmail.com>
References: <57d5eecf-c468-5bc5-169e-adcd502a0896@gmail.com>
Message-ID: <4f9ffaa7-423e-5c75-ba86-8ed6864b2d07@gmail.com>

Hi tacker team,

As a reminder, we'll have the PTG next week. Please check the etherpad for the details [1]. You can find the link for each meeting room at [2].

[1] https://etherpad.opendev.org/p/tacker-yoga-ptg
[2] https://ptg.opendev.org/ptg.html

Thanks,
Yasufumi

On 2021/07/19 1:41, yasufum wrote:
> Hi everyone,
>
> The next vPTG will be held on 18-22 October, as I shared at the previous IRC meeting [1]. Registration has already opened [2]. We've decided to have the next vPTG sessions in the same timeslots as before, 6-8 UTC, since most of us join from India and the APAC region.
>
> I've prepared an etherpad for the next vPTG [3]. If you have any suggestions, please add your topic on it.
>
> [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023540.html
> [2] https://openinfra-ptg.eventbrite.com/
> [3] https://etherpad.opendev.org/p/tacker-yoga-ptg
>
> Thanks,
> Yasufumi

From mnasiadka at gmail.com Fri Oct 15 09:40:59 2021
From: mnasiadka at gmail.com (Michał Nasiadka)
Date: Fri, 15 Oct 2021 11:40:59 +0200
Subject: [kolla] Cancelling next week's meeting (20 Oct 2021)
Message-ID:

Hello koalas,

Since next week is the PTG, I'm cancelling the meeting on 20th Oct 2021.

Best regards,
Michal

From amonster369 at gmail.com Fri Oct 15 10:51:49 2021
From: amonster369 at gmail.com (A Monster)
Date: Fri, 15 Oct 2021 11:51:49 +0100
Subject: How to use hosts with no storage disks
Message-ID:

In OpenStack, is it possible to create compute nodes with no hard drives and use PXE to boot the host's system, and therefore launch instances without the local drive that is needed to boot the VM's image? If not, what is the minimum storage that hosts need in order to get a fully functional system?

From mrunge at matthias-runge.de Fri Oct 15 12:18:24 2021
From: mrunge at matthias-runge.de (Matthias Runge)
Date: Fri, 15 Oct 2021 14:18:24 +0200
Subject: [telemetry] Yoga vPTG
Message-ID: <8FA00180-BE4D-4A80-A5B3-916B41FC996B@matthias-runge.de>

Hello there,

Next week, we'll have the PTG. There will be a telemetry session on Tuesday from 4pm to 5pm UTC. The planning etherpad is at https://etherpad.opendev.org/p/telemetry-yoga-ptg

Matthias

From fungi at yuggoth.org Fri Oct 15 12:52:27 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 15 Oct 2021 12:52:27 +0000
Subject: [ops] How to use hosts with no storage disks
In-Reply-To:
References:
Message-ID: <20211015125226.jkp6b53nzzypabnc@yuggoth.org>

On 2021-10-15 11:51:49 +0100 (+0100), A Monster wrote:
> In OpenStack, is it possible to create compute nodes with no hard drives and use PXE to boot the host's system [...]

This question is outside the scope of OpenStack itself, unless you're using another OpenStack deployment to manage the physical servers (for example, TripleO has an "undercloud" which uses Ironic to manage the servers which then comprise the "overcloud" presented to users). OpenStack's services start on already booted servers, so you can in theory use any mechanism you like, including PXE boot, to boot those physical servers.
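For the diskless-boot question above, PXE booting the operating system of a compute host is handled entirely outside OpenStack; a minimal sketch of a dnsmasq-based PXE server (every address and path here is an illustrative assumption, not taken from the thread) could look like:

# /etc/dnsmasq.d/pxe.conf
enable-tftp
tftp-root=/srv/tftp                   # holds pxelinux.0 plus kernel/initrd
dhcp-range=192.0.2.10,192.0.2.100,12h
dhcp-boot=pxelinux.0                  # bootloader handed to PXE clients

The hosts booted this way would then run nova-compute like any other node; instance disks can live on shared storage (for example, Ceph-backed ephemeral disks or boot-from-volume) so that no local data disk is required.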
I understand OpenStack Ironic is a great solution to this problem though, and it can be set up entirely stand-alone with its Bifrost installer.
--
Jeremy Stanley

From rosmaita.fossdev at gmail.com Fri Oct 15 13:57:50 2021
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Fri, 15 Oct 2021 09:57:50 -0400
Subject: [cinder][PTG] yoga PTG schedule
Message-ID: <277c3ab8-24b1-f931-4d74-d70bac8297be@gmail.com>

The Cinder project team will be meeting on Tuesday 19 October through Friday 22 October from 1300-1700 UTC. For the most part, the scheduling is flexible, and we'll discuss topics roughly in the order given on the etherpad: https://etherpad.opendev.org/p/yoga-ptg-cinder

We'll try to keep the "Currently at the PTG" page updated, but you know how that goes: https://ptg.opendev.org/ptg.html

For your scheduling convenience, here's an outline of the schedule in a spreadsheet: https://ethercalc.openstack.org/wfno9g46fa7p
- topics in red: cross-project, so the times for those are accurate
- topics in blue: participatory activities for the team
- topics in green: happy hour

We'll be meeting in BlueJeans (except for the happy hour, which will be in meetpad). The sessions (except for the happy hour) will be recorded. Connection info is on the etherpad: https://etherpad.opendev.org/p/yoga-ptg-cinder

All OpenStack community members are welcome (especially for the happy hour).

Looking forward to seeing everyone next week!
brian

From amy at demarco.com Fri Oct 15 14:04:28 2021
From: amy at demarco.com (Amy Marrich)
Date: Fri, 15 Oct 2021 07:04:28 -0700
Subject: [Diversity] [PTG] Diversity and Inclusion Session at PTG
Message-ID:

Hi Everyone,

The Diversity and Inclusion WG will be meeting during the PTG on Monday, October 18th at 14:00 UTC in the Diablo room. We welcome all Open Infrastructure projects to attend, and we would be happy to assist you with any questions you might have in regards to the Inclusive Naming initiative that OIF began last year. We plan to review the current CoC, or go through the projects' current code bases to find instances where changes are needed, providing both examples and patches where appropriate to assist in these endeavours. The activities we focus on will be determined by attendance during the session.

Thanks,

Amy (spotz) on behalf of the Diversity and Inclusion WG

From james.slagle at gmail.com Fri Oct 15 14:32:15 2021
From: james.slagle at gmail.com (James Slagle)
Date: Fri, 15 Oct 2021 10:32:15 -0400
Subject: [TripleO] Hackfest at the PTG on Thursday
Message-ID:

Hi TripleO,

As previously mentioned, we're going to have a hackfest on Thursday next week during the PTG, from 1300-1700 UTC. The topic will be directord+task-core -- the proposed new task execution engine for TripleO.

I've prepared an etherpad for the hackfest ahead of time: https://etherpad.opendev.org/p/tripleo-directord-hackfest

There are details in the etherpad about how to set up 2 nodes for the hackfest. Virtual machines would work great for this, on any platform. I've run through it on a private OpenStack cloud, and also just using local libvirt. It would be good to get those set up ahead of the hackfest if you have time between now and Thursday.

I'm looking forward to Thursday and some informal hacking!
--
James Slagle

From corey.bryant at canonical.com Fri Oct 15 14:55:22 2021
From: corey.bryant at canonical.com (Corey Bryant)
Date: Fri, 15 Oct 2021 10:55:22 -0400
Subject: OpenStack Xena for Ubuntu 21.10 and Ubuntu 20.04 LTS
Message-ID:

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Xena on Ubuntu 21.10 (Impish Indri) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Xena release can be found at: https://www.openstack.org/software/xena

To get access to the Ubuntu Xena packages:

== Ubuntu 21.10 ==

OpenStack Xena is available by default for installation on Ubuntu 21.10.

== Ubuntu 20.04 LTS ==

The Ubuntu Cloud Archive for OpenStack Xena can be enabled on Ubuntu 20.04 by running the following command:

sudo add-apt-repository cloud-archive:xena

The Ubuntu Cloud Archive for Xena includes updates for: aodh, barbican, ceilometer, ceph (16.2.6), cinder, designate, designate-dashboard, dpdk (20.11.3), glance, gnocchi, heat, heat-dashboard, horizon, ironic, ironic-ui, keystone, magnum, magnum-ui, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, openvswitch (2.16.0), ovn (21.09.0), ovn-octavia-provider, placement, sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, vitrage, watcher, watcher-dashboard, zaqar, and zaqar-ui.

For a full list of packages and versions, please refer to: https://openstack-ci-reports.ubuntu.com/reports/cloud-archive/xena_versions.html

== Known issues ==

OVN 21.09.0 coming soon: https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1947003

== Reporting bugs ==

If you have any issues, please report bugs using the 'ubuntu-bug' tool to ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thank you to everyone who contributed to OpenStack Xena!

Corey
(on behalf of the Ubuntu OpenStack Engineering team)

From lucasagomes at gmail.com Fri Oct 15 08:44:21 2021
From: lucasagomes at gmail.com (Lucas Alvares Gomes)
Date: Fri, 15 Oct 2021 09:44:21 +0100
Subject: Re: [wallaby][neutron][ovn] SSL connection to OVN-NB/SB OVSDB
In-Reply-To:
References:
Message-ID:

Hi,

To configure the OVN Northbound and Southbound database connections with SSL, you need to run:

$ ovn-nbctl set-ssl
$ ovn-sbctl set-ssl

Then, for Neutron, you need to set these six configuration options (3 for the Northbound and 3 for the Southbound):

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ovn]
ovn_sb_ca_cert=""
ovn_sb_certificate=""
ovn_sb_private_key=""
ovn_nb_ca_cert=""
ovn_nb_certificate=""
ovn_nb_private_key=""

And last, configure the OVN metadata agent: do the same as above in /etc/neutron/neutron_ovn_metadata_agent.ini

That should be it!

Hope it helps,
Lucas

On Wed, Oct 13, 2021 at 5:02 PM Faisal Sheikh wrote:
> Hi,
>
> I am using the OpenStack Wallaby release with OVN on Ubuntu 20.04.
> My environment consists of 2 compute nodes and 1 controller node.
> ovs-vswitchd (Open vSwitch) 2.15.0 > Ubuntu Kernel Version: 5.4.0-88-generic > compute node1 172.16.30.1 > compute node2 172.16.30.3 > controller/Network node IP 172.16.30.46 > > I want to configure the ovn southbound and northbound database > to listen on SSL connection. Set a certificate, private key, and CA > certificate on both compute nodes and controller nodes in > /etc/neutron/plugins/ml2/ml2_conf.ini and using string ssl:IP:Port to > connect the southbound/northbound database but I am unable to > establish connection on SSL. It's not connecting to ovsdb-server on > 6641/6642. > Error in the neutron logs is like below: > > 2021-10-12 17:15:27.728 50561 WARNING neutron.quota.resource_registry > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] > security_group_rule is already registered > 2021-10-12 17:15:27.754 50561 WARNING keystonemiddleware.auth_token > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] AuthToken > middleware is set with keystone_authtoken.service_token_roles_required > set to False. This is backwards compatible but deprecated behaviour. > Please set this to True. > 2021-10-12 17:15:27.761 50561 INFO oslo_service.service > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Starting 1 > workers > 2021-10-12 17:15:27.768 50561 INFO neutron.service > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Neutron service > started, listening on 0.0.0.0:9696 > 2021-10-12 17:15:27.776 50561 ERROR ovsdbapp.backend.ovs_idl.idlutils > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unable to open > stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 > 2021-10-12 17:15:27.779 50561 CRITICAL neutron > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unhandled error: > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > 2021-10-12 17:15:27.779 50561 ERROR neutron Traceback (most recent call last): > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/bin/neutron-server", line 10, in > 2021-10-12 17:15:27.779 50561 ERROR neutron sys.exit(main()) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", > line 19, in main > 2021-10-12 17:15:27.779 50561 ERROR neutron > server.boot_server(wsgi_eventlet.eventlet_wsgi_server) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, > in boot_server > 2021-10-12 17:15:27.779 50561 ERROR neutron server_func() > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line > 24, in eventlet_wsgi_server > 2021-10-12 17:15:27.779 50561 ERROR neutron neutron_api = > service.serve_wsgi(service.NeutronApiService) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in > serve_wsgi > 2021-10-12 17:15:27.779 50561 ERROR neutron > registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", > line 60, in publish > 2021-10-12 17:15:27.779 50561 ERROR neutron > _get_callback_manager().publish(resource, event, trigger, > payload=payload) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > 
"/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 149, in publish > 2021-10-12 17:15:27.779 50561 ERROR neutron return > self.notify(resource, event, trigger, payload=payload) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in > _wrapped > 2021-10-12 17:15:27.779 50561 ERROR neutron raise db_exc.RetryRequest(e) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in > __exit__ > 2021-10-12 17:15:27.779 50561 ERROR neutron self.force_reraise() > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in > force_reraise > 2021-10-12 17:15:27.779 50561 ERROR neutron raise self.value > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in > _wrapped > 2021-10-12 17:15:27.779 50561 ERROR neutron return function(*args, **kwargs) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 174, in notify > 2021-10-12 17:15:27.779 50561 ERROR neutron raise > exceptions.CallbackFailure(errors=errors) > 2021-10-12 17:15:27.779 50561 ERROR neutron > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > 2021-10-12 17:15:27.779 50561 ERROR neutron > 2021-10-12 17:15:27.783 50572 ERROR ovsdbapp.backend.ovs_idl.idlutils > [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: > Unknown error -1 > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager [-] > Error during notification for > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-373774 > process, after_init: Exception: Could not retrieve schema from > ssl:172.16.30.46:6641 > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > Traceback (most recent call last): > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 197, in _notify_loop > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > callback(resource, event, trigger, **kwargs) > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 294, in post_fork_initialize > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > self._wait_for_pg_drop_event() > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 357, in _wait_for_pg_drop_event > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 136, in nb_schema_helper > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line > 721, in 
__get__ > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > return self.func(owner) > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", > line 102, in schema_helper > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > line 215, in get_schema_helper > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > return create_schema_helper(fetch_schema_json(connection, > schema_name)) > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > line 204, in fetch_schema_json > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > raise Exception("Could not retrieve schema from %s" % connection) > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > Exception: Could not retrieve schema from ssl:172.16.30.46:6641 > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > 2021-10-12 17:15:27.787 50572 INFO neutron.wsgi [-] (50572) wsgi > starting up on http://0.0.0.0:9696 > 2021-10-12 17:15:27.924 50572 INFO oslo_service.service [-] Parent > process has died unexpectedly, exiting > 2021-10-12 17:15:27.925 50572 INFO neutron.wsgi [-] (50572) wsgi > exited, is_accepting=True > 2021-10-12 17:15:29.709 50573 INFO neutron.common.config [-] Logging enabled! > 2021-10-12 17:15:29.710 50573 INFO neutron.common.config [-] > /usr/bin/neutron-server version 18.0.0 > 2021-10-12 17:15:29.712 50573 INFO neutron.common.config [-] Logging enabled! 
> 2021-10-12 17:15:29.713 50573 INFO neutron.common.config [-] > /usr/bin/neutron-server version 18.0.0 > 2021-10-12 17:15:29.899 50573 INFO keyring.backend [-] Loading KWallet > 2021-10-12 17:15:29.904 50573 INFO keyring.backend [-] Loading SecretService > 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading Windows > 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading chainer > 2021-10-12 17:15:29.908 50573 INFO keyring.backend [-] Loading macOS > 2021-10-12 17:15:29.927 50573 INFO neutron.manager [-] Loading core plugin: ml2 > 2021-10-12 17:15:30.355 50573 INFO neutron.plugins.ml2.managers [-] > Configured type driver names: ['flat', 'geneve'] > 2021-10-12 17:15:30.357 50573 INFO > neutron.plugins.ml2.drivers.type_flat [-] Arbitrary flat > physical_network names allowed > 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] > Loaded type driver names: ['flat', 'geneve'] > 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] > Registered types: dict_keys(['flat', 'geneve']) > 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] > Tenant network_types: ['geneve'] > 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] > Configured extension driver names: ['port_security', 'qos'] > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > Loaded extension driver names: ['port_security', 'qos'] > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > Registered extension drivers: ['port_security', 'qos'] > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > Configured mechanism driver names: ['ovn'] > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] > Loaded mechanism driver names: ['ovn'] > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] > Registered mechanism drivers: ['ovn'] > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] No > mechanism drivers provide segment reachability information for agent > scheduling. 
> 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] > Initializing driver for type 'flat' > 2021-10-12 17:15:30.456 50573 INFO > neutron.plugins.ml2.drivers.type_flat [-] ML2 FlatTypeDriver > initialization complete > 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] > Initializing driver for type 'geneve' > 2021-10-12 17:15:30.456 50573 INFO > neutron.plugins.ml2.drivers.type_tunnel [-] geneve ID ranges: [(1, > 65536)] > 2021-10-12 17:15:32.555 50573 INFO neutron.plugins.ml2.managers > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > extension driver 'port_security' > 2021-10-12 17:15:32.555 50573 INFO > neutron.plugins.ml2.extensions.port_security > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] > PortSecurityExtensionDriver initialization complete > 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > extension driver 'qos' > 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > mechanism driver 'ovn' > 2021-10-12 17:15:32.556 50573 INFO > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting > OVNMechanismDriver > 2021-10-12 17:15:32.562 50573 WARNING > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Firewall driver > configuration is ignored > 2021-10-12 17:15:32.586 50573 INFO > neutron.services.logapi.drivers.ovn.driver > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] OVN logging > driver registered > 2021-10-12 17:15:32.588 50573 INFO neutron.plugins.ml2.plugin > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Modular L2 Plugin > initialization complete > 2021-10-12 17:15:32.589 50573 INFO neutron.plugins.ml2.managers > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Got port-security > extension from driver 'port_security' > 2021-10-12 17:15:32.589 50573 INFO neutron.extensions.vlantransparent > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Disabled > vlantransparent extension. 
> 2021-10-12 17:15:32.589 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > ovn-router > 2021-10-12 17:15:32.597 50573 INFO neutron.services.ovn_l3.plugin > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting > OVNL3RouterPlugin > 2021-10-12 17:15:32.597 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > qos > 2021-10-12 17:15:32.600 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > metering > 2021-10-12 17:15:32.603 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > port_forwarding > 2021-10-12 17:15:32.605 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading service > plugin ovn-router, it is required by port_forwarding > 2021-10-12 17:15:32.606 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > segments > 2021-10-12 17:15:32.684 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > auto_allocate > 2021-10-12 17:15:32.685 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > tag > 2021-10-12 17:15:32.687 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > timestamp > 2021-10-12 17:15:32.689 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > network_ip_availability > 2021-10-12 17:15:32.691 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > flavors > 2021-10-12 17:15:32.693 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > revisions > 2021-10-12 17:15:32.695 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > extension manager. 
> 2021-10-12 17:15:32.696 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > address-group not supported by any of loaded plugins > 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > address-scope > 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > router-admin-state-down-before-update not supported by any of loaded > plugins > 2021-10-12 17:15:32.698 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > agent > 2021-10-12 17:15:32.699 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > agent-resources-synced not supported by any of loaded plugins > 2021-10-12 17:15:32.700 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > allowed-address-pairs > 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > auto-allocated-topology > 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > availability_zone > 2021-10-12 17:15:32.702 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > availability_zone_filter not supported by any of loaded plugins > 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > data-plane-status not supported by any of loaded plugins > 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > default-subnetpools > 2021-10-12 17:15:32.704 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > dhcp_agent_scheduler not supported by any of loaded plugins > 2021-10-12 17:15:32.705 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > dns-integration not supported by any of loaded plugins > 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > dns-domain-ports not supported by any of loaded plugins > 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dvr not > supported by any of loaded plugins > 2021-10-12 17:15:32.707 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > empty-string-filtering not supported by any of loaded plugins > 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > expose-l3-conntrack-helper not supported by any of loaded plugins > 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > expose-port-forwarding-in-fip > 2021-10-12 17:15:32.709 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > external-net > 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > extra_dhcp_opt > 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded 
extension: > extraroute > 2021-10-12 17:15:32.711 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > extraroute-atomic not supported by any of loaded plugins > 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > filter-validation not supported by any of loaded plugins > 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > floating-ip-port-forwarding-description > 2021-10-12 17:15:32.713 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > fip-port-details > 2021-10-12 17:15:32.714 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > flavors > 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > floating-ip-port-forwarding > 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > floatingip-pools not supported by any of loaded plugins > 2021-10-12 17:15:32.716 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > ip_allocation > 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > ip-substring-filtering not supported by any of loaded plugins > 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > l2_adjacency > 2021-10-12 17:15:32.718 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > router > 2021-10-12 17:15:32.719 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > l3-conntrack-helper not supported by any of loaded plugins > 2021-10-12 17:15:32.720 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > ext-gw-mode > 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-ha > not supported by any of loaded plugins > 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > l3-flavors not supported by any of loaded plugins > 2021-10-12 17:15:32.722 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > l3-port-ip-change-not-allowed not supported by any of loaded plugins > 2021-10-12 17:15:32.723 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > l3_agent_scheduler not supported by any of loaded plugins > 2021-10-12 17:15:32.724 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension logging > not supported by any of loaded plugins > 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > metering > 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > metering_source_and_destination_fields > 2021-10-12 17:15:32.726 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > multi-provider > 2021-10-12 
17:15:32.727 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > net-mtu > 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > net-mtu-writable > 2021-10-12 17:15:32.728 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > network_availability_zone > 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > network-ip-availability > 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > network-segment-range not supported by any of loaded plugins > 2021-10-12 17:15:32.730 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > pagination > 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > port-device-profile > 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > port-mac-address-regenerate not supported by any of loaded plugins > 2021-10-12 17:15:32.732 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > port-numa-affinity-policy > 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > port-resource-request > 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > binding > 2021-10-12 17:15:32.734 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > binding-extended not supported by any of loaded plugins > 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > port-security > 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > project-id > 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > provider > 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos > 2021-10-12 17:15:32.737 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-bw-limit-direction > 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-bw-minimum-ingress > 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-default > 2021-10-12 17:15:32.739 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-fip > 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > qos-gateway-ip not supported by any of loaded plugins > 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-port-network-policy > 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions > 
[req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-rule-type-details > 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-rules-alias > 2021-10-12 17:15:32.742 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > quotas > 2021-10-12 17:15:32.743 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > quota_details > 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > rbac-policies > 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > rbac-address-group not supported by any of loaded plugins > 2021-10-12 17:15:32.745 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > rbac-address-scope > 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > rbac-security-groups not supported by any of loaded plugins > 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > rbac-subnetpool not supported by any of loaded plugins > 2021-10-12 17:15:32.747 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > revision-if-match > 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-revisions > 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > router_availability_zone > 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > router-service-type not supported by any of loaded plugins > 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > security-groups-normalized-cidr > 2021-10-12 17:15:32.750 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > port-security-groups-filtering not supported by any of loaded plugins > 2021-10-12 17:15:32.751 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > security-groups-remote-address-group > 2021-10-12 17:15:32.756 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > security-group > 2021-10-12 17:15:32.757 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > segment > 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > segments-peer-subnet-host-routes > 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > service-type > 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > sorting > 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-segment > 2021-10-12 17:15:32.760 50573 INFO 
neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-description > 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > stateful-security-group not supported by any of loaded plugins > 2021-10-12 17:15:32.761 50573 WARNING neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Did not find > expected name "Stdattrs_common" in > /usr/lib/python3/dist-packages/neutron/extensions/stdattrs_common.py > 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > subnet-dns-publish-fixed-ip not supported by any of loaded plugins > 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > subnet_onboard not supported by any of loaded plugins > 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > subnet-segmentid-writable > 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > subnet-service-types not supported by any of loaded plugins > 2021-10-12 17:15:32.764 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > subnet_allocation > 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > subnetpool-prefix-ops not supported by any of loaded plugins > 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > tag-ports-during-bulk-creation not supported by any of loaded plugins > 2021-10-12 17:15:32.766 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-tag > 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-timestamp > 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension trunk > not supported by any of loaded plugins > 2021-10-12 17:15:32.768 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > trunk-details not supported by any of loaded plugins > 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > uplink-status-propagation not supported by any of loaded plugins > 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > vlan-transparent not supported by any of loaded plugins > 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:network > 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:subnet > 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:subnetpool > 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of 
TrackedResource for resource:port > 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:router > 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:floatingip > 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of CountableResource for resource:rbac_policy > 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:security_group > 2021-10-12 17:15:32.779 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:security_group_rule > 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:router > 2021-10-12 17:15:32.781 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] router is already > registered > 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:floatingip > 2021-10-12 17:15:32.782 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] floatingip is > already registered > 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of CountableResource for resource:rbac_policy > 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] rbac_policy is > already registered > 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:security_group > 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] security_group is > already registered > 2021-10-12 17:15:32.784 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:security_group_rule > 2021-10-12 17:15:32.784 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] > security_group_rule is already registered > 2021-10-12 17:15:32.810 50573 WARNING keystonemiddleware.auth_token > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] AuthToken > middleware is set with keystone_authtoken.service_token_roles_required > set to False. This is backwards compatible but deprecated behaviour. > Please set this to True. 
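The AuthToken warning just above maps to a single keystonemiddleware
option. A minimal sketch of the change the warning asks for, assuming
the stock neutron.conf layout (the section and option names come from
keystonemiddleware; the exact file location is deployment-specific):

  # /etc/neutron/neutron.conf
  [keystone_authtoken]
  # Require service tokens to carry the configured service roles,
  # replacing the deprecated permissive behaviour the warning flags.
  service_token_roles_required = True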
> 2021-10-12 17:15:32.816 50573 INFO oslo_service.service
> [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting 1
> workers
> 2021-10-12 17:15:32.824 50573 INFO neutron.service
> [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Neutron service
> started, listening on 0.0.0.0:9696
> 2021-10-12 17:15:32.831 50573 ERROR ovsdbapp.backend.ovs_idl.idlutils
> [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unable to open
> stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1
> 2021-10-12 17:15:32.834 50573 CRITICAL neutron
> [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unhandled error:
> neutron_lib.callbacks.exceptions.CallbackFailure: Callback
> neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549
> failed with "Could not retrieve schema from ssl:172.16.30.46:6641"
> 2021-10-12 17:15:32.834 50573 ERROR neutron Traceback (most recent call last):
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/bin/neutron-server", line 10, in <module>
> 2021-10-12 17:15:32.834 50573 ERROR neutron    sys.exit(main())
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py",
> line 19, in main
> 2021-10-12 17:15:32.834 50573 ERROR neutron
> server.boot_server(wsgi_eventlet.eventlet_wsgi_server)
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68,
> in boot_server
> 2021-10-12 17:15:32.834 50573 ERROR neutron    server_func()
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line
> 24, in eventlet_wsgi_server
> 2021-10-12 17:15:32.834 50573 ERROR neutron    neutron_api =
> service.serve_wsgi(service.NeutronApiService)
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in
> serve_wsgi
> 2021-10-12 17:15:32.834 50573 ERROR neutron
> registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service)
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py",
> line 60, in publish
> 2021-10-12 17:15:32.834 50573 ERROR neutron
> _get_callback_manager().publish(resource, event, trigger,
> payload=payload)
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py",
> line 149, in publish
> 2021-10-12 17:15:32.834 50573 ERROR neutron    return
> self.notify(resource, event, trigger, payload=payload)
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in
> _wrapped
> 2021-10-12 17:15:32.834 50573 ERROR neutron    raise db_exc.RetryRequest(e)
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in
> __exit__
> 2021-10-12 17:15:32.834 50573 ERROR neutron    self.force_reraise()
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in
> force_reraise
> 2021-10-12 17:15:32.834 50573 ERROR neutron    raise self.value
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in
> _wrapped
> 2021-10-12 17:15:32.834 50573 ERROR neutron    return function(*args, **kwargs)
> 2021-10-12 17:15:32.834 50573 ERROR neutron  File
> "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py",
> line 174, in notify
> 
2021-10-12 17:15:32.834 50573 ERROR neutron raise > exceptions.CallbackFailure(errors=errors) > 2021-10-12 17:15:32.834 50573 ERROR neutron > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > 2021-10-12 17:15:32.834 50573 ERROR neutron > 2021-10-12 17:15:32.838 50582 ERROR ovsdbapp.backend.ovs_idl.idlutils > [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: > Unknown error -1 > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager [-] > Error during notification for > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-904522 > process, after_init: Exception: Could not retrieve schema from > ssl:172.16.30.46:6641 > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > Traceback (most recent call last): > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 197, in _notify_loop > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > callback(resource, event, trigger, **kwargs) > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 294, in post_fork_initialize > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > self._wait_for_pg_drop_event() > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 357, in _wait_for_pg_drop_event > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 136, in nb_schema_helper > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line > 721, in __get__ > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > return self.func(owner) > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", > line 102, in schema_helper > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > line 215, in get_schema_helper > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > return create_schema_helper(fetch_schema_json(connection, > schema_name)) > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > line 204, in fetch_schema_json > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > raise Exception("Could not retrieve schema from %s" % connection) > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > Exception: Could 
not retrieve schema from ssl:172.16.30.46:6641
> 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager
> 2021-10-12 17:15:32.842 50582 INFO neutron.wsgi [-] (50582) wsgi
> starting up on http://0.0.0.0:9696
> 2021-10-12 17:15:32.961 50582 INFO oslo_service.service [-] Parent
> process has died unexpectedly, exiting
> 2021-10-12 17:15:32.963 50582 INFO neutron.wsgi [-] (50582) wsgi
> exited, is_accepting=True
> 2021-10-12 17:15:34.722 50583 INFO neutron.common.config [-] Logging enabled!
>
> I would really appreciate any input in this regard.
>
> Best regards,
> Faisal Sheikh
>

From faisalsheikh.cyber at gmail.com  Fri Oct 15 09:59:33 2021
From: faisalsheikh.cyber at gmail.com (Faisal Sheikh)
Date: Fri, 15 Oct 2021 14:59:33 +0500
Subject: [wallaby][neutron][ovn] SSL connection to OVN-NB/SB OVSDB
In-Reply-To:
References:
Message-ID:

Hi Lucas,

Thanks for your help. I was missing these two commands:

$ ovn-nbctl set-ssl <private-key> <certificate> <ca-cert>
$ ovn-sbctl set-ssl <private-key> <certificate> <ca-cert>

It worked for me and the SSL connection to OVN NB/SB is now
established. Kudos.

BR,
Muhammad Faisal Sheikh

On Fri, Oct 15, 2021 at 1:44 PM Lucas Alvares Gomes wrote:
>
> Hi,
>
> To configure the OVN Northbound and Southbound database connections
> with SSL you need to run:
>
> $ ovn-nbctl set-ssl <private-key> <certificate> <ca-cert>
> $ ovn-sbctl set-ssl <private-key> <certificate> <ca-cert>
>
> Then, for Neutron, you need to set these six configuration options (3
> for the Northbound and 3 for the Southbound database):
>
> # /etc/neutron/plugins/ml2/ml2_conf.ini
> [ovn]
> ovn_sb_ca_cert=""
> ovn_sb_certificate=""
> ovn_sb_private_key=""
> ovn_nb_ca_cert=""
> ovn_nb_certificate=""
> ovn_nb_private_key=""
>
> And last, configure the OVN metadata agent. Do the same as above in
> /etc/neutron/neutron_ovn_metadata_agent.ini
>
> That should be it!
>
> Hope it helps,
> Lucas
>
>
>
> On Wed, Oct 13, 2021 at 5:02 PM Faisal Sheikh wrote:
> >
> > Hi,
> >
> > I am using the OpenStack Wallaby release with OVN on Ubuntu 20.04.
> > My environment consists of 2 compute nodes and 1 controller node.
> > ovs-vswitchd (Open vSwitch) 2.15.0
> > Ubuntu Kernel Version: 5.4.0-88-generic
> > compute node1 172.16.30.1
> > compute node2 172.16.30.3
> > controller/Network node IP 172.16.30.46
> >
> > I want to configure the OVN southbound and northbound databases
> > to listen for SSL connections. I set a certificate, private key, and CA
> > certificate on both compute nodes and the controller node in
> > /etc/neutron/plugins/ml2/ml2_conf.ini and use the string ssl:IP:Port to
> > connect to the southbound/northbound database, but I am unable to
> > establish a connection over SSL. It's not connecting to ovsdb-server on
> > 6641/6642.
> > The error in the neutron logs is as below:
> >
> > 2021-10-12 17:15:27.728 50561 WARNING neutron.quota.resource_registry
> > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -]
> > security_group_rule is already registered
> > 2021-10-12 17:15:27.754 50561 WARNING keystonemiddleware.auth_token
> > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] AuthToken
> > middleware is set with keystone_authtoken.service_token_roles_required
> > set to False. This is backwards compatible but deprecated behaviour.
> > Please set this to True.
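To pull the fix from this thread together in one place, here is a
sketch only: the 172.16.30.46 endpoint, the port numbers, and the six
[ovn] option names are taken from the messages above, while all
certificate paths are placeholders that must match the PKI material
actually deployed:

$ # on the node running the OVN NB/SB ovsdb-servers
$ ovn-nbctl set-ssl /etc/ovn/ovn-privkey.pem /etc/ovn/ovn-cert.pem /etc/ovn/cacert.pem
$ ovn-sbctl set-ssl /etc/ovn/ovn-privkey.pem /etc/ovn/ovn-cert.pem /etc/ovn/cacert.pem
$ # make both databases listen over SSL on the ports Neutron dials
$ ovn-nbctl set-connection pssl:6641
$ ovn-sbctl set-connection pssl:6642

# /etc/neutron/plugins/ml2/ml2_conf.ini
# (repeat in /etc/neutron/neutron_ovn_metadata_agent.ini)
[ovn]
ovn_nb_connection = ssl:172.16.30.46:6641
ovn_sb_connection = ssl:172.16.30.46:6642
ovn_nb_ca_cert = /etc/neutron/cacert.pem
ovn_nb_certificate = /etc/neutron/neutron-cert.pem
ovn_nb_private_key = /etc/neutron/neutron-privkey.pem
ovn_sb_ca_cert = /etc/neutron/cacert.pem
ovn_sb_certificate = /etc/neutron/neutron-cert.pem
ovn_sb_private_key = /etc/neutron/neutron-privkey.pem

A quick end-to-end check with the same PKI material Neutron will use
(--db, --private-key, --certificate and --ca-cert are standard
ovn-nbctl options):

$ ovn-nbctl --db=ssl:172.16.30.46:6641 \
    --private-key=/etc/neutron/neutron-privkey.pem \
    --certificate=/etc/neutron/neutron-cert.pem \
    --ca-cert=/etc/neutron/cacert.pem show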
> > 2021-10-12 17:15:27.761 50561 INFO oslo_service.service > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Starting 1 > > workers > > 2021-10-12 17:15:27.768 50561 INFO neutron.service > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Neutron service > > started, listening on 0.0.0.0:9696 > > 2021-10-12 17:15:27.776 50561 ERROR ovsdbapp.backend.ovs_idl.idlutils > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unable to open > > stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 > > 2021-10-12 17:15:27.779 50561 CRITICAL neutron > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unhandled error: > > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 > > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > > 2021-10-12 17:15:27.779 50561 ERROR neutron Traceback (most recent call last): > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/bin/neutron-server", line 10, in > > 2021-10-12 17:15:27.779 50561 ERROR neutron sys.exit(main()) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", > > line 19, in main > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > server.boot_server(wsgi_eventlet.eventlet_wsgi_server) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, > > in boot_server > > 2021-10-12 17:15:27.779 50561 ERROR neutron server_func() > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line > > 24, in eventlet_wsgi_server > > 2021-10-12 17:15:27.779 50561 ERROR neutron neutron_api = > > service.serve_wsgi(service.NeutronApiService) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in > > serve_wsgi > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", > > line 60, in publish > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > _get_callback_manager().publish(resource, event, trigger, > > payload=payload) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 149, in publish > > 2021-10-12 17:15:27.779 50561 ERROR neutron return > > self.notify(resource, event, trigger, payload=payload) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in > > _wrapped > > 2021-10-12 17:15:27.779 50561 ERROR neutron raise db_exc.RetryRequest(e) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in > > __exit__ > > 2021-10-12 17:15:27.779 50561 ERROR neutron self.force_reraise() > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in > > force_reraise > > 2021-10-12 17:15:27.779 50561 ERROR neutron raise self.value > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in > > _wrapped > > 2021-10-12 17:15:27.779 50561 ERROR neutron return function(*args, **kwargs) > > 2021-10-12 
17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 174, in notify > > 2021-10-12 17:15:27.779 50561 ERROR neutron raise > > exceptions.CallbackFailure(errors=errors) > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 > > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > 2021-10-12 17:15:27.783 50572 ERROR ovsdbapp.backend.ovs_idl.idlutils > > [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: > > Unknown error -1 > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager [-] > > Error during notification for > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-373774 > > process, after_init: Exception: Could not retrieve schema from > > ssl:172.16.30.46:6641 > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > Traceback (most recent call last): > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 197, in _notify_loop > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > callback(resource, event, trigger, **kwargs) > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 294, in post_fork_initialize > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > self._wait_for_pg_drop_event() > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 357, in _wait_for_pg_drop_event > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 136, in nb_schema_helper > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line > > 721, in __get__ > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > return self.func(owner) > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", > > line 102, in schema_helper > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > > line 215, in get_schema_helper > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > return create_schema_helper(fetch_schema_json(connection, > > schema_name)) > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > > line 
204, in fetch_schema_json > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > raise Exception("Could not retrieve schema from %s" % connection) > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > Exception: Could not retrieve schema from ssl:172.16.30.46:6641 > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > 2021-10-12 17:15:27.787 50572 INFO neutron.wsgi [-] (50572) wsgi > > starting up on http://0.0.0.0:9696 > > 2021-10-12 17:15:27.924 50572 INFO oslo_service.service [-] Parent > > process has died unexpectedly, exiting > > 2021-10-12 17:15:27.925 50572 INFO neutron.wsgi [-] (50572) wsgi > > exited, is_accepting=True > > 2021-10-12 17:15:29.709 50573 INFO neutron.common.config [-] Logging enabled! > > 2021-10-12 17:15:29.710 50573 INFO neutron.common.config [-] > > /usr/bin/neutron-server version 18.0.0 > > 2021-10-12 17:15:29.712 50573 INFO neutron.common.config [-] Logging enabled! > > 2021-10-12 17:15:29.713 50573 INFO neutron.common.config [-] > > /usr/bin/neutron-server version 18.0.0 > > 2021-10-12 17:15:29.899 50573 INFO keyring.backend [-] Loading KWallet > > 2021-10-12 17:15:29.904 50573 INFO keyring.backend [-] Loading SecretService > > 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading Windows > > 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading chainer > > 2021-10-12 17:15:29.908 50573 INFO keyring.backend [-] Loading macOS > > 2021-10-12 17:15:29.927 50573 INFO neutron.manager [-] Loading core plugin: ml2 > > 2021-10-12 17:15:30.355 50573 INFO neutron.plugins.ml2.managers [-] > > Configured type driver names: ['flat', 'geneve'] > > 2021-10-12 17:15:30.357 50573 INFO > > neutron.plugins.ml2.drivers.type_flat [-] Arbitrary flat > > physical_network names allowed > > 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] > > Loaded type driver names: ['flat', 'geneve'] > > 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] > > Registered types: dict_keys(['flat', 'geneve']) > > 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] > > Tenant network_types: ['geneve'] > > 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] > > Configured extension driver names: ['port_security', 'qos'] > > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > > Loaded extension driver names: ['port_security', 'qos'] > > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > > Registered extension drivers: ['port_security', 'qos'] > > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > > Configured mechanism driver names: ['ovn'] > > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] > > Loaded mechanism driver names: ['ovn'] > > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] > > Registered mechanism drivers: ['ovn'] > > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] No > > mechanism drivers provide segment reachability information for agent > > scheduling. 
> > 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] > > Initializing driver for type 'flat' > > 2021-10-12 17:15:30.456 50573 INFO > > neutron.plugins.ml2.drivers.type_flat [-] ML2 FlatTypeDriver > > initialization complete > > 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] > > Initializing driver for type 'geneve' > > 2021-10-12 17:15:30.456 50573 INFO > > neutron.plugins.ml2.drivers.type_tunnel [-] geneve ID ranges: [(1, > > 65536)] > > 2021-10-12 17:15:32.555 50573 INFO neutron.plugins.ml2.managers > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > > extension driver 'port_security' > > 2021-10-12 17:15:32.555 50573 INFO > > neutron.plugins.ml2.extensions.port_security > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] > > PortSecurityExtensionDriver initialization complete > > 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > > extension driver 'qos' > > 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > > mechanism driver 'ovn' > > 2021-10-12 17:15:32.556 50573 INFO > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting > > OVNMechanismDriver > > 2021-10-12 17:15:32.562 50573 WARNING > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Firewall driver > > configuration is ignored > > 2021-10-12 17:15:32.586 50573 INFO > > neutron.services.logapi.drivers.ovn.driver > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] OVN logging > > driver registered > > 2021-10-12 17:15:32.588 50573 INFO neutron.plugins.ml2.plugin > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Modular L2 Plugin > > initialization complete > > 2021-10-12 17:15:32.589 50573 INFO neutron.plugins.ml2.managers > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Got port-security > > extension from driver 'port_security' > > 2021-10-12 17:15:32.589 50573 INFO neutron.extensions.vlantransparent > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Disabled > > vlantransparent extension. 
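For reference, the ml2 configuration these startup lines imply, as a
sketch reconstructed from the log rather than a copy of the actual
file:

[ml2]
type_drivers = flat,geneve
tenant_network_types = geneve
mechanism_drivers = ovn
extension_drivers = port_security,qos

[ml2_type_geneve]
# matches the "geneve ID ranges: [(1, 65536)]" line above
vni_ranges = 1:65536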
> > 2021-10-12 17:15:32.589 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > ovn-router > > 2021-10-12 17:15:32.597 50573 INFO neutron.services.ovn_l3.plugin > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting > > OVNL3RouterPlugin > > 2021-10-12 17:15:32.597 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > qos > > 2021-10-12 17:15:32.600 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > metering > > 2021-10-12 17:15:32.603 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > port_forwarding > > 2021-10-12 17:15:32.605 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading service > > plugin ovn-router, it is required by port_forwarding > > 2021-10-12 17:15:32.606 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > segments > > 2021-10-12 17:15:32.684 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > auto_allocate > > 2021-10-12 17:15:32.685 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > tag > > 2021-10-12 17:15:32.687 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > timestamp > > 2021-10-12 17:15:32.689 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > network_ip_availability > > 2021-10-12 17:15:32.691 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > flavors > > 2021-10-12 17:15:32.693 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > revisions > > 2021-10-12 17:15:32.695 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > > extension manager. 
> > 2021-10-12 17:15:32.696 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > address-group not supported by any of loaded plugins > > 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > address-scope > > 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > router-admin-state-down-before-update not supported by any of loaded > > plugins > > 2021-10-12 17:15:32.698 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > agent > > 2021-10-12 17:15:32.699 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > agent-resources-synced not supported by any of loaded plugins > > 2021-10-12 17:15:32.700 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > allowed-address-pairs > > 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > auto-allocated-topology > > 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > availability_zone > > 2021-10-12 17:15:32.702 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > availability_zone_filter not supported by any of loaded plugins > > 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > data-plane-status not supported by any of loaded plugins > > 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > default-subnetpools > > 2021-10-12 17:15:32.704 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > dhcp_agent_scheduler not supported by any of loaded plugins > > 2021-10-12 17:15:32.705 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > dns-integration not supported by any of loaded plugins > > 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > dns-domain-ports not supported by any of loaded plugins > > 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dvr not > > supported by any of loaded plugins > > 2021-10-12 17:15:32.707 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > empty-string-filtering not supported by any of loaded plugins > > 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > expose-l3-conntrack-helper not supported by any of loaded plugins > > 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > expose-port-forwarding-in-fip > > 2021-10-12 17:15:32.709 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > external-net > > 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > extra_dhcp_opt > 
> 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > extraroute > > 2021-10-12 17:15:32.711 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > extraroute-atomic not supported by any of loaded plugins > > 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > filter-validation not supported by any of loaded plugins > > 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > floating-ip-port-forwarding-description > > 2021-10-12 17:15:32.713 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > fip-port-details > > 2021-10-12 17:15:32.714 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > flavors > > 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > floating-ip-port-forwarding > > 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > floatingip-pools not supported by any of loaded plugins > > 2021-10-12 17:15:32.716 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > ip_allocation > > 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > ip-substring-filtering not supported by any of loaded plugins > > 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > l2_adjacency > > 2021-10-12 17:15:32.718 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > router > > 2021-10-12 17:15:32.719 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > l3-conntrack-helper not supported by any of loaded plugins > > 2021-10-12 17:15:32.720 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > ext-gw-mode > > 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-ha > > not supported by any of loaded plugins > > 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > l3-flavors not supported by any of loaded plugins > > 2021-10-12 17:15:32.722 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > l3-port-ip-change-not-allowed not supported by any of loaded plugins > > 2021-10-12 17:15:32.723 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > l3_agent_scheduler not supported by any of loaded plugins > > 2021-10-12 17:15:32.724 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension logging > > not supported by any of loaded plugins > > 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > metering > > 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions > > 
[req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > metering_source_and_destination_fields > > 2021-10-12 17:15:32.726 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > multi-provider > > 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > net-mtu > > 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > net-mtu-writable > > 2021-10-12 17:15:32.728 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > network_availability_zone > > 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > network-ip-availability > > 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > network-segment-range not supported by any of loaded plugins > > 2021-10-12 17:15:32.730 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > pagination > > 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > port-device-profile > > 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > port-mac-address-regenerate not supported by any of loaded plugins > > 2021-10-12 17:15:32.732 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > port-numa-affinity-policy > > 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > port-resource-request > > 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > binding > > 2021-10-12 17:15:32.734 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > binding-extended not supported by any of loaded plugins > > 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > port-security > > 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > project-id > > 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > provider > > 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos > > 2021-10-12 17:15:32.737 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-bw-limit-direction > > 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-bw-minimum-ingress > > 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-default > > 2021-10-12 17:15:32.739 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-fip > > 2021-10-12 17:15:32.740 50573 INFO 
neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > qos-gateway-ip not supported by any of loaded plugins > > 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-port-network-policy > > 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-rule-type-details > > 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-rules-alias > > 2021-10-12 17:15:32.742 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > quotas > > 2021-10-12 17:15:32.743 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > quota_details > > 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > rbac-policies > > 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > rbac-address-group not supported by any of loaded plugins > > 2021-10-12 17:15:32.745 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > rbac-address-scope > > 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > rbac-security-groups not supported by any of loaded plugins > > 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > rbac-subnetpool not supported by any of loaded plugins > > 2021-10-12 17:15:32.747 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > revision-if-match > > 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-revisions > > 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > router_availability_zone > > 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > router-service-type not supported by any of loaded plugins > > 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > security-groups-normalized-cidr > > 2021-10-12 17:15:32.750 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > port-security-groups-filtering not supported by any of loaded plugins > > 2021-10-12 17:15:32.751 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > security-groups-remote-address-group > > 2021-10-12 17:15:32.756 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > security-group > > 2021-10-12 17:15:32.757 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > segment > > 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > segments-peer-subnet-host-routes > > 2021-10-12 
17:15:32.758 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > service-type > > 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > sorting > > 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-segment > > 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-description > > 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > stateful-security-group not supported by any of loaded plugins > > 2021-10-12 17:15:32.761 50573 WARNING neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Did not find > > expected name "Stdattrs_common" in > > /usr/lib/python3/dist-packages/neutron/extensions/stdattrs_common.py > > 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > subnet-dns-publish-fixed-ip not supported by any of loaded plugins > > 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > subnet_onboard not supported by any of loaded plugins > > 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > subnet-segmentid-writable > > 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > subnet-service-types not supported by any of loaded plugins > > 2021-10-12 17:15:32.764 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > subnet_allocation > > 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > subnetpool-prefix-ops not supported by any of loaded plugins > > 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > tag-ports-during-bulk-creation not supported by any of loaded plugins > > 2021-10-12 17:15:32.766 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-tag > > 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-timestamp > > 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension trunk > > not supported by any of loaded plugins > > 2021-10-12 17:15:32.768 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > trunk-details not supported by any of loaded plugins > > 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > uplink-status-propagation not supported by any of loaded plugins > > 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > vlan-transparent not supported by any of loaded plugins > > 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry > > 
[req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:network > > 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:subnet > > 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:subnetpool > > 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:port > > 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:router > > 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:floatingip > > 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of CountableResource for resource:rbac_policy > > 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:security_group > > 2021-10-12 17:15:32.779 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:security_group_rule > > 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:router > > 2021-10-12 17:15:32.781 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] router is already > > registered > > 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:floatingip > > 2021-10-12 17:15:32.782 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] floatingip is > > already registered > > 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of CountableResource for resource:rbac_policy > > 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] rbac_policy is > > already registered > > 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:security_group > > 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] security_group is > > already registered > > 2021-10-12 17:15:32.784 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:security_group_rule > > 2021-10-12 17:15:32.784 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] > > security_group_rule is already registered > > 2021-10-12 17:15:32.810 50573 WARNING 
keystonemiddleware.auth_token > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] AuthToken > > middleware is set with keystone_authtoken.service_token_roles_required > > set to False. This is backwards compatible but deprecated behaviour. > > Please set this to True. > > 2021-10-12 17:15:32.816 50573 INFO oslo_service.service > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting 1 > > workers > > 2021-10-12 17:15:32.824 50573 INFO neutron.service > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Neutron service > > started, listening on 0.0.0.0:9696 > > 2021-10-12 17:15:32.831 50573 ERROR ovsdbapp.backend.ovs_idl.idlutils > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unable to open > > stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 > > 2021-10-12 17:15:32.834 50573 CRITICAL neutron > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unhandled error: > > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 > > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > > 2021-10-12 17:15:32.834 50573 ERROR neutron Traceback (most recent call last): > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/bin/neutron-server", line 10, in > > 2021-10-12 17:15:32.834 50573 ERROR neutron sys.exit(main()) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", > > line 19, in main > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > server.boot_server(wsgi_eventlet.eventlet_wsgi_server) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, > > in boot_server > > 2021-10-12 17:15:32.834 50573 ERROR neutron server_func() > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line > > 24, in eventlet_wsgi_server > > 2021-10-12 17:15:32.834 50573 ERROR neutron neutron_api = > > service.serve_wsgi(service.NeutronApiService) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in > > serve_wsgi > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", > > line 60, in publish > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > _get_callback_manager().publish(resource, event, trigger, > > payload=payload) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 149, in publish > > 2021-10-12 17:15:32.834 50573 ERROR neutron return > > self.notify(resource, event, trigger, payload=payload) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in > > _wrapped > > 2021-10-12 17:15:32.834 50573 ERROR neutron raise db_exc.RetryRequest(e) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in > > __exit__ > > 2021-10-12 17:15:32.834 50573 ERROR neutron self.force_reraise() > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in > > force_reraise > > 2021-10-12 17:15:32.834 50573 
ERROR neutron raise self.value > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in > > _wrapped > > 2021-10-12 17:15:32.834 50573 ERROR neutron return function(*args, **kwargs) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 174, in notify > > 2021-10-12 17:15:32.834 50573 ERROR neutron raise > > exceptions.CallbackFailure(errors=errors) > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 > > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > 2021-10-12 17:15:32.838 50582 ERROR ovsdbapp.backend.ovs_idl.idlutils > > [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: > > Unknown error -1 > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager [-] > > Error during notification for > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-904522 > > process, after_init: Exception: Could not retrieve schema from > > ssl:172.16.30.46:6641 > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > Traceback (most recent call last): > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 197, in _notify_loop > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > callback(resource, event, trigger, **kwargs) > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 294, in post_fork_initialize > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > self._wait_for_pg_drop_event() > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 357, in _wait_for_pg_drop_event > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 136, in nb_schema_helper > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line > > 721, in __get__ > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > return self.func(owner) > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", > > line 102, in schema_helper > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > > line 215, in get_schema_helper > > 2021-10-12 17:15:32.840 50582 ERROR 
neutron_lib.callbacks.manager > > return create_schema_helper(fetch_schema_json(connection, > > schema_name)) > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > > line 204, in fetch_schema_json > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > raise Exception("Could not retrieve schema from %s" % connection) > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > Exception: Could not retrieve schema from ssl:172.16.30.46:6641 > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > 2021-10-12 17:15:32.842 50582 INFO neutron.wsgi [-] (50582) wsgi > > starting up on http://0.0.0.0:9696 > > 2021-10-12 17:15:32.961 50582 INFO oslo_service.service [-] Parent > > process has died unexpectedly, exiting > > 2021-10-12 17:15:32.963 50582 INFO neutron.wsgi [-] (50582) wsgi > > exited, is_accepting=True > > 2021-10-12 17:15:34.722 50583 INFO neutron.common.config [-] Logging enabled! > > > > I would really appreciate any input in this regard. > > > > Best regards, > > Faisal Sheikh > > From gustavofaganello.santos at windriver.com Fri Oct 15 15:23:07 2021 From: gustavofaganello.santos at windriver.com (Gustavo Faganello Santos) Date: Fri, 15 Oct 2021 12:23:07 -0300 Subject: [nova][dev] Reattaching mediated devices to instance coming back from suspended state In-Reply-To: <2940f202-d632-c8f1-a0ed-d4473a9fc9c6@gmail.com> References: <2940f202-d632-c8f1-a0ed-d4473a9fc9c6@gmail.com> Message-ID: <38d1c208-a522-6e17-a469-1c069c04051a@windriver.com> On 14/10/2021 16:02, melanie witt wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > On Thu Oct 14 2021 11:37:43 GMT-0700 (Pacific Daylight Time), Gustavo > Faganello Santos wrote: >> Hello, everyone! >> >> I'm working on a solution for Nova to reattach previously used mediated >> devices (vGPU instances, in my case) to VMs coming back from suspension, >> which seems to have been left on hold in the past [1] because of an old >> libvirt limitation, and I'm having a bit of a hard time doing so, since >> I'm not too familiar with the repo. >> >> I have tried creating a function that does the opposite of the mdev >> detach function, but the get_all_devices method seems to return an empty >> list when looking for mdevs at the moment of resuming the VM. Looking at >> the instance's XML file, I noticed that the mdev property remains while >> the VM is suspended, but it disappears AFTER the whole resume function >> is executed. I'm failing to understand why the mdev list returns empty, >> even though the mdev property exists in the instance's XML, and also why >> the mdev is removed from the XML after the resume function is executed. >> >> With that in mind, does anyone know if there's been any attempt to solve >> this issue since it was left on hold? If not, is there anything I should >> know while I attempt to do so? > > I'm not sure whether this will be helpful but there is similar (or > adjacent?) work currently in progress to handle the case of recreating > mediated devices after a compute host reboot [2][3]. The launchpad bug > contains some info on workarounds for this case and the proposed patch > pulls allocation information from the placement service to recreate the > mdevs. Thank you for your reply! I'm aware of that work, but I'm afraid that it unfortunately does not relate too much to what I'm going for. 
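(A quick way to reproduce what Gustavo describes, outside of nova: inspect the
domain XML at each stage and look for the mdev hostdevs. This is only a
diagnostic sketch -- the instance name instance-00000001 is a placeholder, and
it assumes xmllint is installed and libvirt is new enough to provide
managedsave-dumpxml:

    # nova suspend is a libvirt managed save; the mdev <hostdev> entries
    # should still be visible in the saved image's XML:
    virsh managedsave-dumpxml instance-00000001 \
        | xmllint --xpath "//hostdev[@type='mdev']/source/address/@uuid" -

    # after resume, the same query against the live domain XML shows
    # whether the mdevs survived the regeneration of the guest XML:
    virsh dumpxml instance-00000001 \
        | xmllint --xpath "//hostdev[@type='mdev']/source/address/@uuid" -

If the first query returns UUIDs and the second returns nothing, the devices
were dropped while the guest XML was rebuilt on resume, not by anything inside
the guest.)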
> -melanie
>
> [2] https://bugs.launchpad.net/nova/+bug/1900800
> [3] https://review.opendev.org/c/openstack/nova/+/810220
>
>> Thanks in advance.
>> Gustavo
>>
>> [1]
>> https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L8007
>>

From iurygregory at gmail.com Fri Oct 15 15:24:39 2021
From: iurygregory at gmail.com (Iury Gregory)
Date: Fri, 15 Oct 2021 17:24:39 +0200
Subject: [ironic] Yoga PTG schedule
In-Reply-To: 
References: 
Message-ID: 

Hello ironicers,

We have some changes in our schedule:

*Monday (18 Oct) - Room Juno 15:00 - 17:00 UTC*
* Support OpenBMC
* Persistent memory Support
* Redfish Host Connection Interface
* Boot from Volume + UEFI

*Tuesday (19 Oct) - Room Juno 14:00 - 17:00 UTC*
* The rise of composable hardware, again
* Self-configuring Ironic Service + Eliminate manual commands
* Is there any way we can drive a co-operative use mode of ironic amongst
some of the users?

*Wednesday (20 Oct) - Room Juno 14:00 - 16:00 UTC*
* Main operator areas of interest for improvement - documentation /
graphical console support / performance resource tracker benchmarking /
nova integration
* Bulk operations
* Prioritize 3rd party CI in a box

*Thursday (21 Oct) - Room Kilo 14:00 - 16:00 UTC*
* Secure RBAC items in Yoga
* having to go look at logs is an antipattern
* pxe-grub

*Friday (22 Oct) - Room Kilo 14:00 - 16:00 UTC*
* Remove instance (non-BFV, non-ramdisk) networking booting
* Direct SDN Integrations

The new schedule is already available in the etherpad [1]

[1] https://etherpad.opendev.org/p/ironic-yoga-ptg

On Fri, Oct 8, 2021 at 17:57, Iury Gregory wrote:

> Hello Ironicers!
>
> In our etherpad [1] we have 18 topics for this PTG and we have a total of
> 11 slots.
> This is the proposed schedule (we will discuss it in our upstream meeting
> on Monday).
>
> *Monday (18 Oct) - Room Juno 15:00 - 17:00 UTC*
> * Support OpenBMC
> * Persistent memory Support
> * Redfish Host Connection Interface
> * Boot from Volume + UEFI
>
> *Tuesday (19 Oct) - Room Juno 14:00 - 17:00 UTC*
> * Posting to placement ourselves
> * The rise of composable hardware, again
> * Self-configuring Ironic Service
> * Is there any way we can drive a co-operative use mode of ironic amongst
> some of the users?
>
> *Wednesday (20 Oct) - Room Juno 14:00 - 16:00 UTC*
> * Prioritize 3rd party CI in a box
> * Secure RBAC items in Yoga
> * Bulk operations
>
> *Thursday (21 Oct) - Room Kilo 14:00 - 16:00 UTC*
> * having to go look at logs is an antipattern
> * pxe-grub
> * Remove instance (non-BFV, non-ramdisk) networking booting
> * Direct SDN Integrations
>
> *Friday (22 Oct) - Room Kilo 14:00 - 16:00 UTC*
> * Eliminate manual commands
> * Certificate Management
> * Stopping use of wiki.openstack.org
>
> In case we don't have enough time we can book more slots if the community
> is ok and the slots are available.
> We will also have a section in the
> etherpad for last-minute topics =)
>
> [1] https://etherpad.opendev.org/p/ironic-yoga-ptg
>
> --
> *Att[]'s*
> *Iury Gregory Melo Ferreira*
> *MSc in Computer Science at UFCG*
> *Part of the ironic-core and puppet-manager-core team in OpenStack*
> *Software Engineer at Red Hat Czech*
> *Social*: https://www.linkedin.com/in/iurygregory
> *E-mail: iurygregory at gmail.com*

--
*Att[]'s*
*Iury Gregory Melo Ferreira*
*MSc in Computer Science at UFCG*
*Part of the ironic-core and puppet-manager-core team in OpenStack*
*Software Engineer at Red Hat Czech*
*Social*: https://www.linkedin.com/in/iurygregory
*E-mail: iurygregory at gmail.com*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Fri Oct 15 16:07:29 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 15 Oct 2021 11:07:29 -0500
Subject: [all][tc] What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min
Message-ID: <17c84b557a6.12930f0e21106266.7388077538937209855 at ghanshyammann.com>

Hello Everyone,

Here is this week's summary of the Technical Committee activities.

1. TC Meetings:
============
* The TC held this week's IRC meeting on Thursday, Oct 14th.
* Most of the meeting discussions are summarized below (Completed or in-progress activities section). Meeting full logs are available @
- https://meetings.opendev.org/meetings/tc/2021/tc.2021-10-14-15.00.log.html
* Next week's meeting is cancelled as we are meeting in PTG. We will have the next IRC meeting on Oct 28th, Thursday 15:00 UTC; feel free to add topics to the agenda[1] by Oct 27th.

2. What we completed this week:
=========================
* None this week.

3. Activities In progress:
==================
TC Tracker for Xena cycle
------------------------------
* TC is using the etherpad[2] for Xena cycle working items. We will be checking it in PTG.
* Current status is: 9 completed, 3 to-be-discussed in PTG, 1 in-progress.

Open Reviews
-----------------
* Six open reviews for ongoing activities[3].

New project 'Skyline' proposal
------------------------------------
* You might be aware of this new dashboard proposal in the previous month's
discussion.
* A new project 'Skyline: an OpenStack dashboard optimized by UI and UE' is
now proposed in governance to be an official OpenStack project[4].
* Skyline team is planning to meet in PTG on Tue, Wed and Thu at 5 UTC, please
ask your queries or have feedback/discussion with the team next week.

Place to maintain the external hosted ELK, E-R, O-H services
-------------------------------------------------------------------------
* We had a final discussion, or I will say just a status update, on this which was mentioned in last week's email summary[5].
* From now on, discussion and migration work will be done in the TACT SIG (#openstack-infra IRC channel).

Add project health check tool
-----------------------------------
* No updates on this; we will continue discussing it in PTG for the next steps and what to do with TC liaison things.
* Meanwhile, we are reviewing Rico's proposal on a stats-collecting tool [6].

Stable Core team process change
---------------------------------------
* The current proposal is under review[7]. Feel free to provide early feedback if you have any.

Call for 'Technical Writing' SIG Chair/Maintainers
----------------------------------------------------------
* As agreed in last week's TC meeting, we will be moving this SIG work towards TC.
* TC members have been added to the core members list in the SIG repos.
* We will be discussing where to move the training repos/work in PTG.

TC tags analysis
-------------------
* Operator feedback has also been asked for in the Open Infra newsletter, and we will continue the discussion in PTG and take the final decision based on the feedback we receive, if any[9].

Complete the policy pop up team
----------------------------------------
* The policy popup team has served its purpose and we have the new RBAC work as one of the community-wide goals for the Yoga cycle.
* We are marking this popup team as completed[10].

Project updates
-------------------
* Retiring js-openstack-lib [11]

Yoga release community-wide goal
-----------------------------------------
* Please add the possible candidates in this etherpad [12].
* Current status: "Secure RBAC" is selected for the Yoga cycle[13].

PTG planning
----------------
* We will be meeting in PTG next week; please check the details in this etherpad [14].
* Do not forget to join the TC+community leaders sessions on Monday, Oct 18 15 UTC - 17 UTC.

Test support for TLS default:
----------------------------------
* Rico has started a separate email thread over testing with tls-proxy enabled[15]; we encourage projects to participate in that testing and help to enable the tls-proxy in gate testing.

4. How to contact the TC:
====================
If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways:

1. Email: you can send the email with tag [tc] on openstack-discuss ML[16].
2. Weekly meeting: The Technical Committee conducts a weekly meeting every Thursday 15 UTC [17]
3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [18]
4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel.

[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://etherpad.opendev.org/p/tc-xena-tracke
[3] https://review.opendev.org/q/projects:openstack/governance+status:open
[4] https://review.opendev.org/c/openstack/governance/+/814037
[5] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025251.html
[6] https://review.opendev.org/c/openstack/governance/+/810037
[7] https://review.opendev.org/c/openstack/governance/+/810721
[9] https://governance.openstack.org/tc/reference/tags/index.html
[10] https://review.opendev.org/c/openstack/governance/+/814186
[11] https://review.opendev.org/c/openstack/governance/+/798540
[12] https://review.opendev.org/c/openstack/governance/+/807163
[13] https://etherpad.opendev.org/p/y-series-goals
[14] https://etherpad.opendev.org/p/tc-yoga-ptg
[15] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html
[16] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[17] http://eavesdrop.openstack.org/#Technical_Committee_Meeting
[18] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours

-gmann

From laurentfdumont at gmail.com Fri Oct 15 16:31:02 2021
From: laurentfdumont at gmail.com (Laurent Dumont)
Date: Fri, 15 Oct 2021 12:31:02 -0400
Subject: [ops] How to use hosts with no storage disks
In-Reply-To: <20211015125226.jkp6b53nzzypabnc at yuggoth.org>
References: <20211015125226.jkp6b53nzzypabnc at yuggoth.org>
Message-ID: 

If we break it down, I'm not sure a VM will be able to boot with no
volume/root disk from an image though?

I guess you could have the root VM drive all in RAM, but I don't think
that OpenStack understands that.
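(As a side note: Nova does handle the no-local-root-disk case for instances
via boot-from-volume. A rough sketch with the plain openstack CLI -- the image
"cirros", flavor "m1.small" and network "private" are placeholder names, and
the volume lives on whatever remote Cinder backend is configured:

    # create a bootable volume from an image on the Cinder backend
    openstack volume create --image cirros --size 10 boot-vol

    # boot the server from that volume instead of a local image copy
    openstack server create --flavor m1.small --network private \
        --volume boot-vol vm-no-local-disk

The instance's root disk is then served entirely from the Cinder backend, so
nothing image-related has to live on the compute node's own disks.)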
On Fri, Oct 15, 2021 at 8:55 AM Jeremy Stanley wrote: > On 2021-10-15 11:51:49 +0100 (+0100), A Monster wrote: > > In Openstack, is it possible to create compute nodes with no hard > > drives and use PXE in order to boot the host's system > [...] > > This question is outside the scope of OpenStack itself, unless > you're using another OpenStack deployment to manage the physical > servers (for example TripleO has an "undercloud" which uses Ironic > to manage the servers which then comprise the "overcloud" presented > to users). OpenStack's services start on already booted servers, so > you can in theory use any mechanism you like, including PXEboot, to > boot those physical servers. I understand OpenStack Ironic is a > great solution to this problem though, and can be set up entirely > stand-alone with its Bifrost installer. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Oct 15 16:56:30 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Oct 2021 16:56:30 +0000 Subject: [ops] How to use hosts with no storage disks In-Reply-To: References: <20211015125226.jkp6b53nzzypabnc@yuggoth.org> Message-ID: <20211015165630.vtx2khqluctlowh5@yuggoth.org> On 2021-10-15 12:31:02 -0400 (-0400), Laurent Dumont wrote: > If we break it down, I'm not sure a VM will be able to boot with > no volume/root disk from an image though? > > I guess you could have the root VM drive all in RAM, but I don't > think that Openstack understands that. [...] Well, the question seemed to be primarily about booting the underlying hardware (compute nodes) over the network. This is actually pretty commonly done, at least for provisioning, but could certainly also be used to get enough of a kernel running to find the root disk over iSCSI or whatever. As for the virtual machines (server instances), you can boot-from-volume and use any sort of remote storage Cinder supports, right? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jpenick at gmail.com Fri Oct 15 17:06:21 2021 From: jpenick at gmail.com (James Penick) Date: Fri, 15 Oct 2021 10:06:21 -0700 Subject: How to use hosts with no storage disks In-Reply-To: References: Message-ID: This is something we've talked about doing at Yahoo some day. There are three separate problems to solve: 1. Diskless booting the compute node off the network. Mechanically this is possible via a number of approaches. You'd have a ramdisk with the necessary components baked in, so once the ramdisk loaded you'd be in the OS. I'm not sure if this can be fully accomplished via Ironic as yet. I'd need to ask an Ironic expert to weigh in. 2. Configuration of the compute node. Either a CI job which is aware of the compute node coming up and pushing configuration via something like Ansible, or perhaps using cloud-init with the necessary pieces loaded into a config-drive image which is provided as a part of the boot process. If we can have Ironic manage diskless booting systems then this would be a solved problem with user data. 3. VM storage could either be "local" via a large ramdisk partition (assuming you have a sufficient quantity of ram in your compute nodes), an NFS share which is mounted to the compute node, or volume backed instances. We were investigating this earlier this year and got stuck on the third problem. 
Local storage via ramdisk isn't really an option for us, since we already pack our compute nodes with a lot of ram, and we need that memory for the instances. NFS has issues with security, since we don't want one giant volume exported to all compute nodes due to security concerns, and a per-compute node export would need to be orchestrated. Volume backed instances seemed ideal, however we ran into some issues there, which are partially related to the block storage product we use. I'm hopeful we'll get back to this next year, a class of instance flavors booted on diskless compute nodes would allow us to offer even more cost-effective options for our customers. -James On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote: > In Openstack, is it possible to create compute nodes with no hard drives > and use PXE in order to boot the host's system and therefore launch > instances with no local drive which is needed to boot the VM's image. > > If not, what's the minimum storage needed to be given to hosts in order to > get a fully functional system. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Oct 15 17:13:08 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 15 Oct 2021 19:13:08 +0200 Subject: How to use hosts with no storage disks In-Reply-To: References: Message-ID: Hi, On Fri, Oct 15, 2021 at 7:10 PM James Penick wrote: > This is something we've talked about doing at Yahoo some day. There are > three separate problems to solve: > > 1. Diskless booting the compute node off the network. Mechanically this is > possible via a number of approaches. You'd have a ramdisk with the > necessary components baked in, so once the ramdisk loaded you'd be in the > OS. I'm not sure if this can be fully accomplished via Ironic as yet. I'd > need to ask an Ironic expert to weigh in. > https://docs.openstack.org/ironic/latest/admin/ramdisk-boot.html Dmitry > 2. Configuration of the compute node. Either a CI job which is aware of > the compute node coming up and pushing configuration via something like > Ansible, or perhaps using cloud-init with the necessary pieces loaded into > a config-drive image which is provided as a part of the boot process. If we > can have Ironic manage diskless booting systems then this would be a solved > problem with user data. > 3. VM storage could either be "local" via a large ramdisk partition > (assuming you have a sufficient quantity of ram in your compute nodes), an > NFS share which is mounted to the compute node, or volume backed instances. > > We were investigating this earlier this year and got stuck on the third > problem. Local storage via ramdisk isn't really an option for us, since we > already pack our compute nodes with a lot of ram, and we need that memory > for the instances. NFS has issues with security, since we don't want one > giant volume exported to all compute nodes due to security concerns, and a > per-compute node export would need to be orchestrated. Volume backed > instances seemed ideal, however we ran into some issues there, which are > partially related to the block storage product we use. I'm hopeful we'll > get back to this next year, a class of instance flavors booted on diskless > compute nodes would allow us to offer even more cost-effective options for > our customers. 
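(To make the ramdisk-boot document above concrete, a minimal sketch of the
ironic side -- this assumes a node is already enrolled, the kernel/initramfs
URLs are placeholders, and your ironic.conf does not already enable the
interface:

    # ironic.conf on the conductor
    [DEFAULT]
    enabled_deploy_interfaces = direct,ramdisk

    # switch the node to the ramdisk deploy interface and point it at a
    # kernel/ramdisk pair; "deploying" then just network-boots the ramdisk
    openstack baremetal node set <node-uuid> --deploy-interface ramdisk
    openstack baremetal node set <node-uuid> \
        --instance-info kernel=http://example.com/compute.kernel \
        --instance-info ramdisk=http://example.com/compute.initramfs
    openstack baremetal node deploy <node-uuid>

No local disk is touched at any point, which covers problem 1 above; problems
2 and 3 still need config-drive/user data and remote VM storage respectively.)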
> -James
>
> On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote:
>
>> In Openstack, is it possible to create compute nodes with no hard drives
>> and use PXE in order to boot the host's system and therefore launch
>> instances with no local drive which is needed to boot the VM's image.
>>
>> If not, what's the minimum storage needed to be given to hosts in order
>> to get a fully functional system.

--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From elod.illes at est.tech Fri Oct 15 17:17:32 2021
From: elod.illes at est.tech (Előd Illés)
Date: Fri, 15 Oct 2021 19:17:32 +0200
Subject: [release] Release countdown for week R-24, Oct 11-15
Message-ID: <87c555e5-bb3d-2cc4-3ba1-a07e898fa9db at est.tech>

Welcome back to the release countdown emails! These will be sent at major
points in the Yoga development cycle, which should conclude with a final
release on March 30, 2022.

Development Focus
-----------------

At this stage in the release cycle, focus should be on planning the Yoga
development cycle, assessing Yoga community goals and approving Yoga specs.

General Information
-------------------

Yoga is a 25-week-long development cycle. In case you haven't seen it yet,
please take a look over the schedule for this release:
https://releases.openstack.org/yoga/schedule.html

By default, the team PTL is responsible for handling the release cycle and
approving release requests. This task can (and probably should) be delegated
to release liaisons. Now is a good time to review release liaison information
for your team and make sure it is up to date:
https://opendev.org/openstack/releases/src/branch/master/data/release_liaisons.yaml

By default, all your team deliverables from the Xena release are continued in
Yoga with a similar release model.

Upcoming Deadlines & Dates
--------------------------

Yoga PTG: October 18-22
Yoga-1 milestone: November 18, 2021

Előd Illés
irc: elodilles
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org Fri Oct 15 17:23:08 2021
From: zigo at debian.org (Thomas Goirand)
Date: Fri, 15 Oct 2021 19:23:08 +0200
Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min]
In-Reply-To: <17c84b557a6.12930f0e21106266.7388077538937209855 at ghanshyammann.com>
References: <17c84b557a6.12930f0e21106266.7388077538937209855 at ghanshyammann.com>
Message-ID: <446265bd-eb57-a5a6-2f5d-937c6cdad372 at debian.org>

On 10/15/21 6:07 PM, Ghanshyam Mann wrote:
> New project 'Skyline' proposal
> ------------------------------------
> * You might be aware of this new dashboard proposal in the previous month's
> discussion.
> * A new project 'Skyline: an OpenStack dashboard optimized by UI and UE' is
> now proposed in governance to be an official OpenStack project[4].
> * Skyline team is planning to meet in PTG on Tue, Wed and Thu at 5 UTC, please
> ask your queries or have feedback/discussion with the team next week.

Skyline looks nice. However, looking nice isn't enough. Before it
becomes an official OpenStack component, maybe it should first try to
reach our standards. I'm namely thinking about having a proper setuptools
integration (using PBR?), for example, and starting to tag releases.
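(For readers who haven't used it, the "proper setuptools integration using
PBR" mentioned above boils down to roughly the following sketch -- the project
name and metadata here are placeholders, not Skyline's actual files:

    # setup.py
    import setuptools

    setuptools.setup(
        setup_requires=['pbr'],  # pbr derives version info from git tags
        pbr=True,                # remaining metadata comes from setup.cfg
    )

    # setup.cfg
    [metadata]
    name = skyline-apiserver
    summary = API server for the Skyline dashboard

    [files]
    packages =
        skyline_apiserver

With that in place, sdist/wheel builds and "pip install ." behave like any
other OpenStack deliverable, and release versions come from git tags -- which
is why tagging releases matters so much to distribution packagers.)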
I'm very much interested in packaging this for Debian/Ubuntu, if it's
not a JS dependency hell. Though the current Makefile thingy doesn't
look appealing.

I've seen the console has at least 40 direct JS dependencies. How many
indirect dependencies does that bring in? Has anyone looked into it?

Is the team ready to help making it packageable in a distro policy
compliant way?

Your thoughts?

Cheers,

Thomas Goirand (zigo)

From amonster369 at gmail.com Fri Oct 15 18:15:54 2021
From: amonster369 at gmail.com (A Monster)
Date: Fri, 15 Oct 2021 19:15:54 +0100
Subject: How to use hosts with no storage disks
In-Reply-To: 
References: 
Message-ID: 

As far as I know, ironic aims to provision bare metal machines instead of
virtual machines; in my case, what I want to accomplish is to boot the
host's operating system through the network, and then use either a remote
disk into which the image service copies the VM's image and boot from that
image, or, if it's possible, use the RAM instead of a disk for that task,
and that would allow me to use diskless compute nodes (hosts).

On Fri, 15 Oct 2021 at 18:06, James Penick wrote:

> This is something we've talked about doing at Yahoo some day. There are
> three separate problems to solve:
>
> 1. Diskless booting the compute node off the network. Mechanically this
> is possible via a number of approaches. You'd have a ramdisk with the
> necessary components baked in, so once the ramdisk loaded you'd be in the
> OS. I'm not sure if this can be fully accomplished via Ironic as yet. I'd
> need to ask an Ironic expert to weigh in.
> 2. Configuration of the compute node. Either a CI job which is aware of
> the compute node coming up and pushing configuration via something like
> Ansible, or perhaps using cloud-init with the necessary pieces loaded into
> a config-drive image which is provided as a part of the boot process. If we
> can have Ironic manage diskless booting systems then this would be a solved
> problem with user data.
> 3. VM storage could either be "local" via a large ramdisk partition
> (assuming you have a sufficient quantity of ram in your compute nodes), an
> NFS share which is mounted to the compute node, or volume backed instances.
>
> We were investigating this earlier this year and got stuck on the third
> problem. Local storage via ramdisk isn't really an option for us, since we
> already pack our compute nodes with a lot of ram, and we need that memory
> for the instances. NFS has issues with security, since we don't want one
> giant volume exported to all compute nodes due to security concerns, and a
> per-compute node export would need to be orchestrated. Volume backed
> instances seemed ideal, however we ran into some issues there, which are
> partially related to the block storage product we use. I'm hopeful we'll
> get back to this next year, a class of instance flavors booted on diskless
> compute nodes would allow us to offer even more cost-effective options for
> our customers.
>
> -James
>
> On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote:
>
>> In Openstack, is it possible to create compute nodes with no hard drives
>> and use PXE in order to boot the host's system and therefore launch
>> instances with no local drive which is needed to boot the VM's image.
>>
>> If not, what's the minimum storage needed to be given to hosts in order
>> to get a fully functional system.
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jpenick at gmail.com Fri Oct 15 18:18:02 2021 From: jpenick at gmail.com (James Penick) Date: Fri, 15 Oct 2021 11:18:02 -0700 Subject: How to use hosts with no storage disks In-Reply-To: References: Message-ID: You are correct, I meant you would use Ironic to provision the compute node, which Nova would then use to provision VMs. On Fri, Oct 15, 2021 at 11:16 AM A Monster wrote: > As far as I know, ironic aims to provision bare metal machines instead of > virtual machines, in my case, what I want to accomplish is to boot the > host's operating system through network, and then use either a remote disk > in which the image service copies the vm's image to, and then boot from > that image, or if it's possible, use the ram instead of a disk for that > task, and that would allow me to use diskless computer nodes (hosts). > > > > On Fri, 15 Oct 2021 at 18:06, James Penick wrote: > >> This is something we've talked about doing at Yahoo some day. There are >> three separate problems to solve: >> >> 1. Diskless booting the compute node off the network. Mechanically this >> is possible via a number of approaches. You'd have a ramdisk with the >> necessary components baked in, so once the ramdisk loaded you'd be in the >> OS. I'm not sure if this can be fully accomplished via Ironic as yet. I'd >> need to ask an Ironic expert to weigh in. >> 2. Configuration of the compute node. Either a CI job which is aware of >> the compute node coming up and pushing configuration via something like >> Ansible, or perhaps using cloud-init with the necessary pieces loaded into >> a config-drive image which is provided as a part of the boot process. If we >> can have Ironic manage diskless booting systems then this would be a solved >> problem with user data. >> 3. VM storage could either be "local" via a large ramdisk partition >> (assuming you have a sufficient quantity of ram in your compute nodes), an >> NFS share which is mounted to the compute node, or volume backed instances. >> >> We were investigating this earlier this year and got stuck on the third >> problem. Local storage via ramdisk isn't really an option for us, since we >> already pack our compute nodes with a lot of ram, and we need that memory >> for the instances. NFS has issues with security, since we don't want one >> giant volume exported to all compute nodes due to security concerns, and a >> per-compute node export would need to be orchestrated. Volume backed >> instances seemed ideal, however we ran into some issues there, which are >> partially related to the block storage product we use. I'm hopeful we'll >> get back to this next year, a class of instance flavors booted on diskless >> compute nodes would allow us to offer even more cost-effective options for >> our customers. >> >> -James >> >> >> On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote: >> >>> In Openstack, is it possible to create compute nodes with no hard >>> drives and use PXE in order to boot the host's system and therefore launch >>> instances with no local drive which is needed to boot the VM's image. >>> >>> If not, what's the minimum storage needed to be given to hosts in order >>> to get a fully functional system. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From DHilsbos at performair.com Fri Oct 15 20:35:30 2021
From: DHilsbos at performair.com (DHilsbos at performair.com)
Date: Fri, 15 Oct 2021 20:35:30 +0000
Subject: How to use hosts with no storage disks
In-Reply-To: 
References: 
Message-ID: <0670B960225633449A24709C291A525251CB632A at COM03.performair.local>

Issue 3, as laid out below, can be addressed using Ceph RBD. Use it behind
cinder & glance, and no local storage is required. Our OpenStack cluster has
small OS drives, and doesn't store either volumes or images locally.

Thank you,

Dominic L. Hilsbos, MBA
Vice President - Information Technology
Perform Air International Inc.
DHilsbos at PerformAir.com
www.PerformAir.com

From: James Penick [mailto:jpenick at gmail.com]
Sent: Friday, October 15, 2021 11:18 AM
To: A Monster
Cc: openstack-discuss
Subject: Re: How to use hosts with no storage disks

You are correct, I meant you would use Ironic to provision the compute node,
which Nova would then use to provision VMs.

On Fri, Oct 15, 2021 at 11:16 AM A Monster wrote:
As far as I know, ironic aims to provision bare metal machines instead of
virtual machines; in my case, what I want to accomplish is to boot the host's
operating system through the network, and then use either a remote disk into
which the image service copies the VM's image and boot from that image, or,
if it's possible, use the RAM instead of a disk for that task, and that would
allow me to use diskless compute nodes (hosts).

On Fri, 15 Oct 2021 at 18:06, James Penick wrote:
This is something we've talked about doing at Yahoo some day. There are three
separate problems to solve:

1. Diskless booting the compute node off the network. Mechanically this is
possible via a number of approaches. You'd have a ramdisk with the necessary
components baked in, so once the ramdisk loaded you'd be in the OS. I'm not
sure if this can be fully accomplished via Ironic as yet. I'd need to ask an
Ironic expert to weigh in.
2. Configuration of the compute node. Either a CI job which is aware of the
compute node coming up and pushing configuration via something like Ansible,
or perhaps using cloud-init with the necessary pieces loaded into a
config-drive image which is provided as a part of the boot process. If we can
have Ironic manage diskless booting systems then this would be a solved
problem with user data.
3. VM storage could either be "local" via a large ramdisk partition (assuming
you have a sufficient quantity of ram in your compute nodes), an NFS share
which is mounted to the compute node, or volume backed instances.

We were investigating this earlier this year and got stuck on the third
problem. Local storage via ramdisk isn't really an option for us, since we
already pack our compute nodes with a lot of ram, and we need that memory for
the instances. NFS has issues with security, since we don't want one giant
volume exported to all compute nodes due to security concerns, and a
per-compute node export would need to be orchestrated. Volume backed
instances seemed ideal, however we ran into some issues there, which are
partially related to the block storage product we use. I'm hopeful we'll get
back to this next year, a class of instance flavors booted on diskless
compute nodes would allow us to offer even more cost-effective options for
our customers.

-James

On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote:
In OpenStack, is it possible to create compute nodes with no hard drives and
use PXE in order to boot the host's system and therefore launch instances
with no local drive, which is needed to boot the VM's image.

If not, what's the minimum storage needed to be given to hosts in order to
get a fully functional system.

From amonster369 at gmail.com Sat Oct 16 20:44:45 2021
From: amonster369 at gmail.com (A Monster)
Date: Sat, 16 Oct 2021 21:44:45 +0100
Subject: The best linux distribution on which to deploy Openstack
Message-ID: 

As a CentOS 7 user I have much experience using this distribution; however,
CentOS 7 doesn't support the newest OpenStack releases (after Train), and
CentOS 8 will soon lose support from Red Hat since its EOL is scheduled for
31/12/2021, and the CentOS Stream distributions are upstreams for RHEL and
therefore most likely unstable.

So which distribution should I use?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org Sat Oct 16 22:17:01 2021
From: zigo at debian.org (Thomas Goirand)
Date: Sun, 17 Oct 2021 00:17:01 +0200
Subject: The best linux distribution on which to deploy Openstack
In-Reply-To: 
References: 
Message-ID: <61626810-32c6-e62a-5736-dc56ed82eff8 at debian.org>

On 10/16/21 10:44 PM, A Monster wrote:
> As a CentOS 7 user I have much experience using this distribution;
> however, CentOS 7 doesn't support the newest OpenStack releases (after
> Train), and CentOS 8 will soon lose support from Red Hat since its EOL
> is scheduled for 31/12/2021, and the CentOS Stream distributions are
> upstreams for RHEL and therefore most likely unstable.
>
> So which distribution should I use?

Debian? :)

Thomas

From Charles.Short at windriver.com Sat Oct 16 23:01:47 2021
From: Charles.Short at windriver.com (Short, Charles)
Date: Sat, 16 Oct 2021 23:01:47 +0000
Subject: The best linux distribution on which to deploy Openstack
In-Reply-To: 
References: 
Message-ID: 
-James On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote: In?Openstack, is it possible to create compute nodes with no hard drives and use PXE in order to boot the host's system and therefore launch instances with no local drive which is needed to boot the VM's image. If not, what's the minimum storage needed to be given to hosts in order to get a fully functional system. From amonster369 at gmail.com Sat Oct 16 20:44:45 2021 From: amonster369 at gmail.com (A Monster) Date: Sat, 16 Oct 2021 21:44:45 +0100 Subject: The best linux distribution on which to deploy Openstack Message-ID: As a centos 7 user I have many experience using this distribution however centos 7 doesn't support the newest openstack releases ( after train ) and centos 8 will soon lose the support from Redhat since it's EOL is scheduled for 31/12/2021 and the centos stream distributions are upstreams for RHEL therefor is most likely unstable. So which distribution should I use ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Oct 16 22:17:01 2021 From: zigo at debian.org (Thomas Goirand) Date: Sun, 17 Oct 2021 00:17:01 +0200 Subject: The best linux distribution on which to deploy Openstack In-Reply-To: References: Message-ID: <61626810-32c6-e62a-5736-dc56ed82eff8@debian.org> On 10/16/21 10:44 PM, A Monster wrote: > As a centos 7 user I have many experience?using this distribution > however centos 7 doesn't support the newest openstack releases?( after > train ) and centos 8 will soon lose the support from Redhat since it's > EOL is scheduled for 31/12/2021 and the centos stream distributions are > upstreams for RHEL therefor is most likely unstable. > > So which distribution should I use ??? Debian? :) Thomas From Charles.Short at windriver.com Sat Oct 16 23:01:47 2021 From: Charles.Short at windriver.com (Short, Charles) Date: Sat, 16 Oct 2021 23:01:47 +0000 Subject: The best linux distribution on which to deploy Openstack In-Reply-To: References: Message-ID: From: A Monster Sent: Saturday, October 16, 2021 4:45 PM To: openstack-discuss at lists.openstack.org Subject: The best linux distribution on which to deploy Openstack [Please note: This e-mail is from an EXTERNAL e-mail address] As a centos 7 user I have many experience using this distribution however centos 7 doesn't support the newest openstack releases ( after train ) and centos 8 will soon lose the support from Redhat since it's EOL is scheduled for 31/12/2021 and the centos stream distributions are upstreams for RHEL therefor is most likely unstable. So which distribution should I use ? The answer is to use the one your are the most comfortable with. They all do the same thing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sun Oct 17 07:19:43 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 17 Oct 2021 09:19:43 +0200 Subject: The best linux distribution on which to deploy Openstack In-Reply-To: References: Message-ID: On Sat, 16 Oct 2021 at 22:46, A Monster wrote: > > As a centos 7 user I have many experience using this distribution however centos 7 doesn't support the newest openstack releases ( after train ) and centos 8 will soon lose the support from Redhat since it's EOL is scheduled for 31/12/2021 and the centos stream distributions are upstreams for RHEL therefor is most likely unstable. > > So which distribution should I use ? 
Use the one that you are most familiar/comfortable with and that is
supported by OpenStack deployment projects.

For example, with Kolla Ansible, at the moment, you can choose from
CentOS Stream 8, Debian Bullseye and Ubuntu 20.04 (sorted
alphabetically; all have equal support).
Soon, it will support Rocky Linux 8 as well (and then newer releases
as they start coming).

Kolla Ansible docs for Xena: https://docs.openstack.org/kolla-ansible/xena/

-yoctozepto

From radoslaw.piliszek at gmail.com Sun Oct 17 08:30:06 2021
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Sun, 17 Oct 2021 10:30:06 +0200
Subject: The best linux distribution on which to deploy Openstack
In-Reply-To: 
References: 
Message-ID: 

On Sun, 17 Oct 2021 at 10:03, A Monster wrote:
>
> What about CentOS 7, what are the OpenStack releases that it supports?

You have said that already. Train is the latest release on CentOS 7.

-yoctozepto

> On Sun, 17 Oct 2021 at 08:19, Radosław Piliszek wrote:
>>
>> On Sat, 16 Oct 2021 at 22:46, A Monster wrote:
>> >
>> > As a CentOS 7 user I have much experience using this distribution; however, CentOS 7 doesn't support the newest OpenStack releases (after Train), and CentOS 8 will soon lose support from Red Hat since its EOL is scheduled for 31/12/2021, and the CentOS Stream distributions are upstreams for RHEL and therefore most likely unstable.
>> >
>> > So which distribution should I use?
>>
>> Use the one that you are most familiar/comfortable with and that is
>> supported by OpenStack deployment projects.
>>
>> For example, with Kolla Ansible, at the moment, you can choose from
>> CentOS Stream 8, Debian Bullseye and Ubuntu 20.04 (sorted
>> alphabetically; all have equal support).
>> Soon, it will support Rocky Linux 8 as well (and then newer releases
>> as they start coming).
>>
>> Kolla Ansible docs for Xena: https://docs.openstack.org/kolla-ansible/xena/
>>
>> -yoctozepto

From seenafallah at gmail.com Sat Oct 16 21:24:27 2021
From: seenafallah at gmail.com (Seena Fallah)
Date: Sun, 17 Oct 2021 00:54:27 +0330
Subject: [dev][cinder] snapshot revert to any point
Message-ID: 

Hi,

There is a lack of a feature to revert to any snapshot point in supported
drivers like RBD. I've made a change to support this feature. Can someone
please review them?

https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/812032
https://review.opendev.org/c/openstack/cinder/+/806807

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From qiujunting at inspur.com Mon Oct 18 02:23:53 2021
From: qiujunting at inspur.com (Juntingqiu Qiujunting (邱军婷))
Date: Mon, 18 Oct 2021 02:23:53 +0000
Subject: [Sahara] Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021.
Message-ID: 

Hi all

I'm very sorry: I missed the scheduled Sahara PTG meeting time.

We tentatively schedule the Sahara project PTG meeting from 3:00 to 5:00 PM
on October 19, 2021, using the IRC channel #openstack-sahara.

My topics are as follows:
1. Sahara supports the creation of cloud hosts by specifying system volumes.
2. Sahara deploys a dedicated cluster through cloud host VM tools
(qemu-guest-agent).

From: Juntingqiu Qiujunting (邱军婷)
Sent: Friday, September 24, 2021 18:05
To: 'jeremyfreudberg at gmail.com'; Faling Rui (芮法灵)
; 'ltoscano at redhat.com'
Cc: 'openstack-discuss at lists.openstack.org'
Subject: [Sahara] Currently about the development of the Sahara community there are some points

Hi all:

Currently, about the development of the Sahara community, there are some
points as follows:

1. About the schedule of the regular meeting of the Sahara project? What is
your suggestion? How about a regular meeting time every Wednesday afternoon
from 15:00 to 16:30?

2. Regarding the Sahara project maintenance switch from StoryBoard to
Launchpad.
https://storyboard.openstack.org/
https://blueprints.launchpad.net/openstack/
The reasons are as follows:
1. OpenStack core projects are maintained on Launchpad, such as nova,
cinder, neutron, etc.
2. Most OpenStack contributors are used to working on Launchpad.

3. Do you have any suggestions? If you think this is feasible, I will post
this content in the Sahara community later.

Thank you for your help. Thank you Fossen.

---------------------------------
Fossen Qiu | 邱军婷
CBRD | Cloud Computing and Big Data R&D Department
T: 18249256272
E: qiujunting at inspur.com

Original Mail
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 3519 bytes
Desc: image001.jpg
URL: 

From gao.hanxiang at 99cloud.net Mon Oct 18 03:22:11 2021
From: gao.hanxiang at 99cloud.net (高瀚翔)
Date: Mon, 18 Oct 2021 11:22:11 +0800
Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min]
In-Reply-To: <446265bd-eb57-a5a6-2f5d-937c6cdad372 at debian.org>
References: <17c84b557a6.12930f0e21106266.7388077538937209855 at ghanshyammann.com> <446265bd-eb57-a5a6-2f5d-937c6cdad372 at debian.org>
Message-ID: <93C08133-972B-44BE-9F2A-661A1B86651F at 99cloud.net>

Skyline-apiserver is a pure Python code project, following the Python wheel
packaging standard, using pip for installation, and using poetry for the
project's dependency management[1].

Skyline-console uses npm for dependency management, development and testing.
During packaging and distribution, webpack first processes the source code
and the dependent library code, and outputs the packaged static resource
files. These static resource files are stored in an otherwise empty Python
module[2]. The file directory is, for example:

- skyline_console
  - __init__.py
  - __main__.py
  - static
    - index.html
    - some_a.css
    - some_b.js
    ...

Pack this empty module in a Python wheel, and additionally include these
static resources as "data_files"[3][4][5], so that it can be distributed
like a normal Python package without having to deal with JS dependencies.

When deploying with Nginx and you need to fill in the static resource path,
use "python -m skyline_console" to find it.

There is a packed skyline package[6] on "tarballs.opendev.org" for you to
preview.

[1] https://python-poetry.org/
[2] https://opendev.org/skyline/skyline-console/src/branch/master/Makefile#L73-L77
[3] https://packaging.python.org/guides/distributing-packages-using-setuptools/#data-files
[4] https://setuptools.pypa.io/en/latest/deprecated/distutils/setupscript.html#distutils-additional-files
[5] https://opendev.org/skyline/skyline-console/src/branch/master/pyproject.toml#L6
[6] https://tarballs.opendev.org/skyline/skyline-apiserver/

> On Oct 16, 2021, at 01:23, Thomas Goirand wrote:
>
> On 10/15/21 6:07 PM, Ghanshyam Mann wrote:
>> New project 'Skyline' proposal
>> ------------------------------------
>> * You might be aware of this new dashboard proposal in the previous month's
>> discussion.
>> * A new project 'Skyline: an OpenStack dashboard optimized by UI and UE' is
>> now proposed in governance to be an official OpenStack project[4].
>> * Skyline team is planning to meet in PTG on Tue, Wed and Thu at 5 UTC, please
>> ask your queries or have feedback/discussion with the team next week.
>
> Skyline looks nice. However, looking nice isn't enough. Before it
> becomes an official OpenStack component, maybe it should first try to
> reach our standards. I'm namely thinking about having a proper setuptools
> integration (using PBR?), for example, and starting to tag releases.
>
> I'm very much interested in packaging this for Debian/Ubuntu, if it's
> not a JS dependency hell. Though the current Makefile thingy doesn't
> look appealing.
>
> I've seen the console has at least 40 direct JS dependencies. How many
> indirect dependencies does that bring in? Has anyone looked into it?
>
> Is the team ready to help making it packageable in a distro policy
> compliant way?
>
> Your thoughts?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From swogatpradhan22 at gmail.com Mon Oct 18 06:42:30 2021
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Mon, 18 Oct 2021 12:12:30 +0530
Subject: [Openstack-victoria] [LIBVIRT] Live migration doesn't work, error in libvirt
Message-ID: 

Hi,
I am using OpenStack Victoria and I am facing an issue when using the
live-migration feature.
After choosing live migration of an instance from compute1 to compute2,
I am getting an error in nova-compute.log (of compute1) stating:

*ERROR nova.virt.libvirt.driver [-] [instance:
59c95d46-2cbc-4787-89f7-8b36b826ffad] Live Migration failure: operation
failed: Failed to connect to remote libvirt URI qemu+tcp://compute2/system:
unable to connect to server at 'compute2:16509': Connection refused:
libvirt.libvirtError: operation failed: Failed to connect to remote libvirt
URI qemu+tcp://compute2/system: unable to connect to server at
'compute2:16509': Connection refused*

This states that the libvirtd TCP socket is not listening (libvirtd must
listen on port 16509 in order for live migration over qemu+tcp to succeed).

journalctl -xe output:

*Oct 18 06:32:52 compute2 systemd[1]: libvirtd-tcp.socket: Socket service
libvirtd.service already active, refusing.
Oct 18 06:32:52 compute2 systemd[1]: Failed to listen on Libvirt non-TLS IP socket.*

I tried to solve the above issue by adding --listen to the libvirtd_opts
parameter in the service file and also in the /etc/default/libvirtd file,
but after doing that the libvirtd service doesn't start.

Can someone suggest a way forward for this?

Thank you
With regards,
Swogat Pradhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From swogatpradhan22 at gmail.com Mon Oct 18 06:51:23 2021
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Mon, 18 Oct 2021 12:21:23 +0530
Subject: [SOLVED] [Openstack-victoria] [LIBVIRT] Live migration doesn't work, error in libvirt
In-Reply-To: 
References: 
Message-ID: 

The issue was in the starting order of the services.
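(Some context on why --listen failed: recent libvirt packages rely on systemd
socket activation, so a manually added --listen flag conflicts with the
socket units, and the "already active, refusing" message above means the TCP
socket unit cannot take over port 16509 while libvirtd is already running. A
sketch of the persistent form of the fix below, assuming a systemd-based host
and that --listen has been removed from libvirtd_opts again:

    systemctl stop libvirtd.service
    systemctl enable --now libvirtd-tcp.socket   # owns port 16509 from boot on
    systemctl start libvirtd.service

This keeps the start ordering right across reboots, instead of only for the
current boot.)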
To fix the issue, stop the libvirtd service and then start the socket unit: systemctl start libvirtd-tcp.socket

On Mon, Oct 18, 2021 at 12:12 PM Swogat Pradhan wrote:
> Hi,
> I am using openstack victoria and I am facing an issue when using the live-migration feature.
> When I live-migrate an instance from compute1 to compute2, I get an error in nova-compute.log (of compute1) stating:
>
> *ERROR nova.virt.libvirt.driver [-] [instance: 59c95d46-2cbc-4787-89f7-8b36b826ffad] Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+tcp://compute2/system: unable to connect to server at 'compute2:16509': Connection refused: libvirt.libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tcp://compute2/system: unable to connect to server at 'compute2:16509': Connection refused*
>
> This states that the libvirtd-tcp socket is not listening (libvirtd must listen on port 16509 in order for live migration to succeed).
>
> Journalctl -xe output:
>
> *Oct 18 06:32:52 compute2 systemd[1]: libvirtd-tcp.socket: Socket service libvirtd.service already active, refusing.
> Oct 18 06:32:52 compute2 systemd[1]: Failed to listen on Libvirt non-TLS IP socket.*
>
> I tried to solve the above issue by adding --listen to the libvirtd_opts parameter in the service file and also in the /etc/default/libvirtd file, but after doing that the libvirtd service doesn't start.
>
> Can someone suggest a way forward for this?
>
> Thank you
> With regards,
> Swogat Pradhan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vikarnatathe at gmail.com Mon Oct 18 06:58:31 2021
From: vikarnatathe at gmail.com (Vikarna Tathe)
Date: Mon, 18 Oct 2021 12:28:31 +0530
Subject: Openstack magnum
Message-ID: 

Hello All,
I am trying to create a kubernetes cluster using magnum. Image: fedora-coreos.
The stack gets stuck in CREATE_IN_PROGRESS. See the output below.
openstack coe cluster list
+--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
| uuid                                 | name           | keypair | node_count | master_count | status             | health_status |
+--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
| 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | 2          | 1            | CREATE_IN_PROGRESS | None          |
+--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+

openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters
+------------------------+--------------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                            |
+------------------------+--------------------------------------------------------------------------------------------------+
| attributes             | {'refs_map': None, 'removed_rsrc_list': [], 'attributes': None, 'refs': None}                    |
| creation_time          | 2021-10-18T06:44:02Z                                                                             |
| description            |                                                                                                  |
| links                  | [{'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', 'rel': 'self'}, {'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', 'rel': 'stack'}, {'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', 'rel': 'nested'}] |
| logical_resource_id    | kube_masters                                                                                     |
| physical_resource_id   | 3da2083f-0b2c-4b9d-8df5-8468e0de3028                                                             |
| required_by            | ['kube_cluster_deploy', 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] |
| resource_name          | kube_masters                                                                                     |
| resource_status        | CREATE_IN_PROGRESS                                                                               |
| resource_status_reason | state changed                                                                                    |
| resource_type          | OS::Heat::ResourceGroup                                                                          |
| updated_time           | 2021-10-18T06:44:02Z                                                                             |
+------------------------+--------------------------------------------------------------------------------------------------+

Vikarna
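P.S. For anyone else hitting this, here is how I have been drilling into the nested stacks for the real error (a sketch only: "stack failures list" assumes a reasonably recent python-heatclient, and the master server name is the one that later shows up in "openstack server list"):

  # walk the nested ResourceGroup looking for a failed or stuck resource
  openstack stack resource list --nested-depth 2 k8s-cluster-01-2nyejxo3hyvb
  # summarize any FAILED resources together with their status reasons
  openstack stack failures list k8s-cluster-01-2nyejxo3hyvb
  # once the master VM boots, its console log shows ignition/heat-agent errors
  openstack console log show k8s-cluster-01-2nyejxo3hyvb-master-0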
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From syedammad83 at gmail.com Mon Oct 18 07:28:44 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Mon, 18 Oct 2021 12:28:44 +0500
Subject: [xena][glance] Upgrade to Xena Shows Error
Message-ID: 

Hi,
I am trying to upgrade glance from wallaby to xena. The package upgrade goes through successfully, but when I do the database upgrade it shows me the error below. Can you guys please advise on it?

su -s /bin/bash glance -c "glance-manage db_upgrade"
2021-10-18 12:23:59.852 20534 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:314
2021-10-18 12:23:59.868 20534 CRITICAL glance [-] Unhandled error: TypeError: argument of type 'NoneType' is not iterable
2021-10-18 12:23:59.868 20534 ERROR glance Traceback (most recent call last):
2021-10-18 12:23:59.868 20534 ERROR glance   File "/usr/bin/glance-manage", line 10, in 
2021-10-18 12:23:59.868 20534 ERROR glance     sys.exit(main())
2021-10-18 12:23:59.868 20534 ERROR glance   File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 557, in main
2021-10-18 12:23:59.868 20534 ERROR glance     return CONF.command.action_fn()
2021-10-18 12:23:59.868 20534 ERROR glance   File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 385, in upgrade
2021-10-18 12:23:59.868 20534 ERROR glance     self.command_object.upgrade(CONF.command.version)
2021-10-18 12:23:59.868 20534 ERROR glance   File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 127, in upgrade
2021-10-18 12:23:59.868 20534 ERROR glance     self._sync(version)
2021-10-18 12:23:59.868 20534 ERROR glance   File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 176, in _sync
2021-10-18 12:23:59.868 20534 ERROR glance     alembic_command.upgrade(a_config, version)
2021-10-18 12:23:59.868 20534 ERROR glance   File "/usr/lib/python3/dist-packages/alembic/command.py", line 277, in upgrade
2021-10-18 12:23:59.868 20534 ERROR glance     if ":" in revision:
2021-10-18 12:23:59.868 20534 ERROR glance TypeError: argument of type 'NoneType' is not iterable
2021-10-18 12:23:59.868 20534 ERROR glance

However, the db_sync was successful and below is the DB version detail.

su -s /bin/bash glance -c "glance-manage db_version"
2021-10-18 12:25:14.780 20683 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:314
2021-10-18 12:25:14.783 20683 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2021-10-18 12:25:14.784 20683 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
wallaby_contract01

su -s /bin/bash glance -c "glance-manage db_sync"
2021-10-18 12:25:25.773 20712 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:314
2021-10-18 12:25:25.776 20712 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2021-10-18 12:25:25.776 20712 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
Database is up to date. No migrations needed.
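One observation on the traceback itself: db_upgrade ends up calling alembic_command.upgrade(a_config, version) with version still None, which is exactly what trips alembic's "if ':' in revision" check. As a sketch of what I plan to try next: the expand/migrate/contract subcommands below are what I remember from "glance-manage --help", so please verify them there before running:

  su -s /bin/bash glance -c "glance-manage db expand"    # add new schema bits
  su -s /bin/bash glance -c "glance-manage db migrate"   # move data
  su -s /bin/bash glance -c "glance-manage db contract"  # drop old schema bits
  su -s /bin/bash glance -c "glance-manage db version"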
--
Regards,
Syed Ammad Ali

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bkslash at poczta.onet.pl Mon Oct 18 08:19:19 2021
From: bkslash at poczta.onet.pl (Adam Tomas)
Date: Mon, 18 Oct 2021 10:19:19 +0200
Subject: How to force kolla-ansible to rotate logs with given interval [kolla-ansible]
Message-ID: 

Hi,
different services in kolla-ansible have different log rotation policies and I'd like to make the logs easier to maintain and search (I tried central logging with Kibana, but somehow I don't like that solution). So I tried to write a common config file for all logs.
As I understand it, all logs should be rotated by the cron container, inside of which there's a logrotate.conf file (and as far as I can see, the logs are rotated according to this file). So I copied this file, modified it according to my needs and put it in /etc/kolla/config under the name cron-logrotate-global.conf (as the documentation says). And... nothing. I checked the permissions of this file - everything seems to be ok, so what's the problem? Below is my logrotate.conf file.

Best regards,
Adam Tomas

cat /etc/kolla/config/cron-logrotate-global.conf

daily
rotate 31
copytruncate
compress
delaycompress
notifempty
missingok
minsize 0M
maxsize 100M
su root kolla
"/var/log/kolla/ansible.log" { }
"/var/log/kolla/aodh/*.log" { }
"/var/log/kolla/barbican/*.log" { }
"/var/log/kolla/ceilometer/*.log" { }
"/var/log/kolla/chrony/*.log" { }
"/var/log/kolla/cinder/*.log" { }
"/var/log/kolla/cloudkitty/*.log" { }
"/var/log/kolla/designate/*.log" { }
"/var/log/kolla/elasticsearch/*.log" { }
"/var/log/kolla/fluentd/*.log" { }
"/var/log/kolla/glance/*.log" { }
"/var/log/kolla/haproxy/haproxy.log" { }
"/var/log/kolla/heat/*.log" { }
"/var/log/kolla/horizon/*.log" { }
"/var/log/kolla/influxdb/*.log" { }
"/var/log/kolla/iscsi/iscsi.log" { }
"/var/log/kolla/kafka/*.log" { }
"/var/log/kolla/keepalived/keepalived.log" { }
"/var/log/kolla/keystone/*.log" { }
"/var/log/kolla/kibana/*.log" { }
"/var/log/kolla/magnum/*.log" { }
"/var/log/kolla/mariadb/*.log" { }
"/var/log/kolla/masakari/*.log" { }
"/var/log/kolla/monasca/*.log" { }
"/var/log/kolla/neutron/*.log"
{
postrotate
chmod 644 /var/log/kolla/neutron/*.log
endscript
}
"/var/log/kolla/nova/*.log" { }
"/var/log/kolla/octavia/*.log" { }
"/var/log/kolla/rabbitmq/*.log" { }
"/var/log/kolla/rally/*.log" { }
"/var/log/kolla/skydive/*.log" { }
"/var/log/kolla/storm/*.log" { }
"/var/log/kolla/swift/*.log" { }
"/var/log/kolla/vitrage/*.log" { }
"/var/log/kolla/zookeeper/*.log" { }

From syedammad83 at gmail.com Mon Oct 18 08:32:39 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Mon, 18 Oct 2021 13:32:39 +0500
Subject: Openstack magnum
In-Reply-To: References: Message-ID: 

Hi,
Can you check if the master server is deployed as a nova instance? If yes, then log in to the instance and check the cloud-init and heat agent logs to see the errors.

Ammad

On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe wrote:
> Hello All,
> I am trying to create a kubernetes cluster using magnum. Image: fedora-coreos.
> The stack gets stuck in CREATE_IN_PROGRESS. See the output below.
> openstack coe cluster list
> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
> | uuid                                 | name           | keypair | node_count | master_count | status             | health_status |
> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | 2          | 1            | CREATE_IN_PROGRESS | None          |
> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
>
> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters
> [...]
>
> Vikarna

--
Regards,
Syed Ammad Ali

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From akekane at redhat.com Mon Oct 18 08:36:49 2021
From: akekane at redhat.com (Abhishek Kekane)
Date: Mon, 18 Oct 2021 14:06:49 +0530
Subject: [glance] PTL on vacation - weekly meetings update
Message-ID: 

Hi All,
I'm starting my vacation from 25th October and will be back on November 15th. Please direct any issues to the rest of the core team.
Also there will be no weekly meeting on 28th October, and tentative cancellation of 4th November and 11th November unless there is something in the agenda by Tuesday 2nd November and 9th November EOB [1].

[1] https://etherpad.opendev.org/p/glance-team-meeting-agenda

Thank you,
Abhishek Kekane

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vikarnatathe at gmail.com Mon Oct 18 08:39:08 2021
From: vikarnatathe at gmail.com (Vikarna Tathe)
Date: Mon, 18 Oct 2021 14:09:08 +0530
Subject: Openstack magnum
In-Reply-To: References: Message-ID: 

Hi Ammad,

Thanks for responding.

Yes, the instance is getting created, but I am unable to log in even though I have generated the keypair. There is no default password for this image to log in via the console.

openstack server list
+--------------------------------------+--------------------------------------+--------+------------------------------------+----------------------+----------+
| ID                                   | Name                                 | Status | Networks                           | Image                | Flavor   |
+--------------------------------------+--------------------------------------+--------+------------------------------------+----------------------+----------+
| cf955a75-8cd2-4f91-a01f-677159b57cb2 | k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, 10.14.20.181 | fedora-coreos-latest | m1.large |
+--------------------------------------+--------------------------------------+--------+------------------------------------+----------------------+----------+

ssh -i id_rsa core at 10.14.20.181
The authenticity of host '10.14.20.181 (10.14.20.181)' can't be established.
ECDSA key fingerprint is SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known hosts.
core at 10.14.20.181: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

On Mon, 18 Oct 2021 at 14:02, Ammad Syed wrote:
>> Hi,
>> Can you check if the master server is deployed as a nova instance? If yes, then log in to the instance and check the cloud-init and heat agent logs to see the errors.
>>
>> Ammad
>>
>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe wrote:
>>> Hello All,
>>> I am trying to create a kubernetes cluster using magnum. Image: fedora-coreos.
>>> The stack gets stuck in CREATE_IN_PROGRESS. See the output below.
>>>
>>> openstack coe cluster list
>>> [...]
>>>
>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters
>>> [...]
>>>
>>> Vikarna
>>
>> --
>> Regards,
>> Syed Ammad Ali

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From noonedeadpunk at ya.ru Mon Oct 18 09:31:53 2021
From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov)
Date: Mon, 18 Oct 2021 12:31:53 +0300
Subject: [OpenStack-Ansible] LXC containers apt upgrade
In-Reply-To: <2cce6f95893340dcba81c88e278213b8@elca.ch>
References: <2cce6f95893340dcba81c88e278213b8@elca.ch>
Message-ID: <1243101634549390@mail.yandex.ru>

An HTML attachment was scrubbed...
URL: 

From gao.hanxiang at 99cloud.net Mon Oct 18 10:44:08 2021
From: gao.hanxiang at 99cloud.net (Gao Hanxiang)
Date: Mon, 18 Oct 2021 18:44:08 +0800 (GMT+08:00)
Subject: [tc][horizon][skyline] Welcome to the Skyline PTG
Message-ID: 

Hi all,

Skyline project members will hold their own PTG this week (Tuesday, Wednesday and Thursday at 5 UTC). At present, the skyline project has submitted an application to become an official OpenStack project, and we also welcome more friends to join us.

Skyline is an OpenStack dashboard optimized by UI and UE. It has a modern technology stack and ecology, is easier for developers to maintain and for users to operate, and has higher concurrency performance.

Here are two videos to preview Skyline:
- Skyline technical overview[1].
- Skyline dashboard operating demo[2].

Skyline has the following technical advantages:
1. Separation of concerns: the front end focuses on functional design and user experience, the back end on data logic.
2. It embraces modern browser technology and ecology: React, Ant Design, and Mobx.
3. Most functions directly call the OpenStack API, so the call chain is simple, the logic is clearer, and the API responds quickly.
4. It uses React components for rendering, so page display is fast and smooth, giving users a better UI and UE experience.

At present, Skyline has completed function development for the OpenStack core components, as well as most of the functions of VPNaaS, Octavia and other components. Corresponding automated test jobs[3][4] are also integrated on Zuul, and there is good code coverage. Devstack deployment integration has also been completed, and integration with kolla and kolla-ansible will be completed via the pending patches[5][6] once Skyline becomes an official project.

Skyline's next roadmap item will be to cover all existing functions of Horizon and complete the page development for other OpenStack components.

[1] https://www.youtube.com/watch?v=Ro8tROYKDlE
[2] https://www.youtube.com/watch?v=pFAJLwzxv0
[3] https://zuul.opendev.org/t/openstack/project/opendev.org/skyline/skyline-apiserver
[4] https://zuul.opendev.org/t/openstack/project/opendev.org/skyline/skyline-console
[5] https://review.opendev.org/c/openstack/kolla/+/810796
[6] https://review.opendev.org/c/openstack/kolla-ansible/+/810566

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lucasagomes at gmail.com Mon Oct 18 10:58:23 2021
From: lucasagomes at gmail.com (Lucas Alvares Gomes)
Date: Mon, 18 Oct 2021 11:58:23 +0100
Subject: [neutron] Bug Deputy Report October 11 - 18
Message-ID: 

High:
* https://bugs.launchpad.net/neutron/+bug/1946588 - "[OVN]Metadata get warn logs after boot instance server about "MetadataServiceReadyWaitTimeoutException"" - Assigned to: hailun huang
* https://bugs.launchpad.net/neutron/+bug/1946748 - "[stable/stein] neutron-tempest-plugin jobs fail with "AttributeError: module 'tempest.common.utils' has no attribute 'is_network_feature_enabled'"" - Assigned to: Bernard Cafarelli

Medium:
* https://bugs.launchpad.net/neutron/+bug/1946589 - "[OVN] localport might not be updated when create multiple subnets for its network" - Unassigned
* https://bugs.launchpad.net/neutron/+bug/1946666 - "[ovn] neutron_ovn_db_sync_util crashes (ACL already exists)" - Assigned to: Daniel Speichert
* https://bugs.launchpad.net/neutron/+bug/1946713 - "[ovn]Network's availability_zones is empty" - Assigned to: hailun huang
* https://bugs.launchpad.net/neutron/+bug/1947334 - "[OVN] Migration to OVN does not create the OVN QoS DB registers" - Assigned to: Rodolfo Alonso
* https://bugs.launchpad.net/neutron/+bug/1947366 - "[OVN] Migration to OVN removes "connectivity" parameter from VIF details" - Assigned to: Rodolfo Alonso
* https://bugs.launchpad.net/neutron/+bug/1947378 - "[OVN] VIF details "connectivity" parameter is not correctly populated" - Assigned to: Rodolfo Alonso

Needs further triage:
* https://bugs.launchpad.net/neutron/+bug/1946624 - "OVSDB Error: Transaction causes multiple rows in "Port_Group" table to have identical values" - Marked as Incomplete
* https://bugs.launchpad.net/neutron/+bug/1946764 - "[OVN]Any dhcp options which are string type should be escape"
* https://bugs.launchpad.net/neutron/+bug/1946781 - "Appropriate way to allocate /64 ipv6 per instance"

Cheers,
Lucas

From fungi at yuggoth.org Mon Oct 18 12:18:18 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 18 Oct 2021 12:18:18 +0000
Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min]
In-Reply-To: <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net>
References: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com> <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org> <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net>
Message-ID: <20211018121818.rerqlp7ek7z3rnya@yuggoth.org>

On 2021-10-18 11:22:11 +0800 (+0800), Gao Hanxiang wrote:
> Skyline-apiserver is a pure Python code project, following the
> Python wheel packaging standard, using pip for installation, and
> the dependency management of the project using poetry[1]
>
> Skyline-console uses npm for dependency management, development
> and testing. During the packaging and distribution process,
> webpack will be used to process the source code and dependent
> library code first, and output the packaged static resource files.
> These static resource files will be stored in an empty Python
> module[2]. [...]

GNU/Linux distributions like Debian are going to want to separately package the original source code for all of these Web components and their dependencies, and recreate them at the time the distro's binary packages are built. I believe the concerns are making it easy for them to find the source for all of it, and to attempt to use dependencies which these distributions already package in order to reduce their workload.
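(As one rough way to gauge the scale Thomas asked about: a sketch, assuming a local checkout of skyline-console and npm 7 or newer, whose "ls --all" walks the whole resolved tree:)

  cd skyline-console
  npm install                         # resolve the full dependency tree
  npm ls --all --parseable | wc -l    # approximate direct + indirect package count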
Further, it helps to make sure the software is capable of using multiple versions of its dependencies when possible, because it's going to be installed into shared environments with other software which may have some of the same dependencies, so they may need to be able to agree on common versions they all support.
--
Jeremy Stanley

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: 

From sbauza at redhat.com Mon Oct 18 14:59:19 2021
From: sbauza at redhat.com (Sylvain Bauza)
Date: Mon, 18 Oct 2021 16:59:19 +0200
Subject: [nova] Yoga PTG schedule
Message-ID: 

Hello folks,
Not sure people know about our etherpad for the Yoga PTG. It is this one: https://etherpad.opendev.org/p/nova-yoga-ptg
You can see the schedule there, but here it is:

PTG Schedule
https://www.openstack.org/ptg/#tab_schedule
https://ptg.opendev.org/
Fancy rendered PDF with hyperlinks for the schedule:
https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf

Connection details: https://meet.jit.si/vPTG-Newton

- *Monday*: project support team discussions, e.g. SIGs, QA, Infra, Release mgmt, Oslo
- *Tuesday 13:00 UTC - 17:00 UTC* - Nova (Placement) sessions
  - 13:00 - 14:00 UTC Cyborg - Nova cross project mini-session
  - 14:00 - 14:30 UTC Oslo - Nova cross project mini-session
  - 15:00 - 16:00 UTC RBAC discussions with popup team
- *Wednesday 14:00 UTC - 17:00 UTC*: Nova (Placement) sessions
  - 14:00 - 15:00 UTC Neutron - Nova cross project mini-session
  - 15:00 - 15:30 UTC Interop discussion with Arkady
- *Thursday 14:00 UTC - 17:00 UTC* - Nova (Placement) sessions
  - 16:00 - 17:00 UTC Cinder - Nova cross project mini-session
- *Friday 14:00 UTC - 17:00 UTC* - Nova (Placement) sessions

See you tomorrow at 1pm UTC!
-Sylvain

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tjoen at dds.nl Mon Oct 18 16:38:17 2021
From: tjoen at dds.nl (tjoen)
Date: Mon, 18 Oct 2021 18:38:17 +0200
Subject: [xena][glance] Upgrade to Xena Shows Error
In-Reply-To: References: Message-ID: <5329490b-9c7f-8c4e-c382-424fcd0de035@dds.nl>

On 10/18/21 09:28, Ammad Syed wrote:
> I am trying to upgrade glance from wallaby to xena. The package upgrade
> goes through successfully, but when I do the database upgrade it shows me the
> error below. Can you guys please advise on it?
>
> su -s /bin/bash glance -c "glance-manage db_upgrade"
> 2021-10-18 12:23:59.852 20534 DEBUG oslo_db.sqlalchemy.engines [-] MySQL
> server mode set to
> STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION
> _check_effective_sql_mode

Not sure if it was the same error I encountered. I found in my notes that I needed to do
# mysql_upgrade -u root -p

From ignaziocassano at gmail.com Mon Oct 18 16:54:22 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 18 Oct 2021 18:54:22 +0200
Subject: [openstack][manila] queens netapp share migration
Message-ID: 

Hello all,
I have an installation of openstack queens and manila is using netapp fas8040 storage with the driver manila.share.drivers.netapp.common.NetAppDriver.
When I try share migration it fails.
manila migration-start --preserve-metadata False --preserve-snapshots False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140#aggr_fas04_MANILA_TO2_UNITY600_Mixed

In the share log file I read:
2021-10-18 18:35:01.211 80999 ERROR manila.share.manager NetAppException: Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in Vserver svmp2-nfs-1138 is not part of any data motion operations.

svmp2-nfs-1138 is the share type the migration starts from. Both source and destination are on netapp.
Any help, please?
Ignazio

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rlandy at redhat.com Mon Oct 18 18:37:39 2021
From: rlandy at redhat.com (Ronelle Landy)
Date: Mon, 18 Oct 2021 14:37:39 -0400
Subject: [TripleO] Gate blocker - please hold rechecks - tripleo-ci-centos-8-scenario001-standalone
Message-ID: 

Hello All,
We have a gate blocker for tripleo at: https://bugs.launchpad.net/tripleo/+bug/1947548
tripleo-ci-centos-8-scenario001-standalone is failing. We are testing some reverts.
Please hold rechecks if you are rechecking for this failure. We will update this list when the error is cleared.
Thank you!

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yasufum.o at gmail.com Mon Oct 18 18:59:36 2021
From: yasufum.o at gmail.com (Yasufumi Ogawa)
Date: Tue, 19 Oct 2021 03:59:36 +0900
Subject: [tacker] Skip weekly IRC meeting
Message-ID: <3c9f372f-8597-1be3-0238-6c60c97003df@gmail.com>

Hi team,
Since we are going to have PTG sessions through this week, I'd like to skip the IRC meeting on Oct 19.
Thanks,
Yasufumi

From ignaziocassano at gmail.com Mon Oct 18 19:26:08 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 18 Oct 2021 21:26:08 +0200
Subject: [openstack][manila] data-node ?
Message-ID: 

Hello,
I need to migrate some share in host-assisted mode, but it seems I need a data-node.
I am using openstack queens on centos 7.
How can I install a data-node?
I cannot find any manila packages related to it.
Please, can anyone send me some documentation link?
I found only the manila-scheduler, manila-api and manila-share services.
Thanks
Ignazio

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Mon Oct 18 19:30:38 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 18 Oct 2021 21:30:38 +0200
Subject: [openstack][manila] data-node ?
In-Reply-To: References: Message-ID: 

PS
I found it under systemd but I did not find any documentation for configuring it.
Thanks
Ignazio

On Mon, Oct 18, 2021 at 9:26 PM Ignazio Cassano <ignaziocassano at gmail.com> wrote:
> Hello,
> I need to migrate some share in host-assisted mode, but it seems I need a data-node.
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From felipefuty01 at gmail.com Mon Oct 18 19:34:59 2021
From: felipefuty01 at gmail.com (Felipe Rodrigues)
Date: Mon, 18 Oct 2021 16:34:59 -0300
Subject: [openstack][manila] queens netapp share migration
In-Reply-To: References: Message-ID: 

Hi Ignazio,

It seems like a bug, since the NetApp driver does not support storage-assisted migration across backends (SVMs).
We'll check it and open a bug for it.

Just a note: there is an already-open bug [1] with the same error. It may be the same as yours. Please check there and mark it as affecting you too.

[1] https://bugs.launchpad.net/manila/+bug/1723513

Best regards, Felipe.

On Mon, Oct 18, 2021 at 1:57 PM Ignazio Cassano wrote:
> Hello all,
> I have an installation of openstack queens and manila is using netapp fas8040 storage with the driver manila.share.drivers.netapp.common.NetAppDriver.
> When I try share migration it fails.
> [...]
> Any help, please?
> Ignazio

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Mon Oct 18 21:47:07 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 18 Oct 2021 23:47:07 +0200
Subject: [openstack][manila] queens netapp share migration
In-Reply-To: References: Message-ID: 

Yes, Felipe. It is the same bug. I posted my comment.
Thanks
Ignazio

On Mon, Oct 18, 2021 at 9:35 PM Felipe Rodrigues wrote:
> Hi Ignazio,
> It seems like a bug, since the NetApp driver does not support storage-assisted migration across backends (SVMs).
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mike.carden at gmail.com Mon Oct 18 21:34:23 2021
From: mike.carden at gmail.com (Mike Carden)
Date: Tue, 19 Oct 2021 08:34:23 +1100
Subject: [tc][horizon][skyline] Welcome to the Skyline PTG
In-Reply-To: References: Message-ID: 

Hi.
The video [2] https://www.youtube.com/watch?v=pFAJLwzxv0 is coming up on YouTube as 'Video Unavailable'.
--
MC

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gouthampravi at gmail.com Tue Oct 19 00:28:24 2021
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Mon, 18 Oct 2021 17:28:24 -0700
Subject: [openstack][manila] data-node ?
In-Reply-To: References: Message-ID: 

On Mon, Oct 18, 2021 at 12:37 PM Ignazio Cassano wrote:
> PS
> I found it under systemd but I did not find any documentation for configuring it.

We've done a poor job of documenting this in our install guide: https://docs.openstack.org/manila/latest/install/
However, https://docs.openstack.org/manila/queens/admin/shared-file-systems-share-migration.html#configuration should speak to the configuration necessary.
I've added a tracker for improving the install doc: https://bugs.launchpad.net/manila/+bug/1947644

> Thanks
> Ignazio
>
> On Mon, Oct 18, 2021 at 9:26 PM Ignazio Cassano <ignaziocassano at gmail.com> wrote:
>> Hello,
>> I need to migrate some share in host-assisted mode, but it seems I need a data-node.
>> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhangbailin at inspur.com Tue Oct 19 01:10:46 2021
From: zhangbailin at inspur.com (Brin Zhang)
Date: Tue, 19 Oct 2021 01:10:46 +0000
Subject: [cyborg][ptg] Yoga PTG meeting
Message-ID: <607e5faa6b554392897fa8d963dabbff@inspur.com>

Hello,
As part of the Yoga PTG, the Cyborg project team will meet on Wednesday October 19, from 6 UTC to 8 UTC (https://ethercalc.openstack.org/8tum5yl1bx43 reports a 503 right now), but you can join us on the #openstack-cyborg channel.
We have created an etherpad to define the agenda: https://etherpad.opendev.org/p/cyborg-yoga-ptg
Feel free to add topics you would like to see discussed.
Thanks
Brin Zhang

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhangbailin at inspur.com Tue Oct 19 01:42:19 2021
From: zhangbailin at inspur.com (Brin Zhang)
Date: Tue, 19 Oct 2021 01:42:19 +0000
Subject: [nova][cyborg] No meeting today due to virtual PTG
Message-ID: 

Hi all,
As agreed by the Cyborg team, today's meeting is *CANCELLED*, as all of us will be attending the virtual PTG today.
If you have an idea, feature or issue you want to discuss, you can add it to the etherpad [1] whether you can join or not; please describe the details there and we will discuss it and reply.

[1] https://etherpad.opendev.org/p/cyborg-yoga-ptg

Thanks
brinzhang

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Tue Oct 19 04:19:43 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 19 Oct 2021 06:19:43 +0200
Subject: [openstack][manila] data-node ?
In-Reply-To: References: Message-ID: 

Thanks, I'll check it out.
Ignazio

On Tue, Oct 19, 2021 at 02:28 Goutham Pacha Ravi wrote:
>
> On Mon, Oct 18, 2021 at 12:37 PM Ignazio Cassano wrote:
>> PS
>> I found it under systemd but I did not find any documentation for configuring it.
>
> We've done a poor job of documenting this in our install guide:
> https://docs.openstack.org/manila/latest/install/
>
> However,
> https://docs.openstack.org/manila/queens/admin/shared-file-systems-share-migration.html#configuration
> should speak to the configuration necessary.
> I've added a tracker for improving the install doc:
> https://bugs.launchpad.net/manila/+bug/1947644
>
>> Thanks
>> Ignazio
>>
>> On Mon, Oct 18, 2021 at 9:26 PM Ignazio Cassano <ignaziocassano at gmail.com> wrote:
>>> Hello,
>>> I need to migrate some share in host-assisted mode, but it seems I need a data-node.
>>> I am using openstack queens on centos 7.
>>> How can I install a data-node?
>>> I cannot find any manila packages related to it.
>>> Please, can anyone send me some documentation link?
>>> I found only the manila-scheduler, manila-api and manila-share services.
>>> Thanks
>>> Ignazio

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gagehugo at gmail.com Tue Oct 19 05:51:53 2021
From: gagehugo at gmail.com (Gage Hugo)
Date: Tue, 19 Oct 2021 00:51:53 -0500
Subject: [openstack-helm] No Meeting Oct 19th
Message-ID: 

Hey team,
Since this week is the PTG, the meeting for this week is cancelled. We will meet for our session on Wednesday Oct 20th, then resume the normal schedule next week.
Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Tue Oct 19 06:21:18 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 19 Oct 2021 08:21:18 +0200
Subject: [openstack][manila] queens netapp share migration
In-Reply-To: References: Message-ID: 

Hi Felipe,
the question is whether this is a bug or simply not supported by design. The error in the bug you reported is the same one I am facing, but that bug mentions a situation where the controller is busy, and our controller is always very busy.
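If it really turns out to be unsupported on the driver side, I may try the host-assisted fallback (a sketch only: it assumes a manila-data service is configured, host-assisted migration cannot be writable or nondisruptive, and the force flag name is what I recall from the queens client, so check "manila help migration-start"):

  manila migration-start \
    --force-host-assisted-migration True \
    --preserve-metadata False --preserve-snapshots False \
    --writable False --nondisruptive False \
    --new_share_type svmp2-nfs-1140 \
    c765143e-d308-4e9d-8a3f-5cb4692be70b \
    "10.138.176.16@svmp2-nfs-1140#aggr_fas04_MANILA_TO2_UNITY600_Mixed"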
Ignazio Il Mar 19 Ott 2021, 02:28 Goutham Pacha Ravi ha scritto: > > On Mon, Oct 18, 2021 at 12:37 PM Ignazio Cassano > wrote: > >> PS >> I found it under systemd but I did nod find any documentation for >> configuring it. >> > > We've done a poor job of documenting this in our install guide: > https://docs.openstack.org/manila/latest/install/ > > However, > https://docs.openstack.org/manila/queens/admin/shared-file-systems-share-migration.html#configuration > should speak to the configuration necessary. > I've added a tracker for improving the install doc: > https://bugs.launchpad.net/manila/+bug/1947644 > > > > >> Thanks >> Ignazio >> >> Il giorno lun 18 ott 2021 alle ore 21:26 Ignazio Cassano < >> ignaziocassano at gmail.com> ha scritto: >> >>> Hello, >>> I need to migrate some share in host assisted mode, but seems I need a >>> data-node. >>> I am using openstack queens on centos 7. >>> How can I install a data-node ? >>> I cannot find any manila packages related to it? >>> Please, anyone can send me some documentation link ? >>> I found only manila-scheduler, manila-api end manila-share services >>> Thanks >>> Ignazio >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Oct 19 07:39:28 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 19 Oct 2021 09:39:28 +0200 Subject: [openstack][manila] queens netapp share migration In-Reply-To: References: Message-ID: Hello Felipe, I am ttryng on stein beacause net netapp attached to this installation is not so busy. I do not know if is because netapp is not so busy or if the openstack version is stein and nont queens, but share migration seems to work but the export location is not changed fron to to destination. The svm source has an address on a vlan and destination on another vlan. Why it does not change the export location ? Ignazio Il giorno lun 18 ott 2021 alle ore 21:35 Felipe Rodrigues < felipefuty01 at gmail.com> ha scritto: > Hi Ignazio, > > It seems like a bug, since NetApp driver does not support storage assisted > migration across backends (SVMs).. > > We'll check it and open a bug to it. > > Just a note: there is a bug with the same error opened [1]. It may be the > same as yours. Please, check there and mark as affecting you too. > > [1] https://bugs.launchpad.net/manila/+bug/1723513 > > Best regards, Felipe. > > > On Mon, Oct 18, 2021 at 1:57 PM Ignazio Cassano > wrote: > >> Hello all, >> I have an installation of openstack queens and manila is using netapp >> fas8040 storage with driver manila.share.drivers.netapp.common.NetAppDriver. >> When I try share migration it fails. >> >> manila migration-start --preserve-metadata False --preserve-snapshots >> False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 >> c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140 >> #aggr_fas04_MANILA_TO2_UNITY600_Mixed >> >> >> In the share log file I read: >> 2021-10-18 18:35:01.211 80999 ERROR manila.share.manager NetAppException: >> Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in Vserver svmp2-nfs-1138 >> is not part of any data motion operations. >> >> The svmp2-nfs-1138 is the share type where migration start from. >> Both source and destination are on netapp. >> Any help, please? >> Ignazio >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Tue Oct 19 07:47:43 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 19 Oct 2021 08:47:43 +0100 Subject: How to force kolla-ansible to rotate logs with given interval [kolla-ansible] In-Reply-To: References: Message-ID: Hi Adam, We don't currently support customising this file via /etc/kolla/config/cron-logrotate-global.conf. Where did you see it documented? It would be possible to support it by adding a with_first_found loop to the cron task in ansible/roles/common/tasks/config.yml. Mark On Mon, 18 Oct 2021 at 09:23, Adam Tomas wrote: > > Hi, > different services in kolla-ansible have different log rotation policies and I?d like to make the logs easier to maintain and search (tried central logging with Kibana, but somehow I don?t like this solution). So I tried to write common config file for all logs. > As I understand all logs should be rotated by cron container, inside of which there?s logrotate.conf file (and as I can see the logs are rotated according to this file). So I?ve copied this file, modified according to my needs and put it ina /etc/kolla/config with the name cron-logrotate-global.conf (as documentation says). And? nothing. I?ve checked permissions of this file - everything seems to be ok, so what?s the problem? Below is my logrotate.conf file > > Best regards, > Adam Tomas > > cat /etc/kolla/config/cron-logrotate-global.conf > > daily > rotate 31 > copytruncate > compress > delaycompress > notifempty > missingok > minsize 0M > maxsize 100M > su root kolla > "/var/log/kolla/ansible.log" > { > } > "/var/log/kolla/aodh/*.log" > { > } > "/var/log/kolla/barbican/*.log" > { > } > "/var/log/kolla/ceilometer/*.log" > { > } > "/var/log/kolla/chrony/*.log" > { > } > "/var/log/kolla/cinder/*.log" > { > } > "/var/log/kolla/cloudkitty/*.log" > { > } > "/var/log/kolla/designate/*.log" > { > } > "/var/log/kolla/elasticsearch/*.log" > { > } > "/var/log/kolla/fluentd/*.log" > { > } > "/var/log/kolla/glance/*.log" > { > } > "/var/log/kolla/haproxy/haproxy.log" > { > } > "/var/log/kolla/heat/*.log" > { > } > "/var/log/kolla/horizon/*.log" > { > } > "/var/log/kolla/influxdb/*.log" > { > } > "/var/log/kolla/iscsi/iscsi.log" > { > } > "/var/log/kolla/kafka/*.log" > { > } > "/var/log/kolla/keepalived/keepalived.log" > { > } > "/var/log/kolla/keystone/*.log" > { > } > "/var/log/kolla/kibana/*.log" > { > } > "/var/log/kolla/magnum/*.log" > { > } > "/var/log/kolla/mariadb/*.log" > { > } > "/var/log/kolla/masakari/*.log" > { > } > "/var/log/kolla/monasca/*.log" > { > } > "/var/log/kolla/neutron/*.log" > { > postrotate > chmod 644 /var/log/kolla/neutron/*.log > endscript > } > "/var/log/kolla/nova/*.log" > { > } > "/var/log/kolla/octavia/*.log" > { > } > "/var/log/kolla/rabbitmq/*.log" > { > } > "/var/log/kolla/rally/*.log" > { > } > "/var/log/kolla/skydive/*.log" > { > } > "/var/log/kolla/storm/*.log" > { > } > "/var/log/kolla/swift/*.log" > { > } > "/var/log/kolla/vitrage/*.log" > { > } > "/var/log/kolla/zookeeper/*.log" > { > } From bkslash at poczta.onet.pl Tue Oct 19 08:40:20 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Tue, 19 Oct 2021 10:40:20 +0200 Subject: How to force kolla-ansible to rotate logs with given interval [kolla-ansible] In-Reply-To: References: Message-ID: <7E3D9425-7CE1-4189-9AA8-FF74DCCD873B@poczta.onet.pl> Hi Mark, thank you for the answer. I?ve changed ansible/roles/common/templates/cron-logrotate-global.conf.j2 (minsize 0M) and ansible/roles/common/defaults/main.yml (changing cron_logrotate_rotation_interval to ?daily? 
and cron_logrotate_rotation_count to ?31") and logrotate.conf inside cron container now has my settings, but? logs still rotates according to default rules (every 6 weeks and only if the log is > than 30M). > Wiadomo?? napisana przez Mark Goddard w dniu 19.10.2021, o godz. 09:47: > > Hi Adam, > > We don't currently support customising this file via > /etc/kolla/config/cron-logrotate-global.conf. Where did you see it > documented? > > It would be possible to support it by adding a with_first_found loop > to the cron task in ansible/roles/common/tasks/config.yml. > how exactly? Best regards, Adam Tomas > Mark > > On Mon, 18 Oct 2021 at 09:23, Adam Tomas wrote: >> >> Hi, >> different services in kolla-ansible have different log rotation policies and I?d like to make the logs easier to maintain and search (tried central logging with Kibana, but somehow I don?t like this solution). So I tried to write common config file for all logs. >> As I understand all logs should be rotated by cron container, inside of which there?s logrotate.conf file (and as I can see the logs are rotated according to this file). So I?ve copied this file, modified according to my needs and put it ina /etc/kolla/config with the name cron-logrotate-global.conf (as documentation says). And? nothing. I?ve checked permissions of this file - everything seems to be ok, so what?s the problem? Below is my logrotate.conf file >> >> Best regards, >> Adam Tomas >> >> cat /etc/kolla/config/cron-logrotate-global.conf >> >> daily >> rotate 31 >> copytruncate >> compress >> delaycompress >> notifempty >> missingok >> minsize 0M >> maxsize 100M >> su root kolla >> "/var/log/kolla/ansible.log" >> { >> } >> "/var/log/kolla/aodh/*.log" >> { >> } >> "/var/log/kolla/barbican/*.log" >> { >> } >> "/var/log/kolla/ceilometer/*.log" >> { >> } >> "/var/log/kolla/chrony/*.log" >> { >> } >> "/var/log/kolla/cinder/*.log" >> { >> } >> "/var/log/kolla/cloudkitty/*.log" >> { >> } >> "/var/log/kolla/designate/*.log" >> { >> } >> "/var/log/kolla/elasticsearch/*.log" >> { >> } >> "/var/log/kolla/fluentd/*.log" >> { >> } >> "/var/log/kolla/glance/*.log" >> { >> } >> "/var/log/kolla/haproxy/haproxy.log" >> { >> } >> "/var/log/kolla/heat/*.log" >> { >> } >> "/var/log/kolla/horizon/*.log" >> { >> } >> "/var/log/kolla/influxdb/*.log" >> { >> } >> "/var/log/kolla/iscsi/iscsi.log" >> { >> } >> "/var/log/kolla/kafka/*.log" >> { >> } >> "/var/log/kolla/keepalived/keepalived.log" >> { >> } >> "/var/log/kolla/keystone/*.log" >> { >> } >> "/var/log/kolla/kibana/*.log" >> { >> } >> "/var/log/kolla/magnum/*.log" >> { >> } >> "/var/log/kolla/mariadb/*.log" >> { >> } >> "/var/log/kolla/masakari/*.log" >> { >> } >> "/var/log/kolla/monasca/*.log" >> { >> } >> "/var/log/kolla/neutron/*.log" >> { >> postrotate >> chmod 644 /var/log/kolla/neutron/*.log >> endscript >> } >> "/var/log/kolla/nova/*.log" >> { >> } >> "/var/log/kolla/octavia/*.log" >> { >> } >> "/var/log/kolla/rabbitmq/*.log" >> { >> } >> "/var/log/kolla/rally/*.log" >> { >> } >> "/var/log/kolla/skydive/*.log" >> { >> } >> "/var/log/kolla/storm/*.log" >> { >> } >> "/var/log/kolla/swift/*.log" >> { >> } >> "/var/log/kolla/vitrage/*.log" >> { >> } >> "/var/log/kolla/zookeeper/*.log" >> { >> } From vikarnatathe at gmail.com Tue Oct 19 09:16:40 2021 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Tue, 19 Oct 2021 14:46:40 +0530 Subject: Openstack magnum In-Reply-To: References: