From akekane at redhat.com Fri Oct 1 05:42:14 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 1 Oct 2021 11:12:14 +0530 Subject: [glance] No meeting on 07th October Message-ID: Hi All, We decided to cancel our next (October 7th) weekly meeting. According to schedule we will meet directly on October 14th. In case of any queries, reach us on #openstack-glance IRC channel. Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Fri Oct 1 07:58:57 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Fri, 1 Oct 2021 09:58:57 +0200 Subject: [oslo] oslo.metrics package for Fedora In-Reply-To: <663593806.87658.1632279717177@mail.yahoo.com> References: <1578604251.1736431.1632195525621.ref@mail.yahoo.com> <1578604251.1736431.1632195525621@mail.yahoo.com> <20210921055711.coiwgigvp22imrhd@p1.localdomain> <663593806.87658.1632279717177@mail.yahoo.com> Message-ID: On Wed, Sep 22, 2021 at 03:01:57AM +0000, Hirotaka Wakabayashi wrote: > Hello Slawek, > > Thank you for your kind reply. I will use the RDO's spec file to make the > Fedora package. :) > > My application packed for Fedora is a simple notification listener using > oslo.messaging that requires oslo.metrics. As Artem says, any packages in > Fedora must resolve the dependencies without using RDO packages. > > Best Regards, > Hirotaka Wakabayashi > > On Tuesday, September 21, 2021, 02:57:21 PM GMT+9, Slawek Kaplonski wrote: > > Hi, > > On Tue, Sep 21, 2021 at 03:38:45AM +0000, Hirotaka Wakabayashi wrote: > > Hello Oslo Team, > > > > I am a Fedora packager. I want to package oslo.metrics for Fedora because > > my package uses oslo.messaging that requires oslo.metrics as you know. > > oslo.messaging package repository already exists in Fedora. I will take over > > it from the former package maintainer. the oslo.metrics repository doesn't existso I need to make it. > > > > If any concerns with it, please reply. I can update the version as soon as the > > new version releases by using Fedora's release monitoring system. > > Sorry, I'm late in the game here. Your package IS in Fedora and uses both oslo.metrics and also oslo.messaging? Looking at the dist-git[1], it seems oslo.messaging has been removed from Fedora. In order to get it back, you'll need to go through a package review. I would suspect this could go in quickly, since there is already a spec. Matthias [1] https://src.fedoraproject.org/rpms/python-oslo-messaging/tree/rawhide Matthias Runge 2021-10-01 07:33:05 UTC -- Matthias Runge From hberaud at redhat.com Fri Oct 1 14:25:50 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 1 Oct 2021 16:25:50 +0200 Subject: [release] Release countdown for week R-0, Oct 4 - Oct 8 Message-ID: Development Focus ----------------- We will be releasing the coordinated OpenStack Xena release next week, on October 6, 2021. Thanks to everyone involved in the Xena cycle! We are now in pre-release freeze, so no new deliverable will be created until final release, unless a release-critical regression is spotted. Otherwise, teams attending the virtual PTG should start to plan what they will be discussing there! General Information ------------------- On release day, the release team will produce final versions of deliverables following the cycle-with-rc release model, by re-tagging the commit used for the last RC. A patch doing just that will be proposed soon. PTLs and release liaisons should watch for that final release patch from the release team. 
While not required, we would appreciate having an ack from each team before we approve it on the 16th, so that their approval is included in the metadata that goes onto the signed tag. Upcoming Deadlines & Dates -------------------------- Final Xena release: October 6 Yoga PTG: October 18-22 -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Fri Oct 1 14:54:56 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Fri, 1 Oct 2021 20:24:56 +0530 Subject: [TripleO] Issue in running Pre-Introspection In-Reply-To: References: Message-ID: Hi Team,, Upon further debugging, I found that pre-introspection internally calls the ansible playbook located at path /usr/share/ansible/validation-playbooks File "dhcp-introspection.yaml" has hosts mentioned as undercloud. - hosts: *undercloud* become: true vars: ... ... But the artifacts created for dhcp-introspection at location /home/stack/validations/artifacts/_dhcp-introspection.yaml_2021-10-01T11 has file *hosts *present which has *localhost* written into it as a result of which when command gets executed it gives the error *"Could not match supplied host pattern, ignoring: undercloud:"* Can someone suggest how is this artifacts written in tripleo and the way we can change hosts file entry to undercloud so that it can work Similar is the case with other tasks like undercloud-tokenflush, ctlplane-ip-range etc Regards Anirudh Gupta On Wed, Sep 29, 2021 at 4:47 PM Anirudh Gupta wrote: > Hi Team, > > I tried installing Undercloud using the below link: > > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud > > I am getting the following error: > > (undercloud) [stack at undercloud ~]$ openstack tripleo validator run > --group pre-introspection > Selected log directory '/home/stack/validations' does not exist. > Attempting to create it. 
> > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | UUID | Validations | > Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | > > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | 7029c1f6-5ab4-465d-82d7-3f29058012ce | check-cpu | > PASSED | localhost | localhost | | 0:00:02.531 | > | db059017-30f1-4b97-925e-3f55b586d492 | check-disk-space | > PASSED | localhost | localhost | | 0:00:04.432 | > | e23dd9a1-90d3-4797-ae0a-b43e55ab6179 | check-ram | > PASSED | localhost | localhost | | 0:00:01.324 | > | 598ca02d-258a-44ad-b78d-3877321cdfe6 | check-selinux-mode | > PASSED | localhost | localhost | | 0:00:01.591 | > | c4435b4c-b432-4a1e-8a99-00638034a884 | *check-network-gateway > | FAILED* | undercloud | *No host matched* | | > | > | cb1eed23-ef2f-4acd-a43a-86fb09bf0372 | *undercloud-disk-space > | FAILED* | undercloud | *No host matched* | | > | > | abde5329-9289-4b24-bf16-c4d82b03e67a | *undercloud-neutron-sanity-check > | FAILED* | undercloud | *No host matched* | | > | > | d0e5fdca-ece6-4a37-b759-ed1fac31a10f | *ctlplane-ip-range > | FAILED* | undercloud | No host matched | | > | > | 91511807-225c-4852-bb52-6d0003c51d49 | *dhcp-introspection > | FAILED* | undercloud | No host matched | | > | > | e96f7704-d2fb-465d-972b-47e2f057449c |* undercloud-tokenflush > | FAILED *| undercloud | No host matched | | > | > > > As per the validation link, > > https://docs.openstack.org/tripleo-validations/wallaby/validations-pre-introspection-details.html > > check-network-gateway > > If gateway in undercloud.conf is different from local_ip, verify that the > gateway exists and is reachable > > Observation - In my case IP specified in local_ip and gateway, both are > pingable, but still this error is being observed > > > ctlplane-ip-range? > > > Check the number of IP addresses available for the overcloud nodes. > > Verify that the number of IP addresses defined in dhcp_start and dhcp_end fields > in undercloud.conf is not too low. > > - > > ctlplane_iprange_min_size: 20 > > Observation - In my case I have defined more than 20 IPs > > > Similarly for disk related issue, I have dedicated 100 GB space in /var > and / > > Filesystem Size Used Avail Use% Mounted on > devtmpfs 12G 0 12G 0% /dev > tmpfs 12G 84K 12G 1% /dev/shm > tmpfs 12G 8.7M 12G 1% /run > tmpfs 12G 0 12G 0% /sys/fs/cgroup > /dev/mapper/cl-root 100G 2.5G 98G 3% / > /dev/mapper/cl-home 47G 365M 47G 1% /home > /dev/mapper/cl-var 103G 1.1G 102G 2% /var > /dev/vda1 947M 200M 747M 22% /boot > tmpfs 2.4G 0 2.4G 0% /run/user/0 > tmpfs 2.4G 0 2.4G 0% /run/user/1000 > > Despite setting al the parameters, still I am not able to pass > pre-introspection checks. *"NO Host Matched" *is found in the table. > > > Regards > > Anirudh Gupta > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elod.illes at est.tech Fri Oct 1 15:52:51 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 1 Oct 2021 17:52:51 +0200 Subject: [tc][docs] missing documentation Message-ID: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech> Hi, With this mail I want to raise multiple topics towards TC, related to Documentation (SIG): * This week I had the task in the Release Management Team to notify the Documentation (Technical Writing) SIG to apply their processes to create the new release series landing pages for docs.openstack.org. Currently the SIG is chaired by Stephen Finucane, but he won't be around in the next cycle so the Technical Writing SIG will remain without a chair and active members. * Another point that came up is that a lot of projects are missing documentation in Victoria and Wallaby releases as they don't even have a single patch merged on their stable/victoria or stable/wallaby branches, not even the auto-generated patches (showing the lack of stable maintainers of the given projects). For example compare Ussuri [1] and Wallaby [2] projects page. ??? - one proposed solution for this is to auto-merge the auto-generated patches (but on the other hand this does not solve the issue of lacking active maintainers) Thanks, El?d [1] https://docs.openstack.org/ussuri/projects.html [2] https://docs.openstack.org/wallaby/projects.html From fungi at yuggoth.org Fri Oct 1 16:01:44 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 1 Oct 2021 16:01:44 +0000 Subject: [tc][docs] missing documentation In-Reply-To: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech> References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech> Message-ID: <20211001160143.5n2e5dsm6qikopuf@yuggoth.org> On 2021-10-01 17:52:51 +0200 (+0200), El?d Ill?s wrote: [...] > the lack of stable maintainers of the given projects [...] I believe that's what https://review.opendev.org/810721 is attempting to solve, but could use more reviews. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Fri Oct 1 16:16:54 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 Oct 2021 11:16:54 -0500 Subject: [tc][docs] missing documentation In-Reply-To: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech> References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech> Message-ID: <17c3ca4eed7.c6239dbe366092.5303458889627498004@ghanshyammann.com> ---- On Fri, 01 Oct 2021 10:52:51 -0500 El?d Ill?s wrote ---- > Hi, > > With this mail I want to raise multiple topics towards TC, related to > Documentation (SIG): > > * This week I had the task in the Release Management Team to notify the > Documentation (Technical Writing) SIG to apply their processes to create > the new release series landing pages for docs.openstack.org. Currently > the SIG is chaired by Stephen Finucane, but he won't be around in the > next cycle so the Technical Writing SIG will remain without a chair and > active members. > > * Another point that came up is that a lot of projects are missing > documentation in Victoria and Wallaby releases as they don't even have a > single patch merged on their stable/victoria or stable/wallaby branches, > not even the auto-generated patches (showing the lack of stable > maintainers of the given projects). For example compare Ussuri [1] and > Wallaby [2] projects page. 
> - one proposed solution for this is to auto-merge the > auto-generated patches (but on the other hand this does not solve the > issue of lacking active maintainers) Thanks, Elod, for raising the issue. This is very helpful for TC to analyze the project status. To solve it now, I agree with your proposal to auto-merge the auto-generated patches and have their documentation fixed for stable branches. And to solve the stable branch maintainer, we are in-progress to change the stable branch team structure[1]. The current proposal is along with global stable maintainer team as an advisory body and allows the project team to have/manage their stable branch team as they do for the master branch, and that team can handle/manage their stable branch activities/members. I will try to get more attention from TC on this and merge it soon. On Documentation SIG chair, we appreciate Stephen's work and taking care of it. I am adding it in the next meeting agenda also we will discuss the plan in PTG. [1] https://review.opendev.org/c/openstack/governance/+/810721/ -gmann > > Thanks, > > El?d > > > [1] https://docs.openstack.org/ussuri/projects.html > [2] https://docs.openstack.org/wallaby/projects.html > > > > From gmann at ghanshyammann.com Fri Oct 1 17:09:07 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 Oct 2021 12:09:07 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 1st Oct, 21: Reading: 5 min Message-ID: <17c3cd4bdff.e3b20c14368201.4658562949417284171@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * TC this week IRC meeting held on Sept 30th Thursday. * Most of the meeting discussions are summarized in the below sections (Completed or in-progress activities section). To know more details, you can check the complete logs @ https://meetings.opendev.org/meetings/tc/2021/tc.2021-09-30-15.00.log.html * We will have next week's video call meeting on Oct 7th, Thursday 15:00 UTC, feel free the topic in agenda [1] by Oct 6th. 2. What we completed this week: ========================= * Listed the 'Places for projects to spreading the word'[2] * Removed stable release 'Unmaintained' phase[3] 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ * TC is using the etherpad[4] for Xena cycle working item. We will be checking and updating the status biweekly on the same etherpad. * Current status is: 8 completed, 5 in-progress Open Reviews ----------------- * Five open reviews for ongoing activities[5]. Place to maintain the external hosted ELK, E-R, O-H services ------------------------------------------------------------------------- * We discussed about the technical possibility and the place to add the ELK services maintenance[6]. * As there is no other infra project than OpenStack has shown the interest to use and maintain it, we are disucssing where we can fit this in OpenStack. TACT SIG is one place we are leaning towards. We will continue to discuss it in next TC meeting and prepare some draft plan. Add project health check tool ----------------------------------- * No updates on this than previous week. * We are reviewing Rico proposal on collecting stats tool[7] and TODO of documenting the usage and interpretation of those stats. Stable Core team process change --------------------------------------- * Draft proposal resolution is still under review[8] . Feel free to provide early feedback if you have any. 
* Elod has raised more issues today[9], that a few projects stable change (even auto generated patch) are not merged and so does their stable branch doc site is not up. For now, we are fine to auto/single core approval of those auto-generated patches and proceed to make stable branch doc site up. Call for 'Technical Writing' SIG Chair/Maintainers ---------------------------------------------------------- * The technical writing SIG[10] provides documentation guidance, assistance, tooling, and style guides for OpenStack project teams. * As you might have read the email from Elod[9], Stephen who is current chair for this SIG is not planning to continue to chair. Please let us know if you are interested to help in this Doc TC tags analysis ------------------- * As discussed in the last PTG, TC is working on an analysis of the usefulness of the Tags framework[11] or what all tags can be cleaned up. * We are still waiting for the operator's response to the email on openstack-disscuss ML[12]. If you are an operator, please respond to the email and based on the feedback we will continue the discussion in PTG. Project updates ------------------- * Add the cinder-netapp charm to Openstack charms[13] * Retiring js-openstack-lib [14] * Retire puppet-freezer[15] Yoga release community-wide goal ----------------------------------------- * Please add the possible candidates in this etherpad [16]. * Current status: "Secure RBAC" is selected for Yoga cycle[17]. PTG planning ---------------- * We are collecting the PTG topics in etherpad[18], feel free to add any topic you would like to discuss. * We discussed the live stream of one of the TC PTG sessions like we did last time. Once we will have more topics in etherpad then we can select the appropriate one. Test support for TLS default: ---------------------------------- * Rico has started a separate email thread over testing with tls-proxy enabled[19], we encourage projects to participate in that testing and help to enable the tls-proxy in gate testing. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[20]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [21] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [22] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://docs.openstack.org/project-team-guide/spread-the-word.html [3] https://review.opendev.org/c/openstack/project-team-guide/+/810499 [4] https://etherpad.opendev.org/p/tc-xena-tracke [5] https://review.opendev.org/q/projects:openstack/governance+status:open [6] https://etherpad.opendev.org/p/elk-service-maintenance-plan [7] https://review.opendev.org/c/openstack/governance/+/810037 [8] https://review.opendev.org/c/openstack/governance/+/810721 [9] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025161.html [10] https://governance.openstack.org/sigs/ [11] https://governance.openstack.org/tc/reference/tags/index.html [12] http://lists.openstack.org/pipermail/openstack-discuss/2021-September/024804.html [13] https://review.opendev.org/c/openstack/governance/+/809011 [14] https://review.opendev.org/c/openstack/governance/+/798540 [15] https://review.opendev.org/c/openstack/governance/+/807163 [16] https://etherpad.opendev.org/p/y-series-goals [17] https://review.opendev.org/c/openstack/governance/+/803783 [18] https://etherpad.opendev.org/p/tc-yoga-ptg [19] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html [20] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [21] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [22] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours From elod.illes at est.tech Fri Oct 1 17:41:28 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 1 Oct 2021 19:41:28 +0200 Subject: [tc][docs] missing documentation In-Reply-To: <20211001160143.5n2e5dsm6qikopuf@yuggoth.org> References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech> <20211001160143.5n2e5dsm6qikopuf@yuggoth.org> Message-ID: El?d On 2021. 10. 01. 18:01, Jeremy Stanley wrote: > On 2021-10-01 17:52:51 +0200 (+0200), El?d Ill?s wrote: > [...] >> the lack of stable maintainers of the given projects > [...] > > I believe that's what https://review.opendev.org/810721 is > attempting to solve, but could use more reviews. Partly, as my experience (or maybe just feeling?) is that those projects that does not even merge the bot proposed stable patches usually have reviewing problems on master branches as well. From peter.matulis at canonical.com Fri Oct 1 18:00:31 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Fri, 1 Oct 2021 14:00:31 -0400 Subject: [tc][docs] missing documentation In-Reply-To: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech> References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech> Message-ID: How does the Projects page get populated? On Fri, Oct 1, 2021 at 11:56 AM El?d Ill?s wrote: > Hi, > > With this mail I want to raise multiple topics towards TC, related to > Documentation (SIG): > > * This week I had the task in the Release Management Team to notify the > Documentation (Technical Writing) SIG to apply their processes to create > the new release series landing pages for docs.openstack.org. Currently > the SIG is chaired by Stephen Finucane, but he won't be around in the > next cycle so the Technical Writing SIG will remain without a chair and > active members. > > * Another point that came up is that a lot of projects are missing > documentation in Victoria and Wallaby releases as they don't even have a > single patch merged on their stable/victoria or stable/wallaby branches, > not even the auto-generated patches (showing the lack of stable > maintainers of the given projects). 
For example compare Ussuri [1] and > Wallaby [2] projects page. > - one proposed solution for this is to auto-merge the > auto-generated patches (but on the other hand this does not solve the > issue of lacking active maintainers) > > Thanks, > > El?d > > > [1] https://docs.openstack.org/ussuri/projects.html > [2] https://docs.openstack.org/wallaby/projects.html > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Fri Oct 1 20:58:25 2021 From: rlandy at redhat.com (Ronelle Landy) Date: Fri, 1 Oct 2021 16:58:25 -0400 Subject: [TripleO] Gate blocker - please hold rechecks In-Reply-To: References: <20210930192027.tytawpypbirzylyk@yuggoth.org> Message-ID: On Thu, Sep 30, 2021 at 4:17 PM Ronelle Landy wrote: > > > On Thu, Sep 30, 2021 at 3:25 PM Jeremy Stanley wrote: > >> On 2021-09-30 13:54:02 -0400 (-0400), Ronelle Landy wrote: >> > We have a gate blocker for tripleo at: >> > https://bugs.launchpad.net/tripleo/+bug/1945682 >> > >> > This tox error is impacting tox jobs on multiple tripleo-related repos. >> > A resolution is being worked on by infra. >> [...] >> >> This was due to a regression in a bug fix change[0] which merged to >> zuul-jobs, and the emergency revert[1] of that fix merged roughly an >> hour ago (18:17 UTC) so should no longer be causing new failures. >> I'm working on a regression test to exercise the tox feature TripleO >> was using and incorporate a solution for that so we can make sure >> it's not impacted when we re-merge[2] the original fix. >> > > Thanks for the quick resolution here. > Failed jobs are clearing the gate and will be rechecked if needed. > >> >> [0] https://review.opendev.org/806612 >> [1] https://review.opendev.org/812001 >> [2] https://review.opendev.org/812005 >> >> -- >> Jeremy Stanley >> > > Note that the OVB issue is still ongoing. > OVB issue should be resolved now > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sat Oct 2 09:10:55 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 2 Oct 2021 11:10:55 +0200 Subject: [tc][docs] missing documentation In-Reply-To: References: <1bf9b26e-ef7f-20e8-b7eb-af5c6f73e507@est.tech> <20211001160143.5n2e5dsm6qikopuf@yuggoth.org> Message-ID: QQ - do you have a listing of missing projects handy? or better yet: some script to list those - that could help TC in deriving project health criteria. -yoctozepto On Fri, 1 Oct 2021 at 19:42, El?d Ill?s wrote: > > > El?d > > On 2021. 10. 01. 18:01, Jeremy Stanley wrote: > > On 2021-10-01 17:52:51 +0200 (+0200), El?d Ill?s wrote: > > [...] > >> the lack of stable maintainers of the given projects > > [...] > > > > I believe that's what https://review.opendev.org/810721 is > > attempting to solve, but could use more reviews. > Partly, as my experience (or maybe just feeling?) is that those projects > that does not even merge the bot proposed stable patches usually have > reviewing problems on master branches as well. > > From hojat.gazestani1 at gmail.com Sun Oct 3 09:45:34 2021 From: hojat.gazestani1 at gmail.com (hojat openstack-nsx-VOIP SBC) Date: Sun, 3 Oct 2021 13:15:34 +0330 Subject: Access denied for user nova Message-ID: Hi I have a problem which is described here , Does anyone have any idea to resolve this issue? Regards, Hojii. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sorrison at gmail.com Mon Oct 4 05:23:40 2021 From: sorrison at gmail.com (Sam Morrison) Date: Mon, 4 Oct 2021 16:23:40 +1100 Subject: [kolla] skipping base if already exist Message-ID: <9217E6F1-22AC-4579-A920-5EFA7DCCAE56@gmail.com> Hi, We?ve started to use kolla to build container images and trying to figure out if I?m doing it wrong it it?s just not how kolla works. What I?m trying to do it not rebuild the base and openstack-base images when we build an image for a project. Example. We build a horizon image and it builds and pushes up to our registry the following kolla/ubuntu-source-base <> kolla/ubuntu-source-openstack-base kolla/ubuntu-source-horizon Now I can rebuild this without having to again build the base images with ?skip parents But now I want to build a barbican image and I can?t use skip-parents as the barbican image also requires barbican-base. Which means I need to go and rebuild the ubuntu base and Openstack base images again. Is there a way to essentially skip parents but only if they don?t exist in the registry already? Or make skip-parents only mean skip base and Openstack-base? Thanks in advance, Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Oct 4 07:18:53 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 4 Oct 2021 09:18:53 +0200 Subject: [kolla] skipping base if already exist In-Reply-To: <9217E6F1-22AC-4579-A920-5EFA7DCCAE56@gmail.com> References: <9217E6F1-22AC-4579-A920-5EFA7DCCAE56@gmail.com> Message-ID: On Mon, 4 Oct 2021 at 07:24, Sam Morrison wrote: > > Hi, > > We?ve started to use kolla to build container images and trying to figure out if I?m doing it wrong it it?s just not how kolla works. > > What I?m trying to do it not rebuild the base and openstack-base images when we build an image for a project. > > Example. > > We build a horizon image and it builds and pushes up to our registry the following > > kolla/ubuntu-source-base > kolla/ubuntu-source-openstack-base > kolla/ubuntu-source-horizon > > > Now I can rebuild this without having to again build the base images with > > ?skip parents > > > But now I want to build a barbican image and I can?t use skip-parents as the barbican image also requires barbican-base. Which means I need to go and rebuild the ubuntu base and Openstack base images again. > > Is there a way to essentially skip parents but only if they don?t exist in the registry already? Or make skip-parents only mean skip base and Openstack-base? You might then be interested in --skip-existing -yoctozepto > > Thanks in advance, > Sam > > From eblock at nde.ag Mon Oct 4 07:15:09 2021 From: eblock at nde.ag (Eugen Block) Date: Mon, 04 Oct 2021 07:15:09 +0000 Subject: Access denied for user nova In-Reply-To: Message-ID: <20211004071509.Horde.XyH99zDwEsAOUPdKEkUGH48@webmail.nde.ag> Hi, your hostname seems to be controller001 but your config settings refer to controller01 (as far as I checked). That would explain it, wouldn't it? The "access denied" message: 2021-10-02 12:52:16 141 [Warning] Access denied for user 'nova'@'controller001' (using password: YES) and your nova endpoint: openstack endpoint create --region RegionOne compute public http://controller01:8774/v2.1 Zitat von hojat openstack-nsx-VOIP SBC : > Hi > > I have a problem which is described here > , > Does anyone have any idea to resolve this issue? > > Regards, > Hojii. 
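
A quick way to confirm which spelling the services are actually using, and whether
the database grants cover it, is something along these lines (the hostnames are taken
from your paste; adjust the config paths to your deployment and treat NOVA_DBPASS as
a placeholder for the real password from the connection string):

  $ grep -rn "controller01\|controller001" /etc/nova/nova.conf /etc/hosts
  $ openstack endpoint list --service compute

  # on the database host, check which user@host combinations exist for nova
  $ mysql -u root -p -e "SELECT User, Host FROM mysql.user WHERE User = 'nova';"

  # if nothing matches the host shown in the "Access denied" warning, add a grant
  # for it (the same would apply to the nova_api and nova_cell0 databases)
  $ mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller001' IDENTIFIED BY 'NOVA_DBPASS';"
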
From geguileo at redhat.com Mon Oct 4 10:23:31 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 4 Oct 2021 12:23:31 +0200 Subject: [dev][cinder] Consultation about new cinder-backup features In-Reply-To: References: <20210906132813.xsaxbsyyvf4ey4vm@localhost> Message-ID: <20211004102331.e3otr2k2mjzglg42@localhost> On 30/09, Daniel de Oliveira Pereira wrote: > On 06/09/2021 10:28, Gorka Eguileor wrote: > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > > > On 27/08, Daniel de Oliveira Pereira wrote: > >> Hello everyone, > >> > >> We have prototyped some new features on Cinder for our clients, and we > >> think that they are nice features and good candidates to be part of > >> upstream Cinder, so we would like to get feedback from OpenStack > >> community about these features and if you would be willing to accept > >> them in upstream OpenStack. > > > > Hi Daniel, > > > > Thank you very much for your willingness to give back!!! > > > > > >> > >> Our team implemented the following features for cinder-backup service: > >> > >> 1. A multi-backend backup driver, that allow OpenStack users to > >> choose, via API/CLI/Horizon, which backup driver (Ceph or NFS, in our > >> prototype) will be used during a backup operation to create a new volume > >> backup. > > > > This is a feature that has been discussed before, and e0ne already did > > some of the prerequisites for it. > > > > > >> 2. An improved NFS backup driver, that allow OpenStack users to back > >> up their volumes to private NFS servers, providing the NFS hostpath at > >> runtime via API/CLI/Horizon, while creating the volume backup. > >> > > > > What about the username and password? > > Hi Gorka, > > thanks for your feedback. > > Our prototype doesn't support authentication using username/password, > since this is a feature that NFS doesn't provide built-in support. > > > Can backups be restored from a remote location as well? > > Yes, if the location is the one where the backup was originally saved > (same NFS hostpath), as the backup location is stored on Cinder backups > table during the backup creation. It doesn't support restoring the > backup from an arbitrary remote NFS server. > > > > > This sounds like a very cool feature, but I'm not too comfortable with > > having it in Cinder. > > > > The idea is that Cinder provides an abstraction and doesn't let users > > know about implementation details. > > > > With that feature as it is a user could request a backup to an off-site > > location that could result in congestion in one of the outbound > > connections. > > I think this is a very good point, that we weren't taking into > consideration in our prototype. > > > > > I can only think of this being acceptable for admin users, and in that > > case I think it would be best to use the multi-backup destination > > feature instead. > > > > After all, how many times do we have to backup to a different location? > > Maybe I'm missing a use case. > > Our clients have privacy and security concerns with the same NFS server > being shared by OpenStack tenants to store volume backups, so they > required cinder-backup to be able to back up volumes to private NFS servers. > > > > > If the community thinks this as a desired feature I would encourage > > adding it with a policy that disables it by default. 
> > > > > >> Considering that cinder was configured to use the multi-backend backup > >> driver, this is how it works: > >> > >> During a volume backup operation, the user provides a "location" > >> parameter to indicate which backend will be used, and the backup > >> hostpath, if applicable (for NFS driver), to create the volume backup. > >> For instance: > >> > >> - Creating a backup using Ceph backend: > >> $ openstack volume backup create --name --location > >> ceph > >> > >> - Creating a backup using the improved NFS backend: > >> $ openstack volume backup create --name --location > >> nfs://my.nfs.server:/backups > >> > >> If the user chooses Ceph backend, the Ceph driver will be used to > >> create the backup. If the user chooses the NFS backend, the improved NFS > >> driver, previously mentioned, will be used to create the backup. > >> > >> The backup location, if provided, is stored on Cinder database, and > >> can be seen fetching the backup details: > >> $ openstack volume backup show > >> > >> Briefly, this is how the features were implemented: > >> > >> - Cinder API was updated to add an optional location parameter to > >> "create backup" method. Horizon, and OpenStack and Cinder CLIs were > >> updated accordingly, to handle the new parameter. > >> - Cinder backup controller was updated to handle the backup location > >> parameter, and a validator for the parameter was implemented using the > >> oslo config library. > >> - Cinder backup object model was updated to add a nullable location > >> property, so that the backup location could be stored on cinder database. > >> - a new backup driver base class, that extends BackupDriver and > >> accepts a backup context object, was implemented to handle the backup > >> configuration provided at runtime by the user. This new backup base > >> class requires that the concrete drivers implement a method to validate > >> the backup context (similar to BackupDriver.check_for_setup_error) > >> - the 2 new backup drivers, previously mentioned, were implemented > >> using these new backup base class. > >> - in BackupManager class, the "service" attribute, that on upstream > >> OpenStack holds the backup driver class name, was re-implemented as a > >> factory function that accepts a backup context object and return an > >> instance of a backup driver, according to the backup driver configured > >> on cinder.conf file and the backup context provided at runtime by the user. > >> - All the backup operations continue working as usual. > >> > > > > When this feature was discussed upstream we liked the idea of > > implementing this like we do multi-backends for the volume service, > > adding backup-types. > > I found this approved spec [1] (that, I believe, is product of the work > done by eOne that you mentioned before), but I couldn't find any work > items in progress related to it. > Do you know the current status of this spec? Is it ready to be > implemented or is there some more work to be done until there? If we > decide to work on its implementation, would be required to review, and > possibly update, the spec for the current development cycle? > > [1] > https://specs.openstack.org/openstack/cinder-specs/specs/victoria/backup-backends-configuration.html > Hi, I think all that would need to be done regarding the spec is to submit a patch to move it to the current release directory and fix the formatting issue of the tables from the "Data model impact" section. 
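
Roughly something like this, assuming the usual cinder-specs layout (the paths below
are only illustrative, adjust the target directory to whatever the current cycle is):

  $ git clone https://opendev.org/openstack/cinder-specs
  $ cd cinder-specs
  $ git mv specs/victoria/backup-backends-configuration.rst specs/yoga/
  # fix the table markup under "Data model impact", then
  $ git commit -a -m "Re-propose backup backends configuration spec for Yoga"
  $ git review
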
You'll be able to leverage Ivan's work [1] when implementing the multi-backup feature. Cheers, Gorka. [1]: https://review.opendev.org/c/openstack/cinder/+/630305 > > > > > In latest code backup creation operations have been modified to go > > through the scheduler, so that's a piece that is already implemented. > > > > > >> Could you please let us know your thoughts about these features and if > >> you would be open to adding them to upstream Cinder? If yes, we would be > >> willing to submit the specs and work on the upstream implementation, if > >> they are approved. > >> > >> Regards, > >> Daniel Pereira > >> > > > > I believe you will have the full community's support on the first idea > > (though probably not on the proposed implementation). > > > > I'm not so sure on the second one, iti will most likely depend on the > > use cases. Many times the reasons why features are dismissed upstream > > is because there are no clear use cases that justify the addition of the > > code. > > > > Looking forward to continuing this conversation at the PTG, IRC, in a > > spec, or through here. > > > > Cheers, > > Gorka. > > > From smooney at redhat.com Mon Oct 4 12:22:59 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 4 Oct 2021 13:22:59 +0100 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com> <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org> Message-ID: On Wed, Sep 29, 2021 at 9:41 PM Michael Johnson wrote: > > I would like to for Designate. Assuming the eventlet issues get resolved. > > There is at least one bug in 1.16 that has been resolved on the 2.x > chain and some of the new features set us up for new security related > features. i belive that the latest release of eventlets is now compatiable with dnspython 2.x https://eventlet.net/doc/changelog.html#id1 so yes i think we should be movign to eventlet 0.32.0+ and dnspython 2.x > > Michael > > On Wed, Sep 29, 2021 at 1:19 PM Corey Bryant wrote: > > > > > > > > On Fri, Sep 10, 2021 at 2:11 PM Corey Bryant wrote: > >> > >> > >> > >> On Wed, Sep 30, 2020 at 11:53 AM Sean Mooney wrote: > >> > >>> > >>> we do not know if there are other failrue > >>> neutron has a spereate issue which was tracked by https://github.com/eventlet/eventlet/issues/619 > >>> and nova hit the ssl issue with websockify and eventlets tracked by https://github.com/eventlet/eventlet/issues/632 > >>> > >>> so the issue is really eventlets is not compatiabley with dnspython 2.0 > >>> so before openstack can uncap dnspython eventlets need to gain support for dnspython 2.0 > >>> that should hopefully resolve the issues that nova, neutron and other projects are now hitting. > >>> > >>> it is unlikely that this is something we can resolve in openstack alone, not unless we are willing to monkeyptych > >>> eventlets and other dependcies so really we need to work with eventlets and or dnspython to resolve the incompatiablity > >>> caused by the dnspython changes in 2.0 > >> > >> > >> It looks like there's been some progress on eventlet supporting dnspython 2.0: https://github.com/eventlet/eventlet/commit/aeb0390094a1c3f29bb4f25a8dab96587a86b3e8 > > > > > > Does anyone know if there are plans to (attempt to) move to dnspython 2.0 in yoga? 
> > > > Thanks, > > Corey > From jean-francois.taltavull at elca.ch Mon Oct 4 13:03:08 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Mon, 4 Oct 2021 13:03:08 +0000 Subject: [OpenStack-Ansible] LXC containers apt upgrade Message-ID: <2cce6f95893340dcba81c88e278213b8@elca.ch> Hi All, Following the recent Let's Encrypt certificates expiration, I was wondering what was the best policy to apt upgrade the operating system used by LXC containers running on controller nodes. Has anyone ever defined such a policy ? Is there an OSA tool to do this ? Regards, Jean-Fran?ois From amy at demarco.com Mon Oct 4 13:06:18 2021 From: amy at demarco.com (Amy Marrich) Date: Mon, 4 Oct 2021 08:06:18 -0500 Subject: Diversity and Inclusion Meeting Reminder - Tooday Message-ID: The Diversity & Inclusion WG invites members of all OIF projects to attend our next meeting Monday October 4th, at 17:00 UTC in the #openinfra- diversity channel on OFTC. The agenda can be found at https://etherpad.openstack.org/p/diversity-wg-agenda. Please feel free to add any topics you wish to discuss at the meeting. Thanks, Amy (apotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From bence.romsics at gmail.com Mon Oct 4 13:45:55 2021 From: bence.romsics at gmail.com (Bence Romsics) Date: Mon, 4 Oct 2021 15:45:55 +0200 Subject: [neutron] bug deputy report of week 2021-09-27 Message-ID: Hi Neutrinos, Here comes last week's report: Unassigned: * https://bugs.launchpad.net/neutron/+bug/1945283 test_overlapping_sec_grp_rules from neutron_tempest_plugin.scenario is failing intermittently * https://bugs.launchpad.net/neutron/+bug/1945306 [dvr+l3ha] north-south traffic not working when VM and main router are not on the same host Medium: * https://bugs.launchpad.net/neutron/+bug/1945512 [HA] HA router first transition to master should not wait fix proposed by ralonsoh: https://review.opendev.org/c/openstack/neutron/+/811751 * https://bugs.launchpad.net/neutron/+bug/1945651 [ovn] Updating binding profile through CLI doesn't work fix proposed by dalvarez and slaweq: https://review.opendev.org/c/openstack/neutron/+/811971 Low: * https://bugs.launchpad.net/neutron/+bug/1945954 [os-ken] Missing subclass for SUBTYPE_RIB_*_MULTICAST in mrtlib fix proposed by ralonsoh: https://review.opendev.org/c/openstack/os-ken/+/812293 Duplicate: * https://bugs.launchpad.net/neutron/+bug/1945747 GET security group rule is missing description attribute fixed on master, but not yet backported to ussuri where it was reported Still being triaged: * https://bugs.launchpad.net/neutron/+bug/1945560 Neutron-metering doesnt get "bandwidth" metric Cheers, Bence (rubasov) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Oct 4 14:20:19 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 4 Oct 2021 16:20:19 +0200 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com> <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org> Message-ID: <266d8064-6b04-d951-4318-96412f7351a8@debian.org> On 10/4/21 2:22 PM, Sean Mooney wrote: > On Wed, Sep 29, 2021 at 9:41 PM Michael Johnson wrote: >> >> I would like to for Designate. Assuming the eventlet issues get resolved. >> >> There is at least one bug in 1.16 that has been resolved on the 2.x >> chain and some of the new features set us up for new security related >> features. 
> i belive that the latest release of eventlets is now compatiable with > dnspython 2.x > https://eventlet.net/doc/changelog.html#id1 > > so yes i think we should be movign to eventlet 0.32.0+ and dnspython 2.x FYI, in Debian, we have backported patches to the Eventlet version for Victoria, Wallaby and Xena. I didn't have much time to test that yet though. Cheers, Thomas Goirand (zigo) From ralonsoh at redhat.com Mon Oct 4 15:51:08 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 4 Oct 2021 17:51:08 +0200 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: <266d8064-6b04-d951-4318-96412f7351a8@debian.org> References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com> <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org> <266d8064-6b04-d951-4318-96412f7351a8@debian.org> Message-ID: Hello: We are bumping both libraries in https://review.opendev.org/c/openstack/requirements/+/811555/6/upper-constraints.txt (still under review). Regards On Mon, Oct 4, 2021 at 4:34 PM Thomas Goirand wrote: > On 10/4/21 2:22 PM, Sean Mooney wrote: > > On Wed, Sep 29, 2021 at 9:41 PM Michael Johnson > wrote: > >> > >> I would like to for Designate. Assuming the eventlet issues get > resolved. > >> > >> There is at least one bug in 1.16 that has been resolved on the 2.x > >> chain and some of the new features set us up for new security related > >> features. > > i belive that the latest release of eventlets is now compatiable with > > dnspython 2.x > > https://eventlet.net/doc/changelog.html#id1 > > > > so yes i think we should be movign to eventlet 0.32.0+ and dnspython 2.x > > FYI, in Debian, we have backported patches to the Eventlet version for > Victoria, Wallaby and Xena. I didn't have much time to test that yet > though. > > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Oct 4 16:09:24 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 4 Oct 2021 18:09:24 +0200 Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope Message-ID: Hi, On our last meeting of the oslo team we discussed the problem with broken stable branches (rocky and older) in oslo's projects [1]. Indeed, almost all these branches are broken. El?d Ill?s kindly generated a list of periodic-stable errors on Oslo's stable branches [2]. Given the lack of active maintainers on Oslo and given the current status of the CI in those branches, I propose to make them End Of Life. I will wait until the end of month for anyone who would like to maybe step up as maintainer of those branches and who would at least try to fix CI of them. If no one will volunteer for that, I'll EOLing those branches for all the projects under the oslo umbrella. Let us know your thoughts. Thank you for your attention. [1] https://meetings.opendev.org/meetings/oslo/2021/oslo.2021-10-04-15.00.log.txt [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023939.html -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From corey.bryant at canonical.com Mon Oct 4 16:46:01 2021 From: corey.bryant at canonical.com (Corey Bryant) Date: Mon, 4 Oct 2021 12:46:01 -0400 Subject: Help with eventlet 0.26.1 and dnspython >= 2 In-Reply-To: References: <56ab7d1e-c8d6-8505-dd81-fb4a20534fdf@debian.org> <5489b743e2ee0052500a961aa99c3aa613e81caa.camel@redhat.com> <4a27c961-354e-270c-5dc7-789a5770fe5c@debian.org> <266d8064-6b04-d951-4318-96412f7351a8@debian.org> Message-ID: Great to see! Thanks for sharing. Corey On Mon, Oct 4, 2021 at 11:51 AM Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > Hello: > > We are bumping both libraries in > https://review.opendev.org/c/openstack/requirements/+/811555/6/upper-constraints.txt > (still under review). > > Regards > > On Mon, Oct 4, 2021 at 4:34 PM Thomas Goirand wrote: > >> On 10/4/21 2:22 PM, Sean Mooney wrote: >> > On Wed, Sep 29, 2021 at 9:41 PM Michael Johnson >> wrote: >> >> >> >> I would like to for Designate. Assuming the eventlet issues get >> resolved. >> >> >> >> There is at least one bug in 1.16 that has been resolved on the 2.x >> >> chain and some of the new features set us up for new security related >> >> features. >> > i belive that the latest release of eventlets is now compatiable with >> > dnspython 2.x >> > https://eventlet.net/doc/changelog.html#id1 >> > >> > so yes i think we should be movign to eventlet 0.32.0+ and dnspython 2.x >> >> FYI, in Debian, we have backported patches to the Eventlet version for >> Victoria, Wallaby and Xena. I didn't have much time to test that yet >> though. >> >> Cheers, >> >> Thomas Goirand (zigo) >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Mon Oct 4 17:28:02 2021 From: neil at tigera.io (Neil Jerram) Date: Mon, 4 Oct 2021 18:28:02 +0100 Subject: [stable][requirements][zuul] unpinned setuptools dependency on stable In-Reply-To: References: <6J4UZQ.VOBD0LVDTPUX1@est.tech> <827e99c6-99b2-54c8-a627-5153e3b84e6b@est.tech> Message-ID: Is anyone helping to progress this? I just checked that stable/ussuri devstack is still broken. Best wishes, Neil On Tue, Sep 28, 2021 at 9:20 AM Neil Jerram wrote: > But I don't think that solution works for devstack, does it? Is there a > way to pin setuptools in a stable/ussuri devstack run, except by changing > the stable branch of the requirements project? > > > On Mon, Sep 27, 2021 at 7:50 PM El?d Ill?s wrote: > >> Hi again, >> >> as I see there is no objection yet about using gibi's solution [1] (as I >> already summarized the situation in my previous mail [2]) for a fix for >> similar cases, so with a general stable core hat on, I *suggest* >> everyone to use that solution to pin the setuptools in tox for every >> failing cases (so that to avoid similar future errors as well). >> >> [1] https://review.opendev.org/810461 >> [2] >> >> http://lists.openstack.org/pipermail/openstack-discuss/2021-September/025059.html >> >> El?d >> >> >> On 2021. 09. 27. 14:47, Balazs Gibizer wrote: >> > >> > >> > On Fri, Sep 24 2021 at 10:21:33 PM +0200, Thomas Goirand >> > wrote: >> >> Hi Gibi! >> >> >> >> Thanks for bringing this up. >> >> >> >> As a distro package maintainer, here's my view. >> >> >> >> On 9/22/21 2:11 PM, Balazs Gibizer wrote: >> >>> Option 1: Bump the major version of the decorator dependency on >> >>> stable. >> >> >> >> Decorator 4.0.11 is even in Debian Stretch (currently oldoldstable), >> for >> >> which I don't even maintain OpenStack anymore (that's OpenStack >> >> Newton...). 
So I don't see how switching to decorator 4.0.0 is a >> >> problem, and I don't understand how OpenStack could be using 3.4.0 >> which >> >> is in Jessie (ie: 6 years old Debian release). >> >> >> >> PyPi says Decorator 3.4.0 is from 2012: >> >> https://pypi.org/project/decorator/#history >> >> >> >> Do you have your release numbers correct? If so, then switching to >> >> Decorator 4.4.2 (available in Debian Bullseye (shipped with Victoria) >> >> and Ubuntu >=Focal) looks like reasonable to me... Sticking with 3.4.0 >> >> feels a bit crazy (and I wasn't aware of it). >> > >> > Thanks for the info. So from Debian perspective it is OK to bump the >> > decorator version on stable. As others noted in this thread it seems >> > to be more than just decorator that broke. :/ >> > >> >> >> >>> Option 2: Pin the setuptools version during tox installation >> >> >> >> Please don't do this for the master branch, we need OpenStack to stay >> >> current with setuptools (yeah, even if this means breaking changes...). >> > >> > I've no intention to pin it on master. Master needs to work with the >> > latest and greatest. Also on master it is easier to fix / replace the >> > dependencies that become broken with new setuptools. >> > >> >> >> >> For already released OpenStack: I don't mind much if this is done (I >> >> could backport fixes if something breaks). >> > >> > ack >> > >> >> >> >>> Option 3: turn off lower-constraints testing >> >> >> >> I already expressed myself about this: this is dangerous as distros >> rely >> >> on it for setting lower bounds as low as possible (which is always >> >> preferred from a distro point of view). >> >> >> >>> Option 4: utilize pyproject.toml[6] to specify build-time >> requirements >> >> >> >> I don't know about pyproject.toml. >> >> >> >> Just my 2 cents, hoping it's useful, >> > >> > Thanks! >> > >> > Cheers, >> > gibi >> > >> >> Cheers, >> >> >> >> Thomas Goirand (zigo) >> >> >> > >> > >> > >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Mon Oct 4 18:16:52 2021 From: neil at tigera.io (Neil Jerram) Date: Mon, 4 Oct 2021 19:16:52 +0100 Subject: [stable][requirements][zuul] unpinned setuptools dependency on stable In-Reply-To: References: <6J4UZQ.VOBD0LVDTPUX1@est.tech> <827e99c6-99b2-54c8-a627-5153e3b84e6b@est.tech> Message-ID: I can now confirm that https://review.opendev.org/c/openstack/requirements/+/810859 fixes my CI use case. (By temporarily using a fork of the requirements repo that includes that change.) (Fix detail if needed here: https://github.com/projectcalico/networking-calico/pull/64/commits/cbed6282405957f7d60b6e0790c91fb852afe84c ) Best wishes. Neil On Mon, Oct 4, 2021 at 6:28 PM Neil Jerram wrote: > Is anyone helping to progress this? I just checked that stable/ussuri > devstack is still broken. > > Best wishes, > Neil > > > On Tue, Sep 28, 2021 at 9:20 AM Neil Jerram wrote: > >> But I don't think that solution works for devstack, does it? Is there a >> way to pin setuptools in a stable/ussuri devstack run, except by changing >> the stable branch of the requirements project? 
>> >> >> On Mon, Sep 27, 2021 at 7:50 PM El?d Ill?s wrote: >> >>> Hi again, >>> >>> as I see there is no objection yet about using gibi's solution [1] (as I >>> already summarized the situation in my previous mail [2]) for a fix for >>> similar cases, so with a general stable core hat on, I *suggest* >>> everyone to use that solution to pin the setuptools in tox for every >>> failing cases (so that to avoid similar future errors as well). >>> >>> [1] https://review.opendev.org/810461 >>> [2] >>> >>> http://lists.openstack.org/pipermail/openstack-discuss/2021-September/025059.html >>> >>> El?d >>> >>> >>> On 2021. 09. 27. 14:47, Balazs Gibizer wrote: >>> > >>> > >>> > On Fri, Sep 24 2021 at 10:21:33 PM +0200, Thomas Goirand >>> > wrote: >>> >> Hi Gibi! >>> >> >>> >> Thanks for bringing this up. >>> >> >>> >> As a distro package maintainer, here's my view. >>> >> >>> >> On 9/22/21 2:11 PM, Balazs Gibizer wrote: >>> >>> Option 1: Bump the major version of the decorator dependency on >>> >>> stable. >>> >> >>> >> Decorator 4.0.11 is even in Debian Stretch (currently oldoldstable), >>> for >>> >> which I don't even maintain OpenStack anymore (that's OpenStack >>> >> Newton...). So I don't see how switching to decorator 4.0.0 is a >>> >> problem, and I don't understand how OpenStack could be using 3.4.0 >>> which >>> >> is in Jessie (ie: 6 years old Debian release). >>> >> >>> >> PyPi says Decorator 3.4.0 is from 2012: >>> >> https://pypi.org/project/decorator/#history >>> >> >>> >> Do you have your release numbers correct? If so, then switching to >>> >> Decorator 4.4.2 (available in Debian Bullseye (shipped with Victoria) >>> >> and Ubuntu >=Focal) looks like reasonable to me... Sticking with 3.4.0 >>> >> feels a bit crazy (and I wasn't aware of it). >>> > >>> > Thanks for the info. So from Debian perspective it is OK to bump the >>> > decorator version on stable. As others noted in this thread it seems >>> > to be more than just decorator that broke. :/ >>> > >>> >> >>> >>> Option 2: Pin the setuptools version during tox installation >>> >> >>> >> Please don't do this for the master branch, we need OpenStack to stay >>> >> current with setuptools (yeah, even if this means breaking >>> changes...). >>> > >>> > I've no intention to pin it on master. Master needs to work with the >>> > latest and greatest. Also on master it is easier to fix / replace the >>> > dependencies that become broken with new setuptools. >>> > >>> >> >>> >> For already released OpenStack: I don't mind much if this is done (I >>> >> could backport fixes if something breaks). >>> > >>> > ack >>> > >>> >> >>> >>> Option 3: turn off lower-constraints testing >>> >> >>> >> I already expressed myself about this: this is dangerous as distros >>> rely >>> >> on it for setting lower bounds as low as possible (which is always >>> >> preferred from a distro point of view). >>> >> >>> >>> Option 4: utilize pyproject.toml[6] to specify build-time >>> requirements >>> >> >>> >> I don't know about pyproject.toml. >>> >> >>> >> Just my 2 cents, hoping it's useful, >>> > >>> > Thanks! >>> > >>> > Cheers, >>> > gibi >>> > >>> >> Cheers, >>> >> >>> >> Thomas Goirand (zigo) >>> >> >>> > >>> > >>> > >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From feilong at catalyst.net.nz Mon Oct 4 18:46:29 2021 From: feilong at catalyst.net.nz (feilong) Date: Tue, 5 Oct 2021 07:46:29 +1300 Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope In-Reply-To: References: Message-ID: Hi Herve, Please correct me, does that mean we have to also EOL stable/queens and stable/rocky for most of the other projects technically? Or it should be OK? Thanks. On 5/10/21 5:09 am, Herve Beraud wrote: > Hi, > > On our last meeting of the oslo team we discussed the problem with > broken stable > branches (rocky and older) in oslo's projects [1]. > > Indeed, almost all these branches are broken. El?d Ill?s kindly > generated a list of periodic-stable errors on Oslo's stable branches [2]. > > Given the lack of active maintainers on Oslo and given the current > status of the CI in those branches, I propose to make them End Of Life. > > I will wait until the end of month for anyone who would like to maybe > step up > as maintainer of those branches and who would at least try to fix CI > of them. > > If no one will volunteer for that, I'll EOLing those branches for all > the projects under the oslo umbrella. > > Let us know your thoughts. > > Thank you for your attention. > > [1] > https://meetings.opendev.org/meetings/oslo/2021/oslo.2021-10-04-15.00.log.txt > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023939.html > > -- > Herv? Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -- Cheers & Best regards, ------------------------------------------------------------------------------ Feilong Wang (???) (he/him) Head of Research & Development Catalyst Cloud Aotearoa's own Mob: +64 21 0832 6348 | www.catalystcloud.nz Level 6, 150 Willis Street, Wellington 6011, New Zealand CONFIDENTIALITY NOTICE: This email is intended for the named recipients only. It may contain privileged, confidential or copyright information. If you are not the named recipient, any use, reliance upon, disclosure or copying of this email or its attachments is unauthorised. If you have received this email in error, please reply via email or call +64 21 0832 6348. ------------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Oct 4 19:00:18 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 04 Oct 2021 21:00:18 +0200 Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope In-Reply-To: References: Message-ID: <3055264.zr5fvq113q@p1> Hi, On poniedzia?ek, 4 pa?dziernika 2021 20:46:29 CEST feilong wrote: > Hi Herve, > > Please correct me, does that mean we have to also EOL stable/queens and > stable/rocky for most of the other projects technically? Or it should be > OK? Thanks. I don't think we have to. I think it's not that common that we are using new versions of oslo libs in those stable branches so IMHO if all works fine for some project and it has maintainers, it still can be in EM phase. Or is my understanding wrong here? > > On 5/10/21 5:09 am, Herve Beraud wrote: > > Hi, > > > > On our last meeting of the oslo team we discussed the problem with > > broken stable > > branches (rocky and older) in oslo's projects [1]. > > > > Indeed, almost all these branches are broken. El?d Ill?s kindly > > generated a list of periodic-stable errors on Oslo's stable branches [2]. 
> > > > Given the lack of active maintainers on Oslo and given the current > > status of the CI in those branches, I propose to make them End Of Life. > > > > I will wait until the end of month for anyone who would like to maybe > > step up > > as maintainer of those branches and who would at least try to fix CI > > of them. > > > > If no one will volunteer for that, I'll EOLing those branches for all > > the projects under the oslo umbrella. > > > > Let us know your thoughts. > > > > Thank you for your attention. > > > > [1] > > https://meetings.opendev.org/meetings/oslo/2021/oslo. 2021-10-04-15.00.log.tx > > t > > [2] > > http://lists.openstack.org/pipermail/openstack-discuss/2021-July/ 023939.html -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From openstack at nemebean.com Mon Oct 4 20:59:23 2021 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 4 Oct 2021 15:59:23 -0500 Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope In-Reply-To: <3055264.zr5fvq113q@p1> References: <3055264.zr5fvq113q@p1> Message-ID: <25b21881-bd0b-f763-9bb5-a66340108455@nemebean.com> On 10/4/21 2:00 PM, Slawek Kaplonski wrote: > Hi, > > On poniedzia?ek, 4 pa?dziernika 2021 20:46:29 CEST feilong wrote: >> Hi Herve, >> >> Please correct me, does that mean we have to also EOL stable/queens and >> stable/rocky for most of the other projects technically? Or it should be >> OK? Thanks. > > I don't think we have to. I think it's not that common that we are using new > versions of oslo libs in those stable branches so IMHO if all works fine for > some project and it has maintainers, it still can be in EM phase. > Or is my understanding wrong here? The Oslo libs released for those versions will continue to work, so you're right that it wouldn't be necessary to EOL all of the consumers of Oslo. The danger would be if a critical bug were found in one of those old releases and a fix needed to be released. However, at this point the likelihood of finding such a serious bug seems pretty low, and in some cases it may be possible to use a newer Oslo release with an older service. > >> >> On 5/10/21 5:09 am, Herve Beraud wrote: >>> Hi, >>> >>> On our last meeting of the oslo team we discussed the problem with >>> broken stable >>> branches (rocky and older) in oslo's projects [1]. >>> >>> Indeed, almost all these branches are broken. El?d Ill?s kindly >>> generated a list of periodic-stable errors on Oslo's stable branches [2]. >>> >>> Given the lack of active maintainers on Oslo and given the current >>> status of the CI in those branches, I propose to make them End Of Life. >>> >>> I will wait until the end of month for anyone who would like to maybe >>> step up >>> as maintainer of those branches and who would at least try to fix CI >>> of them. >>> >>> If no one will volunteer for that, I'll EOLing those branches for all >>> the projects under the oslo umbrella. >>> >>> Let us know your thoughts. >>> >>> Thank you for your attention. >>> >>> [1] >>> https://meetings.opendev.org/meetings/oslo/2021/oslo. 
> 2021-10-04-15.00.log.tx >>> t >>> [2] >>> http://lists.openstack.org/pipermail/openstack-discuss/2021-July/ > 023939.html > > From rafaelweingartner at gmail.com Mon Oct 4 21:27:52 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 4 Oct 2021 18:27:52 -0300 Subject: [CloudKitty] Virtual PTG October 2021 Message-ID: Hello everyone, As you probably heard our next PTG will be held virtually in October. I've marked October 18, at 14:00-17:00 UTC [1]. We already have a CloudKitty meeting organized for this day. Furthermore, I opened an Etherpad [2] to organize the topics of the meeting. Suggestions are welcome! [1] https://ethercalc.openstack.org/8tum5yl1bx43 [2] https://etherpad.opendev.org/p/cloudkitty-ptg-yoga -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Mon Oct 4 22:13:52 2021 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 5 Oct 2021 09:13:52 +1100 Subject: [kolla] skipping base if already exist In-Reply-To: References: <9217E6F1-22AC-4579-A920-5EFA7DCCAE56@gmail.com> Message-ID: > On 4 Oct 2021, at 6:18 pm, Rados?aw Piliszek wrote: > > On Mon, 4 Oct 2021 at 07:24, Sam Morrison > wrote: >> >> Hi, >> >> We?ve started to use kolla to build container images and trying to figure out if I?m doing it wrong it it?s just not how kolla works. >> >> What I?m trying to do it not rebuild the base and openstack-base images when we build an image for a project. >> >> Example. >> >> We build a horizon image and it builds and pushes up to our registry the following >> >> kolla/ubuntu-source-base >> kolla/ubuntu-source-openstack-base >> kolla/ubuntu-source-horizon >> >> >> Now I can rebuild this without having to again build the base images with >> >> ?skip parents >> >> >> But now I want to build a barbican image and I can?t use skip-parents as the barbican image also requires barbican-base. Which means I need to go and rebuild the ubuntu base and Openstack base images again. >> >> Is there a way to essentially skip parents but only if they don?t exist in the registry already? Or make skip-parents only mean skip base and Openstack-base? > > You might then be interested in --skip-existing Ha, how did I miss that! Thanks, using that and a combination of pre pulling the base images from the registry before building has got what I wanted. Thanks, Sam > > -yoctozepto > >> >> Thanks in advance, >> Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Oct 4 22:43:37 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 Oct 2021 17:43:37 -0500 Subject: [all][tc] Technical Committee next weekly meeting on Oct 7th at 1500 UTC Message-ID: <17c4d7a0f6d.1124813d4556887.8685917442078267933@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for Oct 7th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, Oct 6th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From stendulker at gmail.com Tue Oct 5 07:22:44 2021 From: stendulker at gmail.com (Shivanand Tendulker) Date: Tue, 5 Oct 2021 12:52:44 +0530 Subject: [ironic][molteniron][qa] Anyone still using MoltenIron? In-Reply-To: References: Message-ID: Hello Julia MoltenIron is used in HPE Ironic 3rd Party CI to reserve the nodes. 
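For anyone who has not seen the tool, the reservation flow is driven by the molteniron command line client, roughly like this (a sketch only: the subcommand names and arguments should be double-checked against the openstack/molteniron repository, and the owner name used here is just a placeholder):

# reserve one bare metal node for a CI run, then hand it back when the job finishes
molteniron allocate hpe-ironic-ci 1
molteniron release hpe-ironic-ci

The client just asks the MoltenIron service which nodes are currently free, so each third-party CI job can pick up an idle node and return it afterwards.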
Thanks and Regards Shiv On Fri, Oct 1, 2021 at 12:01 AM Julia Kreger wrote: > I could have sworn that zuul had support to be basic > selection/checkout of a resource instead of calling out to something > else. > > Oh well! Good to know. Thanks Eric! > > On Thu, Sep 30, 2021 at 11:23 AM Barrera, Eric > wrote: > > > > Hi Julia, > > > > Yea, the Zuul based Third Party CI I'm building uses Molten Iron to > manage bare metal. I believe other Ironic 3rd Party CI projects are also > using it. > > > > Though, I don't see it as an absolute necessity. > > > > > > Regards, > > Eric > > > > > > > > Internal Use - Confidential > > > > -----Original Message----- > > From: Julia Kreger > > Sent: Thursday, September 30, 2021 11:25 AM > > To: openstack-discuss > > Subject: [ironic][molteniron][qa] Anyone still using MoltenIron? > > > > > > [EXTERNAL EMAIL] > > > > Out of curiosity, is anyone still using MotltenIron? > > > > A little historical context: It was originally tooling that came out of > IBM to reserve physical nodes in a CI cluster in order to perform testing. > It was never intended to be released as it was tooling > > *purely* for CI job usage. > > > > The reason I'm asking is that the ironic team is considering going ahead > and retiring the repository. > > > > Thanks! > > > > -Julia > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Tue Oct 5 08:14:16 2021 From: feilong at catalyst.net.nz (feilong) Date: Tue, 5 Oct 2021 21:14:16 +1300 Subject: [Magnum] Virtual PTG October 2021 Message-ID: <085b4407-de12-957d-9bee-d5ba686b0194@catalyst.net.nz> Hello team, Our Yoga PTG will be held virtually in October. We have booked Oct 18, at 22:00-00:00 UTC and Oct 20 9:00-11:00UTC [1] for our Yoga PTG. I opened an etherpad [2] to organize the topics of the meetings. Please feel free to add your topics! Thank you. [1] https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf [2] https://etherpad.opendev.org/p/magnum-ptg-yoga -- Cheers & Best regards, ------------------------------------------------------------------------------ Feilong Wang (???) (he/him) Head of Research & Development Catalyst Cloud Aotearoa's own Mob: +64 21 0832 6348 | www.catalystcloud.nz Level 6, 150 Willis Street, Wellington 6011, New Zealand CONFIDENTIALITY NOTICE: This email is intended for the named recipients only. It may contain privileged, confidential or copyright information. If you are not the named recipient, any use, reliance upon, disclosure or copying of this email or its attachments is unauthorised. If you have received this email in error, please reply via email or call +64 21 0832 6348. ------------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdemaced at redhat.com Tue Oct 5 10:05:56 2021 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Tue, 5 Oct 2021 12:05:56 +0200 Subject: [kuryr] Virtual PTG October 2021 In-Reply-To: References: Message-ID: Hello, With the PTG approaching I would like to remind you that the Kuryr sessions will be held on Oct 19 7-8 UTC and Oct 22 13-14 UTC and in case you're interested in discussing any topic with the Kuryr team to include it to the etherpad[1]. [1] https://etherpad.opendev.org/p/kuryr-yoga-ptg See you on the PTG. Thanks, Maysa Macedo. 
On Thu, Jul 22, 2021 at 11:36 AM Maysa De Macedo Souza wrote: > Hello, > > I booked the following slots for Kuryr during the Yoga PTG: Oct 19 7-8 > UTC and Oct 22 13-14 UTC. > If you have any topic ideas you would like to discuss, please include them > in the etherpad[1], > also it would be interesting to include your name there if you plan to > attend any Kuryr session. > > See you on the next PTG. > > Cheers, > Maysa. > > [1] https://etherpad.opendev.org/p/kuryr-yoga-ptg > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Oct 5 13:51:28 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 5 Oct 2021 15:51:28 +0200 Subject: [nova][placement] Asia friendly meeting slot on 7th of Oct Message-ID: Hi, This is a reminder that we will hold our monthly Asian-friendly Nova meeting timeslot on next Thursday 8:00 UTC [1]. Feel free to join us in the #openstack-nova IRC channel on the OFTC server [2] so we could discuss some topics like how to help synchronously or asynchronously contributors that are not in the European and American timezones. If you have problems joining us with IRC, please let me know by replying this email. Thanks, -Sylvain [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=2021-10-07T08:00:00 [2] https://docs.openstack.org/contributors/common/irc.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From sshnaidm at redhat.com Tue Oct 5 14:45:19 2021 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Tue, 5 Oct 2021 17:45:19 +0300 Subject: [tripleo][ansible] Openstack Ansible collections (modules) Yoga PTG Message-ID: Hi, all Openstack Ansible collection (modules) project has its Yoga PTG session on Wed 20 Oct 13.00-14.00 UTC in Cactus room. Please add topics for discussion in the etherpad: https://etherpad.opendev.org/p/osac-yoga-ptg Thanks and see you in PTG! -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Tue Oct 5 14:50:55 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Tue, 5 Oct 2021 16:50:55 +0200 Subject: [baremetal-sig][ironic] Tue Oct 12, 2021, 2pm & 6pm UTC: Ironic User & Operator Feedback (Session 2) Message-ID: <74936887-375d-c754-4d1f-70640fb4dd9c@cern.ch> Dear all, Due to popular demand and since we had to cut things short last month, the Bare Metal SIG has scheduled two meetings next week to continue the user/operator/admin feedback: - Tue Oct 12, 2021, at 2pm UTC (EMEA friendly), and - Tue Oct 12, 2021, at 6pm UTC (AMER friendly) So come along, meet other Ironicers and discuss your Ironic successes, pain points, issues, experiences and ideas with the community and in particular the upstream developers! Everyone, in particular not-yet Ironicers, are welcome to join! All details can be found on: - https://etherpad.opendev.org/p/bare-metal-sig Hope to see you there! Julia & Arne (for the Bare Metal SIG) From paspao at gmail.com Tue Oct 5 14:52:16 2021 From: paspao at gmail.com (P. P.) Date: Tue, 5 Oct 2021 16:52:16 +0200 Subject: [install] Install on OVH dedicated servers In-Reply-To: References: Message-ID: Hello all, I know that OVH uses Openstack to offer their public cloud services. I would like to know if someone was able to use their dedicated servers to build a private cloud based on Openstack. Do you think OVH dedicated server hardware + Vrack can provide sufficient requirements for a production environment? Thank you. P. 
From anyrude10 at gmail.com Tue Oct 5 13:24:35 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Tue, 5 Oct 2021 18:54:35 +0530 Subject: [TripleO] Timeout while introspecting Overcloud Node Message-ID: Hi Team, We were trying to provision Overcloud Nodes using the Tripleo wallaby release. For this, on Undercloud machine (Centos 8.4), we downloaded the ironic-python and overcloud images from the following link: https://images.rdoproject.org/centos8/wallaby/rdo_trunk/current-tripleo/ After untarring, we executed the command *openstack overcloud image upload* This command setted the images at path /var/lib/ironic/images folder successfully. Then we uploaded our instackenv.json file and executed the command *openstack overcloud node introspect --all-manageable* On the overcloud node, we are getting the Timeout error while getting the agent.kernel and agent.ramdisk image. *http://10.0.1.10/8088/agent.kernel......Connection timed out (http://ipxe.org/4c0a6092 )* *http://10.0.1.10/8088/agent.kernel......Connection timed out (http://ipxe.org/4c0a6092 )* However, from another test machine, when I tried *wget http://10.0.1.10/8088/agent.kernel * - It successfully worked Screenshot is attached for the reference Can someone please help in resolving this issue. Regards Anirudh Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Capture.jpg Type: image/jpeg Size: 120096 bytes Desc: not available URL: From urimeba511 at gmail.com Tue Oct 5 15:12:39 2021 From: urimeba511 at gmail.com (Uriel Medina) Date: Tue, 5 Oct 2021 10:12:39 -0500 Subject: [kolla] Neutron-metering is not creating the bandwidth metric Message-ID: Hello everyone. I'm having issues with the Neutron-Metering component inside Kolla-Ansible, and I was hoping that you guys could help me :) The problem is that Neutron Metering doesn't create/get the bandwidth metric. I create a report of this inside the Neutron Launchpad, thinking that maybe the Metering component had troubles: https://bugs.launchpad.net/neutron/+bug/1945560 With the help of Bence, we've discovered that the messages of the creation of metering labels and metering rules were OK inside RabbitMQ. After that, I deployed a new DevStack environment and, with the right configuration, the Neutron Metering is working as it should, only inside the DevStack environment. That made me think that maybe Kolla Ansible had a flag to avoid the modification of iptables and I found the flag "docker_disable_default_iptables_rules" which I set to "no", as I'm using the Wallaby version. Setting this flag didn't do the trick and I was thinking that maybe there is another flag or component of Kolla Ansible that prevents the modification of iptables, apart from "docker_disable_default_iptables_rules". Thanks in advance. Greetings! From feilong at catalyst.net.nz Tue Oct 5 19:16:48 2021 From: feilong at catalyst.net.nz (feilong) Date: Wed, 6 Oct 2021 08:16:48 +1300 Subject: [Magnum] Virtual PTG October 2021 In-Reply-To: <085b4407-de12-957d-9bee-d5ba686b0194@catalyst.net.nz> References: <085b4407-de12-957d-9bee-d5ba686b0194@catalyst.net.nz> Message-ID: Update links to the correct locations. Sorry for the confusion. 
[1] https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf [2] https://etherpad.opendev.org/p/magnum-ptg-yoga On 5/10/21 9:14 pm, feilong wrote: > > Hello team, > > Our Yoga PTG will be held virtually in October. We have booked Oct 18, > at 22:00-00:00 UTC and Oct 20 9:00-11:00UTC [1] for our Yoga PTG. I > opened an etherpad [2] to organize the topics of the meetings. Please > feel free to add your topics! Thank you. > > > [1] > https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf > > [2] https://etherpad.opendev.org/p/magnum-ptg-yoga > > > > -- > Cheers & Best regards, > ------------------------------------------------------------------------------ > Feilong Wang (???) (he/him) > Head of Research & Development > > Catalyst Cloud > Aotearoa's own > > Mob: +64 21 0832 6348 | www.catalystcloud.nz > Level 6, 150 Willis Street, Wellington 6011, New Zealand > > CONFIDENTIALITY NOTICE: This email is intended for the named recipients only. > It may contain privileged, confidential or copyright information. If you are > not the named recipient, any use, reliance upon, disclosure or copying of this > email or its attachments is unauthorised. If you have received this email in > error, please reply via email or call +64 21 0832 6348. > ------------------------------------------------------------------------------ -- Cheers & Best regards, ------------------------------------------------------------------------------ Feilong Wang (???) (he/him) Head of Research & Development Catalyst Cloud Aotearoa's own Mob: +64 21 0832 6348 | www.catalystcloud.nz Level 6, 150 Willis Street, Wellington 6011, New Zealand CONFIDENTIALITY NOTICE: This email is intended for the named recipients only. It may contain privileged, confidential or copyright information. If you are not the named recipient, any use, reliance upon, disclosure or copying of this email or its attachments is unauthorised. If you have received this email in error, please reply via email or call +64 21 0832 6348. ------------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Tue Oct 5 20:35:44 2021 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 5 Oct 2021 15:35:44 -0500 Subject: [all][ptg] October 2021 Registration & Schedule Message-ID: Hi everyone! The October 2021 Project Teams Gathering is right around the corner and the official schedule is live! You can download it here [0], or find it on the PTG website [1]. The PTGbot should be up to date by the end of the week [2] to reflect what is in the ethercalc which is now locked! The PTGbot is the during-event website to keep track of what's being discussed and any last-minute schedule changes. It is driven from the discussion in the #openinfra-events IRC channel where the PTGbot listens. Friendly reminder that the IRC network has changed from freenode to OFTC. Also, please don't forget to register[3] because that's how you'll receive event details, passwords, and other relevant information about the PTG. Please let us know if you have any questions! Thanks! 
Ashlee & Kendall (diablo_rojo) [0] Schedule https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf [1] PTG Website www.openstack.org/ptg [2] PTGbot: https://ptgbot.opendev.org/ [3] PTG Registration: https://openinfra-ptg.eventbrite.com From arnaud.morin at gmail.com Tue Oct 5 21:06:56 2021 From: arnaud.morin at gmail.com (Arnaud) Date: Tue, 05 Oct 2021 23:06:56 +0200 Subject: [install] Install on OVH dedicated servers In-Reply-To: References: Message-ID: <6597E673-0547-4D57-9CCD-4BAF632F7246@gmail.com> Hello, That's a hard question. The answer mostly depends on what hardware you will use, how many instances and computes you plan to have, etc. But, there is no reason that prevent you to successfully run an OpenStack infrastructure on OVH servers. As you said, the OVH public cloud offer is based on OpenStack and it works. And even if the hardware used for this offer is different from the one you will find in public catalog, there is no major difference in how they manage the servers (a server is a server ;)). Regards, Arnaud (from ovh / public cloud team) Le 5 octobre 2021 16:52:16 GMT+02:00, "P. P." a ?crit?: >Hello all, > >I know that OVH uses Openstack to offer their public cloud services. > >I would like to know if someone was able to use their dedicated servers to build a private cloud based on Openstack. > >Do you think OVH dedicated server hardware + Vrack can provide sufficient requirements for a production environment? > >Thank you. >P. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akanevsk at redhat.com Tue Oct 5 21:15:22 2021 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Tue, 5 Oct 2021 16:15:22 -0500 Subject: [Swift][Interop] PTG time In-Reply-To: References: <20210902092144.1a44225f@suzdal.zaitcev.lan> Message-ID: Swift team, can we confirm the date and time for a joint meeting between Swift and Interop WG? Will Monday 16:00 or 16:30 UTC work for you? Thanks, Arkady On Thu, Sep 2, 2021 at 10:13 AM Arkady Kanevsky wrote: > Thanks Pete. > > On Thu, Sep 2, 2021 at 9:21 AM Pete Zaitcev wrote: > >> On Fri, 6 Aug 2021 11:32:22 -0500 >> Arkady Kanevsky wrote: >> >> > Interop team would like time on Yoga PTG Monday or Tu between 21-22 UTC >> > to discuss Interop guideline coverage for Swift. >> >> I suspect this fell through the cracks, it's not on Swift PTG Etherpad. >> I'll poke our PTL. The slots you're proposing aren't in conflict with >> existing PTG schedule, so this should work. >> >> -- Pete >> >> > > -- > Arkady Kanevsky, Ph.D. > Phone: 972 707-6456 > Corporate Phone: 919 729-5744 ext. 8176456 > -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed Oct 6 06:37:12 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 6 Oct 2021 12:07:12 +0530 Subject: [glance] Yoga PTG schedule Message-ID: Hello All, Greetings!!! Yoga PTG is around the corner and if you haven't already registered, please do so as soon as possible [1]. I have created an etherpad [2] and also added day wise topics along with timings we are going to discuss. Kindly let me know if you have any concerns with allotted time slots. We also have some slots open on Tuesday, Wednesday and Thursday for unplanned discussions. So please feel free to add your topics if you still haven't added yet. 
As a reminder, these are the time slots for our discussion. Tuesday 19 October 2021 1400 UTC to 1700 UTC Wednesday 20 October 2021 1400 UTC to 1700 UTC Thursday 21 October 2021 1400 UTC to 1700 UTC Friday 22 October 2021 1400 UTC to 1700 UTC NOTE: At the moment we don't have any sessions scheduled on Friday, if there are any last moment request(s)/topic(s) we will discuss them on Friday else we will conclude our PTG on Thursday 21st October. We will be using bluejeans for our discussion, kindly try to use it once before the actual discussion. The meeting URL is mentioned in etherpad [2] and will be the same throughout the PTG. [1] https://www.eventbrite.com/e/project-teams-gathering-october-2021-tickets-161235669227 [2] https://etherpad.opendev.org/p/yoga-glance-ptg Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Wed Oct 6 08:25:09 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Wed, 6 Oct 2021 08:25:09 +0000 Subject: [cyborg] No meeting on 06th October Message-ID: <83fff7ba09b64b3ea9c9d3f016cb7fee@inspur.com> Hi All, All Cyborg Core contributor are in holiday, so we will cancel this weekly meeting (October 6th). According to schedule will meeting directly on October 13th Thanks. brinzhang Inspur Electronic Information Industry Co.,Ltd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Oct 6 09:55:49 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 6 Oct 2021 11:55:49 +0200 Subject: [masakari] Proposal to cancel weekly meetings In-Reply-To: References: Message-ID: I received no negative feedback so I went ahead and proposed a change to cancel this meeting officially. [1] [1] https://review.opendev.org/c/opendev/irc-meetings/+/812650 -yoctozepto On Wed, 29 Sept 2021 at 18:06, Rados?aw Piliszek wrote: > > Dears, > > Due to low attendance and the current schedule being uncomfortable for > me, I propose to cancel the weekly meetings and suggest we coordinate > via this mailing list and do ad-hoc chats on IRC as I'm simply lurking > there most of the time and answering the messages. > > Kind regards, > > -yoctozepto From paspao at gmail.com Wed Oct 6 10:27:04 2021 From: paspao at gmail.com (P. P.) Date: Wed, 6 Oct 2021 12:27:04 +0200 Subject: [install] Install on OVH dedicated servers In-Reply-To: <6597E673-0547-4D57-9CCD-4BAF632F7246@gmail.com> References: <6597E673-0547-4D57-9CCD-4BAF632F7246@gmail.com> Message-ID: <08932F82-3726-49BA-BC52-B0478C90CE03@gmail.com> Hello Arnaud, thanks for your reply. Yes OVH has pretty large dedicated servers too. My main concern is about networking Vrack speed to choose, they offer 1Gbps on middle class Advance servers to 25Gbps on High end Scale servers. And for sure storage nodes will need higher bandwidth than control nodes. Any suggestion on minimal bandwidth requirement per type of node? Thank you. P. > Il giorno 5 ott 2021, alle ore 23:06, Arnaud ha scritto: > > Hello, > > That's a hard question. The answer mostly depends on what hardware you will use, how many instances and computes you plan to have, etc. > > But, there is no reason that prevent you to successfully run an OpenStack infrastructure on OVH servers. > > As you said, the OVH public cloud offer is based on OpenStack and it works. 
> And even if the hardware used for this offer is different from the one you will find in public catalog, there is no major difference in how they manage the servers (a server is a server ;)). > > Regards, > Arnaud (from ovh / public cloud team) > > Le 5 octobre 2021 16:52:16 GMT+02:00, "P. P." a ?crit : > Hello all, > > I know that OVH uses Openstack to offer their public cloud services. > > I would like to know if someone was able to use their dedicated servers to build a private cloud based on Openstack. > > Do you think OVH dedicated server hardware + Vrack can provide sufficient requirements for a production environment? > > Thank you. > P. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at bitswalk.com Wed Oct 6 12:52:53 2021 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Wed, 6 Oct 2021 14:52:53 +0200 Subject: [KEYSTONE][POLICIES] - Overrides that don't work? Message-ID: Hi team, I'm having a weird behavior with my Openstack platform that makes me think I may have misunderstood some mechanisms on the way policies are working and especially the overriding. So, long story short, I've few services that get custom policies such as glance that behave as expected, Keystone's one aren't. All in all, here is what I'm understanding of the mechanism: This is the keystone policy that I'm looking to override: https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/ This policy default can be found in here: https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 Here is the policy that I'm testing: https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/ I know, this policy isn't taking care of the admin role but it's not the point. >From my understanding, any user with the project-manager role should be able to add any available user on any available group as long as the project-manager domain is the same as the target. However, when I'm doing that, keystone complains that I'm not authorized to do so because the user token scope is 'PROJECT' where it should be 'SYSTEM' or 'DOMAIN'. Now, I wouldn't be surprised of that message being thrown out with the default policy as it's stated on the code with the following: https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 So the question is, if the custom policy doesn't override the default scope_types how am I supposed to make it work? I hope it was clear enough, but if not, feel free to ask me for more information. PS: I've tried to assign this role with a domain scope to my user and I've still the same issue. Thanks a lot everyone! -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Oct 6 13:09:19 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 6 Oct 2021 15:09:19 +0200 Subject: [monasca][Release-job-failures] Release of openstack/monasca-agent for ref refs/tags/6.0.0 failed In-Reply-To: References: Message-ID: Hello Monasca team, Please have a look at the job error below. Indeed your publish-monasca-agent-docker-images fail with a cryptography error [1]. Indeed cryptography has been recently upgraded from version 3.4.8 to the version 35.0.0 [2]. This could explain the reason why this job fails to build rust. We think that your used docker image needs some updating to solve this issue. For more details about the experienced issue please have a look at the jobs links below (the forwarded email). 
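The build log itself points at the two most direct ways to unblock the image build; as a rough sketch (where exactly these lines belong in the monasca-agent Dockerfile is an assumption here, it is not visible in the log):

# either upgrade pip inside the image so the prebuilt cryptography wheels can be used...
python3 -m pip install --upgrade pip
# ...or provide a Rust >=1.41 toolchain so cryptography 35.x can be built from source
curl -sSf https://sh.rustup.rs | sh -s -- -y

Alternatively, building against the stable/xena upper-constraints (where cryptography should still be pinned to 3.4.8) instead of the master ones would avoid the new dependency altogether.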
You should also note that the same problem appears with monasca-notification. Thank you for reading. [1] ``` 2021-10-06 11:22:30.035239 | ubuntu-focal | writing manifest file 'src/cryptography.egg-info/SOURCES.txt' 2021-10-06 11:22:30.035260 | ubuntu-focal | copying src/cryptography/py.typed -> build/lib.linux-x86_64-3.6/cryptography 2021-10-06 11:22:30.035282 | ubuntu-focal | creating build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035303 | ubuntu-focal | copying src/cryptography/hazmat/bindings/_rust/__init__.pyi -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035325 | ubuntu-focal | copying src/cryptography/hazmat/bindings/_rust/asn1.pyi -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035358 | ubuntu-focal | copying src/cryptography/hazmat/bindings/_rust/ocsp.pyi -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035381 | ubuntu-focal | copying src/cryptography/hazmat/bindings/_rust/x509.pyi -> build/lib.linux-x86_64-3.6/cryptography/hazmat/bindings/_rust 2021-10-06 11:22:30.035422 | ubuntu-focal | running build_ext 2021-10-06 11:22:30.035446 | ubuntu-focal | generating cffi module 'build/temp.linux-x86_64-3.6/_openssl.c' 2021-10-06 11:22:30.035467 | ubuntu-focal | creating build/temp.linux-x86_64-3.6 2021-10-06 11:22:30.035489 | ubuntu-focal | running build_rust 2021-10-06 11:22:30.035510 | ubuntu-focal | 2021-10-06 11:22:30.035532 | ubuntu-focal | =============================DEBUG ASSISTANCE============================= 2021-10-06 11:22:30.035553 | ubuntu-focal | If you are seeing a compilation error please try the following steps to 2021-10-06 11:22:30.035575 | ubuntu-focal | successfully install cryptography: 2021-10-06 11:22:30.035596 | ubuntu-focal | 1) Upgrade to the latest pip and try again. This will fix errors for most 2021-10-06 11:22:30.035618 | ubuntu-focal | users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip 2021-10-06 11:22:30.035639 | ubuntu-focal | 2) Read https://cryptography.io/en/latest/installation/ for specific 2021-10-06 11:22:30.035660 | ubuntu-focal | instructions for your platform. 2021-10-06 11:22:30.035682 | ubuntu-focal | 3) Check our frequently asked questions for more information: 2021-10-06 11:22:30.035703 | ubuntu-focal | https://cryptography.io/en/latest/faq/ 2021-10-06 11:22:30.035724 | ubuntu-focal | 4) Ensure you have a recent Rust toolchain installed: 2021-10-06 11:22:30.035763 | ubuntu-focal | https://cryptography.io/en/latest/installation/#rust 2021-10-06 11:22:30.035786 | ubuntu-focal | 2021-10-06 11:22:30.035807 | ubuntu-focal | Python: 3.6.8 2021-10-06 11:22:30.035828 | ubuntu-focal | platform: Linux-5.4.0-88-generic-x86_64-with 2021-10-06 11:22:30.035850 | ubuntu-focal | pip: n/a 2021-10-06 11:22:30.035871 | ubuntu-focal | setuptools: 58.2.0 2021-10-06 11:22:30.035893 | ubuntu-focal | setuptools_rust: 0.12.1 2021-10-06 11:22:30.035914 | ubuntu-focal | =============================DEBUG ASSISTANCE============================= 2021-10-06 11:22:30.035936 | ubuntu-focal | 2021-10-06 11:22:30.035959 | ubuntu-focal | error: can't find Rust compiler 2021-10-06 11:22:30.035981 | ubuntu-focal | 2021-10-06 11:22:30.036009 | ubuntu-focal | If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler. 
2021-10-06 11:22:30.036040 | ubuntu-focal | 2021-10-06 11:22:30.036061 | ubuntu-focal | To update pip, run: 2021-10-06 11:22:30.036082 | ubuntu-focal | 2021-10-06 11:22:30.036103 | ubuntu-focal | pip install --upgrade pip 2021-10-06 11:22:30.036124 | ubuntu-focal | 2021-10-06 11:22:30.036145 | ubuntu-focal | and then retry package installation. 2021-10-06 11:22:30.036166 | ubuntu-focal | 2021-10-06 11:22:30.036206 | ubuntu-focal | If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain. 2021-10-06 11:22:30.036237 | ubuntu-focal | 2021-10-06 11:22:30.036259 | ubuntu-focal | This package requires Rust >=1.41.0. 2021-10-06 11:22:30.036280 | ubuntu-focal | ---------------------------------------- 2021-10-06 11:22:30.036305 | ubuntu-focal | [0m [91m ERROR: Failed building wheel for cryptography ``` [2] https://opendev.org/openstack/requirements/commit/1fa22ce584ef8a5f5ec0c0e606e5e0daf38de148 ---------- Forwarded message --------- De : Date: mer. 6 oct. 2021 ? 13:37 Subject: [Release-job-failures] Release of openstack/monasca-agent for ref refs/tags/6.0.0 failed To: Build failed. - openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/1bb52ee7d5e74741b8b5f180ba48061f : SUCCESS in 1m 32s - release-openstack-python https://zuul.opendev.org/t/openstack/build/ecd6b609a8f1440f98f521d018aae2bb : SUCCESS in 5m 39s - announce-release https://zuul.opendev.org/t/openstack/build/6022c2ed55bb411498b5359ed606a3a1 : SUCCESS in 7m 43s - propose-update-constraints https://zuul.opendev.org/t/openstack/build/0f767adb04294e05ab2f54175c38c12e : SUCCESS in 7m 46s - publish-monasca-agent-docker-images https://zuul.opendev.org/t/openstack/build/09e6e50adb8649e8a61083e8fa0cc602 : POST_FAILURE in 13m 19s _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Oct 6 13:44:56 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Oct 2021 13:44:56 +0000 Subject: [monasca][Release-job-failures] Release of openstack/monasca-agent for ref refs/tags/6.0.0 failed In-Reply-To: References: Message-ID: <20211006134456.k2fwb3zr5mriuvt2@yuggoth.org> On 2021-10-06 15:09:19 +0200 (+0200), Herve Beraud wrote: [...] > cryptography has been recently upgraded from version 3.4.8 to the > version 35.0.0 [...] Note that was updated on master, not stable/xena, but the container image build seems to have chosen the master branch constraints. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From adivya1.singh at gmail.com Wed Oct 6 14:01:04 2021 From: adivya1.singh at gmail.com (Adivya Singh) Date: Wed, 6 Oct 2021 19:31:04 +0530 Subject: API used for update Image in Glance Message-ID: Hi Team, Can you please list me , what i have to do different while updating Image in Glance using a API call, Can Some body share the Syntax for the same, Do i need to create a JSON file for the same Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Oct 6 14:12:26 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 6 Oct 2021 11:12:26 -0300 Subject: [cinder] Bug deputy report for week of 10-06-2021 Message-ID: This is a bug report from 09-29-2021-15-09 to 10-06-2021. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/cinder/+bug/1946059 "NFS: revert to snapshot not working". Assigned to Rajat Dhasmana. Medium - https://bugs.launchpad.net/cinder/+bug/1945824 "[Pure Storage] Clone CG from CG snapshot fails in PowerVC". Assigned to Simon Dodsle. - https://bugs.launchpad.net/cinder/+bug/1945571 "C-bak configure more than one worker issue". Unassigned. Low - https://bugs.launchpad.net/cinder/+bug/1946167 "ddt version incompatibility for victoria branch ". Unassigned. Incomplete - https://bugs.launchpad.net/cinder/+bug/1945500 "[stable/wallaby] filter reserved image properties". Unassigned. Cheers, -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed Oct 6 14:19:05 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 6 Oct 2021 19:49:05 +0530 Subject: API used for update Image in Glance In-Reply-To: References: Message-ID: Hello Adviya, If you are using python-glanceclient then just use command; `glance help image-update` and it will list out the helof text for you. You can use those options to update your image information. General syntax is; $ glance image-update If you want to update visibility of image from public to private then; glance image-update --visibility private Thanks & Best Regards, Abhishek Kekane On Wed, Oct 6, 2021 at 7:35 PM Adivya Singh wrote: > Hi Team, > > Can you please list me , what i have to do different while updating Image > in Glance using a API call, Can Some body share the Syntax for the same, Do > i need to create a JSON file for the same > > Regards > Adivya Singh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Wed Oct 6 14:26:00 2021 From: berndbausch at gmail.com (Bernd Bausch) Date: Wed, 6 Oct 2021 23:26:00 +0900 Subject: API used for update Image in Glance In-Reply-To: References: Message-ID: There is an /Update Image /API, documented at https://docs.openstack.org/api-ref/image/v2/index.html?expanded=update-image-detail#update-image. It does require an HTTP request body in JSON format. However, it updates the /image catalog entry, /not the image data. If you want to replace image data, this is not possible, since the image catalog entry contains a checksum that can't be modified. 
Modified image data would not correspond to the checksum anymore (see the second note under https://docs.openstack.org/api-ref/image/v2, which also states "images are immutable"). Bernd Bausch. On 2021/10/06 11:01 PM, Adivya Singh wrote: > Can you please list me , what i have to do different while updating > Image in Glance using a API call, Can Some body share the Syntax for > the same, Do i need to create a JSON file for the same -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Oct 6 14:32:21 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 6 Oct 2021 16:32:21 +0200 Subject: OpenStack Xena is officially released! Message-ID: The official OpenStack Xena release announcement has been sent out: http://lists.openstack.org/pipermail/openstack-announce/2021-October/002056.html Thanks to all who were a part of the Xena development cycle! This marks the official opening of the releases repo for Yoga, and freezes are now lifted. Xena is now a fully normal stable branch, and the normal stable policy now applies. Thanks! Herv? Beraud and the Release Management team -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Wed Oct 6 15:01:21 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 6 Oct 2021 17:01:21 +0200 Subject: OpenStack Xena is officially released! In-Reply-To: References: Message-ID: Congratulations for this new release! And thank you all. On Wed, Oct 6, 2021 at 4:40 PM Herve Beraud wrote: > The official OpenStack Xena release announcement has been sent out: > > > http://lists.openstack.org/pipermail/openstack-announce/2021-October/002056.html > > Thanks to all who were a part of the Xena development cycle! > > This marks the official opening of the releases repo for Yoga, and freezes > are now lifted. Xena is now a fully normal stable branch, and the normal > stable policy now applies. > > Thanks! > > Herv? Beraud and the Release Management team > > -- > Herv? Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Wed Oct 6 15:07:46 2021 From: amy at demarco.com (Amy Marrich) Date: Wed, 6 Oct 2021 10:07:46 -0500 Subject: OpenStack Xena is officially released! In-Reply-To: References: Message-ID: Great work everyone! Amy (spotz) On Wed, Oct 6, 2021 at 9:36 AM Herve Beraud wrote: > The official OpenStack Xena release announcement has been sent out: > > > http://lists.openstack.org/pipermail/openstack-announce/2021-October/002056.html > > Thanks to all who were a part of the Xena development cycle! > > This marks the official opening of the releases repo for Yoga, and freezes > are now lifted. Xena is now a fully normal stable branch, and the normal > stable policy now applies. > > Thanks! > > Herv? Beraud and the Release Management team > > -- > Herv? Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Wed Oct 6 15:26:02 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 6 Oct 2021 17:26:02 +0200 Subject: [masakari] Yoga PTG Message-ID: Hello all, This is a reminder that Masakari Yoga PTG is to happen Tuesday October 19, 2021 06:00 - 08:00 (UTC). Please add your name and discussion topic proposals to the etherpad. [1]. Thank you in advance and see you soon! [1] https://etherpad.opendev.org/p/masakari-yoga-ptg -yoctozepto From jing.c.zhang at nokia.com Wed Oct 6 12:29:42 2021 From: jing.c.zhang at nokia.com (Zhang, Jing C. (Nokia - CA/Ottawa)) Date: Wed, 6 Oct 2021 12:29:42 +0000 Subject: [Octavia] Can not create LB on SRIOV network Message-ID: I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV...). I left a comment under this story, I re-post my questions there, hoping someone knows the answer. Thank you so much Jing https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV Interface Config Guide (Openstack) Hi, In Openstack train release, creating Octavia LB on SRIOV network fails. I come here to search if there is already a plan to add this support, and see this story. This story gives the impression that the capability is already supported, it is a matter of adding user guide. So, my question is, in which Openstack release, creating LB on SRIOV network is supported? Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Wed Oct 6 17:20:15 2021 From: arnaud.morin at gmail.com (Arnaud) Date: Wed, 06 Oct 2021 19:20:15 +0200 Subject: [install] Install on OVH dedicated servers In-Reply-To: <08932F82-3726-49BA-BC52-B0478C90CE03@gmail.com> References: <6597E673-0547-4D57-9CCD-4BAF632F7246@gmail.com> <08932F82-3726-49BA-BC52-B0478C90CE03@gmail.com> Message-ID: <688F546C-5172-474F-9C07-DBD232FA053F@gmail.com> More is always better ;) 1g might not be enough for storage, but again it depends on the workload and how you storage will be used. And for compute, 1g might also not be enough. On the other hand, it should be enough for a starting lab growing slowly. Cheers, Arnaud Le 6 octobre 2021 12:27:04 GMT+02:00, "P. P." a ?crit?: >Hello Arnaud, > >thanks for your reply. > >Yes OVH has pretty large dedicated servers too. > >My main concern is about networking Vrack speed to choose, they offer 1Gbps on middle class Advance servers to 25Gbps on High end Scale servers. > >And for sure storage nodes will need higher bandwidth than control nodes. > >Any suggestion on minimal bandwidth requirement per type of node? > >Thank you. >P. > >> Il giorno 5 ott 2021, alle ore 23:06, Arnaud ha scritto: >> >> Hello, >> >> That's a hard question. The answer mostly depends on what hardware you will use, how many instances and computes you plan to have, etc. >> >> But, there is no reason that prevent you to successfully run an OpenStack infrastructure on OVH servers. >> >> As you said, the OVH public cloud offer is based on OpenStack and it works. >> And even if the hardware used for this offer is different from the one you will find in public catalog, there is no major difference in how they manage the servers (a server is a server ;)). >> >> Regards, >> Arnaud (from ovh / public cloud team) >> >> Le 5 octobre 2021 16:52:16 GMT+02:00, "P. P." a ?crit : >> Hello all, >> >> I know that OVH uses Openstack to offer their public cloud services. 
>> >> I would like to know if someone was able to use their dedicated servers to build a private cloud based on Openstack. >> >> Do you think OVH dedicated server hardware + Vrack can provide sufficient requirements for a production environment? >> >> Thank you. >> P. >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tburke at nvidia.com Wed Oct 6 19:06:08 2021 From: tburke at nvidia.com (Timothy Burke) Date: Wed, 6 Oct 2021 19:06:08 +0000 Subject: [Swift][Interop] PTG time In-Reply-To: References: <20210902092144.1a44225f@suzdal.zaitcev.lan> Message-ID: Sorry for the delay in getting back to you -- yeah, Monday, 16:00 UTC should be fine. What all would you like to discuss? Is there any prep it'd be nice for me to do ahead of the meeting? Tim ________________________________ From: Arkady Kanevsky Sent: Tuesday, October 5, 2021 2:15 PM To: Pete Zaitcev Cc: openstack-discuss Subject: Re: [Swift][Interop] PTG time External email: Use caution opening links or attachments Swift team, can we confirm the date and time for a joint meeting between Swift and Interop WG? Will Monday 16:00 or 16:30 UTC work for you? Thanks, Arkady On Thu, Sep 2, 2021 at 10:13 AM Arkady Kanevsky > wrote: Thanks Pete. On Thu, Sep 2, 2021 at 9:21 AM Pete Zaitcev > wrote: On Fri, 6 Aug 2021 11:32:22 -0500 Arkady Kanevsky > wrote: > Interop team would like time on Yoga PTG Monday or Tu between 21-22 UTC > to discuss Interop guideline coverage for Swift. I suspect this fell through the cracks, it's not on Swift PTG Etherpad. I'll poke our PTL. The slots you're proposing aren't in conflict with existing PTG schedule, so this should work. -- Pete -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From akanevsk at redhat.com Wed Oct 6 20:07:02 2021 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Wed, 6 Oct 2021 15:07:02 -0500 Subject: [Swift][Interop] PTG time In-Reply-To: References: <20210902092144.1a44225f@suzdal.zaitcev.lan> Message-ID: Tim, I want to refresh the Swift team about Interop and what it covers for swift. Then discuss what we are proposing for next guideline for swift and you for feedback on it, and discuss any changes in Tempest coverage and what additional functionality & tests happen in the Xena cycle. Finally, if any of the APis introduced in previous cycles that are not covered by interop guidelines are ready for promotion for the interoperability coverage. I will share a short slide deck a week before the meeting. Thanks, Arkady On Wed, Oct 6, 2021 at 2:06 PM Timothy Burke wrote: > Sorry for the delay in getting back to you -- yeah, Monday, 16:00 UTC > should be fine. What all would you like to discuss? Is there any prep it'd > be nice for me to do ahead of the meeting? > > Tim > ------------------------------ > *From:* Arkady Kanevsky > *Sent:* Tuesday, October 5, 2021 2:15 PM > *To:* Pete Zaitcev > *Cc:* openstack-discuss > *Subject:* Re: [Swift][Interop] PTG time > > *External email: Use caution opening links or attachments* > Swift team, > can we confirm the date and time for a joint meeting between Swift and > Interop WG? > Will Monday 16:00 or 16:30 UTC work for you? > Thanks, > Arkady > > On Thu, Sep 2, 2021 at 10:13 AM Arkady Kanevsky > wrote: > > Thanks Pete. 
> > On Thu, Sep 2, 2021 at 9:21 AM Pete Zaitcev wrote: > > On Fri, 6 Aug 2021 11:32:22 -0500 > Arkady Kanevsky wrote: > > > Interop team would like time on Yoga PTG Monday or Tu between 21-22 UTC > > to discuss Interop guideline coverage for Swift. > > I suspect this fell through the cracks, it's not on Swift PTG Etherpad. > I'll poke our PTL. The slots you're proposing aren't in conflict with > existing PTG schedule, so this should work. > > -- Pete > > > > -- > Arkady Kanevsky, Ph.D. > Phone: 972 707-6456 > Corporate Phone: 919 729-5744 ext. 8176456 > > > > -- > Arkady Kanevsky, Ph.D. > Phone: 972 707-6456 > Corporate Phone: 919 729-5744 ext. 8176456 > -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Wed Oct 6 20:47:40 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 6 Oct 2021 13:47:40 -0700 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Hi Jing, To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. I have not tried this and would be interested to hear if it works for you. If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. Michael [1] https://wiki.openstack.org/wiki/Octavia/Roadmap [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html [3] https://docs.openstack.org/octavia/latest/admin/flavors.html [4] https://etherpad.opendev.org/p/yoga-ptg-octavia On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. > > > > Thank you so much > > > > Jing > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV Interface Config Guide (Openstack) > > > > Hi, > In Openstack train release, creating Octavia LB on SRIOV network fails. > I come here to search if there is already a plan to add this support, and see this story. > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? 
> Thank you > > > > > > > > From gmann at ghanshyammann.com Thu Oct 7 00:25:52 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 06 Oct 2021 19:25:52 -0500 Subject: [all][tc] Technical Committee next weekly meeting on Oct 7th at 1500 UTC In-Reply-To: <17c4d7a0f6d.1124813d4556887.8685917442078267933@ghanshyammann.com> References: <17c4d7a0f6d.1124813d4556887.8685917442078267933@ghanshyammann.com> Message-ID: <17c58246528.cff0e3a1628401.1133517936975134547@ghanshyammann.com> Hello Everyone, Below is the agenda for Tomorrow's TC meeting schedule at 1500 UTC. yoctozepto will chair tomorrow meeting. This is will be video call on google meet, details are there in below link: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Project Health checks framework ** https://etherpad.opendev.org/p/health_check ** https://review.opendev.org/c/openstack/governance/+/810037 * Stable team process change ** https://review.opendev.org/c/openstack/governance/+/810721 * Xena Tracker ** https://etherpad.opendev.org/p/tc-xena-tracker * Technical Writing (doc) SIG need a chair and more maintainers ** Current Chair (only maintainer in this SIG) Stephen Finucane will not continue it in the next cycle(Yoga) ** http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025161.html * Place to maintain the external hosted ELK, E-R, O-H services ** https://etherpad.opendev.org/p/elk-service-maintenance-plan * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 04 Oct 2021 17:43:37 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for Oct 7th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, Oct 6th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From sorrison at gmail.com Thu Oct 7 02:28:55 2021 From: sorrison at gmail.com (Sam Morrison) Date: Thu, 7 Oct 2021 13:28:55 +1100 Subject: [kolla] parent tags Message-ID: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> I?m trying to be able to build a projects container without having to rebuild the parents which have different tags. The workflow I?m trying to achieve is: Build base and openstack-base with a tag of wallaby Build a container image for barbican with a tag of the version of barbican that is returned when doing `git describe` Build a container image for nova with a tag of the version of barbican that is returned when doing `git describe` etc.etc. I don?t seem to be able to do this without having to also build a new base and openstack-base with the same tag which is slow and also means a lot of disk space. Just wondering how other people do this sort of stuff? Any ideas? Thanks, Sam From bkslash at poczta.onet.pl Thu Oct 7 06:58:51 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Thu, 7 Oct 2021 08:58:51 +0200 Subject: Neutron VPNaaS - how to change driver from OpenSwan to StrongSwan? [kolla-ansible][neutron] Message-ID: <1BFF5034-4A5D-4AE6-B444-08A26F594F46@poczta.onet.pl> Hi everyone, because I still have problem with growing memory consumption when vpnaas extension is enabled (https://bugs.launchpad.net/neutron/+bug/1940071), I?m trying to test another solutions. 
And now - while openstack uses strongswan (and this driver is described in documentation) as vpnaas driver, kolla-ansible (v11, victoria) uses openswan? So is there any way to force kolla-ansible to use strongswan? Best regards Adam Tomas From mark at stackhpc.com Thu Oct 7 08:41:55 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 7 Oct 2021 09:41:55 +0100 Subject: [kolla] parent tags In-Reply-To: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> References: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> Message-ID: Hi Sam, I don't generally do that, and Kolla isn't really set up to make it easy. You could tag the base containers with the new tag: docker pull -base:wallaby docker tag -base:wallaby -base: Mark On Thu, 7 Oct 2021 at 03:34, Sam Morrison wrote: > > I'm trying to be able to build a projects container without having to rebuild the parents which have different tags. > > The workflow I'm trying to achieve is: > > Build base and openstack-base with a tag of wallaby > > Build a container image for barbican with a tag of the version of barbican that is returned when doing `git describe` > Build a container image for nova with a tag of the version of barbican that is returned when doing `git describe` > etc.etc. > > I don't seem to be able to do this without having to also build a new base and openstack-base with the same tag which is slow and also means a lot of disk space. > > Just wondering how other people do this sort of stuff? > Any ideas? > > Thanks, > Sam > > > From amy at demarco.com Thu Oct 7 13:53:32 2021 From: amy at demarco.com (Amy Marrich) Date: Thu, 7 Oct 2021 08:53:32 -0500 Subject: RDO vSocial during the PTG Message-ID: Hi Everyone, I'm pleased to announce that RDO will be sponsoring a virtual social during the PTG, Thursday at 17:00 during the break. Last PTG's Trivia Social was a great success, but to do something different this time around we will be doing a virtual Escape Room. The room is a mixture of text and images so should be bandwidth friendly and we'll use Meetpad for the breakout rooms. We'll be doing an Intermediate level room and the team that finishes first will receive prizes! Because I need to purchase passes, you will need to register in advance at: https://eventyay.com/e/e7299da7 There is a team signup page you'll receive after registering where you can add your name to a team of 5-8 people. While we can definitely have more people participating in the teams than passes, the intent is to allow everyone to actively participate. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonanderson at uchicago.edu Thu Oct 7 15:47:24 2021 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Thu, 7 Oct 2021 15:47:24 +0000 Subject: [kolla] parent tags In-Reply-To: References: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> Message-ID: <19227A29-3F33-4EF3-B68B-AC6ABF87FB2B@uchicago.edu> Sam, I think Mark's idea is in general stronger than what I will describe, if all you're after is different aliases. It sounds like you are trying to iterate on two images (Barbican and Nova), presumably changing the source of the former frequently, and don't want to build the entire ancestor chain each time. I had to do something similar because we have a fork of Horizon we work on a lot.
Here is my hacky solution: https://github.com/ChameleonCloud/kolla/commit/79611111c03cc86be91a86a9ccd296abc7aa3a3e We are on Train w/ some other Kolla forks so I can?t guarantee that will apply cleanly, but it?s a small change. It involves adding build-args to some Dockerfiles, in your case I suppose barbican-base, but also nova-base. It?s a bit clunky but gets the job done for us. /Jason On Oct 7, 2021, at 3:41 AM, Mark Goddard > wrote: Hi Sam, I don't generally do that, and Kolla isn't really set up to make it easy. You could tag the base containers with the new tag: docker pull -base:wallaby docker tag -base:wallaby -base: Mark On Thu, 7 Oct 2021 at 03:34, Sam Morrison > wrote: I?m trying to be able to build a projects container without having to rebuild the parents which have different tags. The workflow I?m trying to achieve is: Build base and openstack-base with a tag of wallaby Build a container image for barbican with a tag of the version of barbican that is returned when doing `git describe` Build a container image for nova with a tag of the version of barbican that is returned when doing `git describe` etc.etc. I don?t seem to be able to do this without having to also build a new base and openstack-base with the same tag which is slow and also means a lot of disk space. Just wondering how other people do this sort of stuff? Any ideas? Thanks, Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu Oct 7 15:52:40 2021 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 7 Oct 2021 17:52:40 +0200 Subject: [neutron] Drivers meeting agenda - 08.10.2021 Message-ID: Hi Neutrinos, The agenda for tomorrow's drivers meeting is at [1]. We have 1 RFE to discuss: * https://bugs.launchpad.net/neutron/+bug/1946251 API: allow to disable anti-spoofing but not SGs [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda See you at the meeting tomorrow. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp.methot at planethoster.info Thu Oct 7 21:37:37 2021 From: jp.methot at planethoster.info (J-P Methot) Date: Thu, 7 Oct 2021 17:37:37 -0400 Subject: [neutron] East-West networking issue on DVR after failed attempt at starting a new instance Message-ID: <96819905-f32e-546a-83f3-33c390631907@planethoster.info> Hi, We use Openstack Wallaby installed through Kolla-ansible on this setup. Here's a quick rundown of the issue we just noticed: -We try popping an instance which fails because of a storage issue. -Nova tries to create the instance on 3 different nodes before failing. -We notice that instances on these 3 nodes and only those instances cannot connect to each other anymore. -Doing Tcpdump tests, we realize that pings are received by each instance, but never replied to. -Restarting the neutron-openvswitch-agent container fixes this issue. I suspect l2population might have something to do with this. Is the ARP table rebuilt when the openvswitch-agent is restarted? -- Jean-Philippe M?thot Senior Openstack system administrator Administrateur syst?me Openstack s?nior PlanetHoster inc. From jing.c.zhang at nokia.com Thu Oct 7 22:18:15 2021 From: jing.c.zhang at nokia.com (Zhang, Jing C. (Nokia - CA/Ottawa)) Date: Thu, 7 Oct 2021 22:18:15 +0000 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Hi Michael, Thank you so much for the information. 
I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: https://docs.openstack.org/nova/train/admin/pci-passthrough.html https://docs.openstack.org/nova/latest/admin/pci-passthrough.html ========================= Here is the detail: Env: NIC is intel 82599, creating VM with SRIOV direct port works well. Nova.conf passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} Sriov_agent.ini [sriov_nic] physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } (2) Used the extra-spec in nova flavor openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF The sriov agent log does not show port event for any of them. -----Original Message----- From: Michael Johnson Sent: Wednesday, October 6, 2021 4:48 PM To: Zhang, Jing C. (Nokia - CA/Ottawa) Cc: openstack-discuss at lists.openstack.org Subject: Re: [Octavia] Can not create LB on SRIOV network Hi Jing, To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. I have not tried this and would be interested to hear if it works for you. If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. Michael [1] https://wiki.openstack.org/wiki/Octavia/Roadmap [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html [3] https://docs.openstack.org/octavia/latest/admin/flavors.html [4] https://etherpad.opendev.org/p/yoga-ptg-octavia On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). 
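As a cross-check on the alias and whitelist attempts above: in Train both settings live in the [pci] section of nova.conf, the alias has to be defined for nova-api and nova-compute, and the nova documentation also requires PciPassthroughFilter to be enabled in the scheduler before flavor-based PCI requests can be placed. A minimal, illustrative combination (values copied from the attempts above, placeholders in angle brackets, not a verified fix):

[pci]
passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed", "physical_network": "physnet5"}
alias = {"vendor_id": "8086", "product_id": "10ed", "device_type": "type-VF", "name": "vf", "numa_policy": "required"}

[filter_scheduler]
enabled_filters = <existing filters>,PciPassthroughFilter

Note that a VF consumed through a flavor alias is attached by nova directly rather than through a neutron port, so the absence of port events in the sriov agent log is expected in that case.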
> > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. > > > > Thank you so much > > > > Jing > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > Interface Config Guide (Openstack) > > > > Hi, > In Openstack train release, creating Octavia LB on SRIOV network fails. > I come here to search if there is already a plan to add this support, and see this story. > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > Thank you > > > > > > > > From jing.c.zhang at nokia.com Fri Oct 8 00:36:08 2021 From: jing.c.zhang at nokia.com (Zhang, Jing C. (Nokia - CA/Ottawa)) Date: Fri, 8 Oct 2021 00:36:08 +0000 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Hi Michael, I made a mistake when creating VM manually, I should use --nic option not --network option. After correcting that, I can create VM with the extra-flavor: $ openstack server create --flavor octavia-flavor --image Centos7 --nic port-id=test-port --security-group demo-secgroup --key-name demo-key test-vm $ nova list --all --fields name,status,host,networks | grep test-vm | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149 A 2nd VF interface is seen inside the VM: [centos at test-vm ~]$ ip a ... 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff This MAC is not seen by neutron though: $ openstack port list | grep 0a:b2:d4:85:a2:e6 [empty] ===================== However when I tried to create LB with the same VM flavor, it failed at the same place as before. Looking at worker.log, it seems the error is similar to use --network option to create the VM manually. But you are the expert. "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52" Here is the full list of command line: $ openstack flavor list | grep octavia-flavor | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True | openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}' openstack loadbalancer flavor create --name of1 --flavorprofile ofp1 --enable openstack loadbalancer create --name lb1 --flavor of1 --vip-port-id test-port --vip-subnet-id ext-subnet1 |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. 
2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker loadbalancer, loadbalancer.vip, amphora, subnet) 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker interface = self._plug_amphora_vip(amphora, subnet) 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker raise base.PlugVIPException(message) 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker -----Original Message----- From: Zhang, Jing C. (Nokia - CA/Ottawa) Sent: Thursday, October 7, 2021 6:18 PM To: Michael Johnson Cc: openstack-discuss at lists.openstack.org Subject: RE: [Octavia] Can not create LB on SRIOV network Hi Michael, Thank you so much for the information. I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: https://docs.openstack.org/nova/train/admin/pci-passthrough.html https://docs.openstack.org/nova/latest/admin/pci-passthrough.html ========================= Here is the detail: Env: NIC is intel 82599, creating VM with SRIOV direct port works well. 
Nova.conf passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} Sriov_agent.ini [sriov_nic] physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } (2) Used the extra-spec in nova flavor openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF The sriov agent log does not show port event for any of them. -----Original Message----- From: Michael Johnson Sent: Wednesday, October 6, 2021 4:48 PM To: Zhang, Jing C. (Nokia - CA/Ottawa) Cc: openstack-discuss at lists.openstack.org Subject: Re: [Octavia] Can not create LB on SRIOV network Hi Jing, To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. I have not tried this and would be interested to hear if it works for you. If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. Michael [1] https://wiki.openstack.org/wiki/Octavia/Roadmap [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html [3] https://docs.openstack.org/octavia/latest/admin/flavors.html [4] https://etherpad.opendev.org/p/yoga-ptg-octavia On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. > > > > Thank you so much > > > > Jing > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > Interface Config Guide (Openstack) > > > > Hi, > In Openstack train release, creating Octavia LB on SRIOV network fails. > I come here to search if there is already a plan to add this support, and see this story. > This story gives the impression that the capability is already supported, it is a matter of adding user guide. 
> So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > Thank you > > > > > > > > From skaplons at redhat.com Fri Oct 8 06:06:11 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 08 Oct 2021 08:06:11 +0200 Subject: [neutron] East-West networking issue on DVR after failed attempt at starting a new instance In-Reply-To: <96819905-f32e-546a-83f3-33c390631907@planethoster.info> References: <96819905-f32e-546a-83f3-33c390631907@planethoster.info> Message-ID: <2649897.lI8ThQJ3AA@p1> Hi, On Thursday, 7 October 2021 23:37:37 CEST J-P Methot wrote: > Hi, > > We use Openstack Wallaby installed through Kolla-ansible on this setup. > Here's a quick rundown of the issue we just noticed: > > -We try popping an instance which fails because of a storage issue. > > -Nova tries to create the instance on 3 different nodes before failing. > > -We notice that instances on these 3 nodes and only those instances > cannot connect to each other anymore. > > -Doing Tcpdump tests, we realize that pings are received by each > instance, but never replied to. > > -Restarting the neutron-openvswitch-agent container fixes this issue. > > I suspect l2population might have something to do with this. Is the ARP > table rebuilt when the openvswitch-agent is restarted? If you are using DVR and l2population, you have arp_responder enabled, so ARP replies for tunnel networks are answered locally in the OVS bridges. When you restart neutron-openvswitch-agent, it regenerates all OpenFlow rules, so yes, if some rules were missing, the restart should add them again. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From syedammad83 at gmail.com Fri Oct 8 07:13:38 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Fri, 8 Oct 2021 12:13:38 +0500 Subject: [heat] xena stack deployment failed Message-ID: Hi, I have upgraded my heat from wallaby to xena. When I am trying to create a magnum cluster, it is giving the below error in the heat engine logs. Currently, in the whole stack, I have upgraded heat and magnum to the latest release. Before upgrading heat from wallaby to xena, the stack deployment was successful.

\n\n\n", "code": "404 Not Found", "title": "Not Found"} _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to compute for http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f used request id req-d07e25b0-375f-48e6-ac4b-d76b41848e6a request /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g -i -X GET http://controller-khi04.rapid.pk:8774/v2.1/ -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [200] Connection: keep-alive Content-Length: 399 Content-Type: application/json Date: Fri, 08 Oct 2021 07:06:14 GMT Openstack-Api-Version: compute 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef X-Openstack-Nova-Api-Version: 2.1 X-Openstack-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: {"version": {"id": "v2.1", "status": "CURRENT", "version": "2.88", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [{"rel": "self", "href": "http://controller-khi04.rapid.pk:8774/v2.1/"}, {"rel": "describedby", "type": "text/html", "href": "http://docs.openstack.org/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1"}]}} _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to compute for http://controller-khi04.rapid.pk:8774/v2.1/ used request id req-9b1f2443-fdee-41dd-9139-57376afd7bef request /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 2021-10-08 12:06:14.129 2064 INFO heat.engine.resource [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] CREATE: ServerGroup "worker_nodes_server_group" Stack "k8s-cluster6-7cnnuz4hrfrz" [5501d6a4-59a6-4f76-b25e-ec43e0822361] 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource Traceback (most recent call last): 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 916, in _action_recorder 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 1028, in _do_action 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield from self.action_handler_task(action, args=handler_args) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 970, in action_handler_task 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource handler_data = handler(*args) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File 
"/usr/lib/python3/dist-packages/heat/engine/resources/openstack/nova/server_group.py", line 98, in handle_create 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource server_group = client.server_groups.create(name=name, 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/novaclient/api_versions.py", line 393, in substitution 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource return methods[-1].func(obj, *args, **kwargs) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource TypeError: create() got an unexpected keyword argument 'policies' 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource 2021-10-08 12:06:14.143 2064 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Stack CREATE FAILED (k8s-cluster6-7cnnuz4hrfrz): Resource CREATE failed: TypeError: resources.worker_nodes_server_group: create() got an unexpected keyword argument 'policies' 2021-10-08 12:06:14.146 2064 DEBUG heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Persisting stack k8s-cluster6-7cnnuz4hrfrz status CREATE FAILED _send_notification_and_add_event /usr/lib/python3/dist-packages/heat/engine/stack.py:1109 2021-10-08 12:06:15.009 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2434 for update 2021-10-08 12:06:15.040 2063 DEBUG heat.engine.worker [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 2021-10-08 12:06:16.016 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2437 for update 2021-10-08 12:06:16.048 2061 DEBUG heat.engine.worker [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 2021-10-08 12:06:17.026 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2443 for update 2021-10-08 12:06:17.066 2062 DEBUG heat.engine.worker [req-61cb3eba-0cf0-47f7-8fdb-8e9375888dc4 - - - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 - Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From ueha.ayumu at fujitsu.com Fri Oct 8 07:59:52 2021 From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com) Date: Fri, 8 Oct 2021 07:59:52 +0000 Subject: [heat] xena stack deployment failed In-Reply-To: References: Message-ID: Hi Ammad It seems to be the same as the cause of the bug report I issued to Heat, but there has been no response from the Heat team. https://storyboard.openstack.org/#!/story/2009164 To: Heat team Could you please confirm this problem? Thanks. Regards, Ueha From: Ammad Syed Sent: Friday, October 8, 2021 4:14 PM To: openstack-discuss Subject: [heat] xena stack deployment failed Hi, I have upgraded my heat from wallaby to xena. When I am trying to create magnum cluster its giving below error in heat engine logs. Currently in whole stack, I have upgraded heat and magnum to latest release. Before upgrading heat from xena to wallaby, the stack deployment was successful. 
2021-10-08 12:06:14.107 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g -i -X GET http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [404] Connection: keep-alive Content-Length: 112 Content-Type: application/json Date: Fri, 08 Oct 2021 07:06:14 GMT X-Compute-Request-Id: req-d07e25b0-375f-48e6-ac4b-d76b41848e6a X-Openstack-Request-Id: req-d07e25b0-375f-48e6-ac4b-d76b41848e6a _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: {"message": "The resource could not be found.

\n\n\n", "code": "404 Not Found", "title": "Not Found"} _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to compute for http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f used request id req-d07e25b0-375f-48e6-ac4b-d76b41848e6a request /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g -i -X GET http://controller-khi04.rapid.pk:8774/v2.1/ -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [200] Connection: keep-alive Content-Length: 399 Content-Type: application/json Date: Fri, 08 Oct 2021 07:06:14 GMT Openstack-Api-Version: compute 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef X-Openstack-Nova-Api-Version: 2.1 X-Openstack-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: {"version": {"id": "v2.1", "status": "CURRENT", "version": "2.88", "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [{"rel": "self", "href": "http://controller-khi04.rapid.pk:8774/v2.1/"}, {"rel": "describedby", "type": "text/html", "href": "http://docs.openstack.org/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1"}]}} _http_log_response /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to compute for http://controller-khi04.rapid.pk:8774/v2.1/ used request id req-9b1f2443-fdee-41dd-9139-57376afd7bef request /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 2021-10-08 12:06:14.129 2064 INFO heat.engine.resource [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] CREATE: ServerGroup "worker_nodes_server_group" Stack "k8s-cluster6-7cnnuz4hrfrz" [5501d6a4-59a6-4f76-b25e-ec43e0822361] 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource Traceback (most recent call last): 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 916, in _action_recorder 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 1028, in _do_action 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield from self.action_handler_task(action, args=handler_args) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 970, in action_handler_task 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource handler_data = handler(*args) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File 
"/usr/lib/python3/dist-packages/heat/engine/resources/openstack/nova/server_group.py", line 98, in handle_create 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource server_group = client.server_groups.create(name=name, 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File "/usr/lib/python3/dist-packages/novaclient/api_versions.py", line 393, in substitution 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource return methods[-1].func(obj, *args, **kwargs) 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource TypeError: create() got an unexpected keyword argument 'policies' 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource 2021-10-08 12:06:14.143 2064 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Stack CREATE FAILED (k8s-cluster6-7cnnuz4hrfrz): Resource CREATE failed: TypeError: resources.worker_nodes_server_group: create() got an unexpected keyword argument 'policies' 2021-10-08 12:06:14.146 2064 DEBUG heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Persisting stack k8s-cluster6-7cnnuz4hrfrz status CREATE FAILED _send_notification_and_add_event /usr/lib/python3/dist-packages/heat/engine/stack.py:1109 2021-10-08 12:06:15.009 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2434 for update 2021-10-08 12:06:15.040 2063 DEBUG heat.engine.worker [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 2021-10-08 12:06:16.016 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2437 for update 2021-10-08 12:06:16.048 2061 DEBUG heat.engine.worker [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 2021-10-08 12:06:17.026 2061 INFO heat.engine.stack [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering resource 2443 for update 2021-10-08 12:06:17.066 2062 DEBUG heat.engine.worker [req-61cb3eba-0cf0-47f7-8fdb-8e9375888dc4 - - - - -] [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 - Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Fri Oct 8 09:01:04 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Fri, 8 Oct 2021 14:31:04 +0530 Subject: [heat] xena stack deployment failed In-Reply-To: References: Message-ID: On Fri, Oct 8, 2021 at 1:31 PM ueha.ayumu at fujitsu.com < ueha.ayumu at fujitsu.com> wrote: > Hi Ammad > > > > It seems to be the same as the cause of the bug report I issued to Heat, > but there has been no response from the Heat team. > > https://storyboard.openstack.org/#!/story/2009164 > > > > To: Heat team > > Could you please confirm this problem? Thanks. > > > Yeah there is a regression. I've proposed a fix[1] now. [1] https://review.opendev.org/c/openstack/heat/+/813124 Regards, > > Ueha > > > > *From:* Ammad Syed > *Sent:* Friday, October 8, 2021 4:14 PM > *To:* openstack-discuss > *Subject:* [heat] xena stack deployment failed > > > > Hi, > > > > I have upgraded my heat from wallaby to xena. When I am trying to create > magnum cluster its giving below error in heat engine logs. 
> > > > Currently in whole stack, I have upgraded heat and magnum to latest > release. Before upgrading heat from xena to wallaby, the stack > deployment was successful. > > > > 2021-10-08 12:06:14.107 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g > -i -X GET > http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f > -H "Accept: application/json" -H "User-Agent: python-novaclient" -H > "X-Auth-Token: > {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" > -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request > /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 > 2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [404] > Connection: keep-alive Content-Length: 112 Content-Type: application/json > Date: Fri, 08 Oct 2021 07:06:14 GMT X-Compute-Request-Id: > req-d07e25b0-375f-48e6-ac4b-d76b41848e6a X-Openstack-Request-Id: > req-d07e25b0-375f-48e6-ac4b-d76b41848e6a _http_log_response > /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 > 2021-10-08 12:06:14.121 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: > {"message": "The resource could not be found.

\n\n\n", "code": > "404 Not Found", "title": "Not Found"} _http_log_response > /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 > 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to > compute for > http://controller-khi04.rapid.pk:8774/v2.1/98687873a146418eaeeb54a01693669f > used request id req-d07e25b0-375f-48e6-ac4b-d76b41848e6a request > /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 > 2021-10-08 12:06:14.122 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] REQ: curl -g > -i -X GET http://controller-khi04.rapid.pk:8774/v2.1/ -H "Accept: > application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: > {SHA256}ef71182888b7ad3b8719ee8d48e37c6cd526e1f25f8395f733770917aead9b7b" > -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request > /usr/lib/python3/dist-packages/keystoneauth1/session.py:519 > 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP: [200] > Connection: keep-alive Content-Length: 399 Content-Type: application/json > Date: Fri, 08 Oct 2021 07:06:14 GMT Openstack-Api-Version: compute 2.1 > Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version > X-Compute-Request-Id: req-9b1f2443-fdee-41dd-9139-57376afd7bef > X-Openstack-Nova-Api-Version: 2.1 X-Openstack-Request-Id: > req-9b1f2443-fdee-41dd-9139-57376afd7bef _http_log_response > /usr/lib/python3/dist-packages/keystoneauth1/session.py:550 > 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] RESP BODY: > {"version": {"id": "v2.1", "status": "CURRENT", "version": "2.88", > "min_version": "2.1", "updated": "2013-07-23T11:33:21Z", "links": [{"rel": > "self", "href": "http://controller-khi04.rapid.pk:8774/v2.1/"}, {"rel": > "describedby", "type": "text/html", "href": "http://docs.openstack.org/"}], > "media-types": [{"base": "application/json", "type": > "application/vnd.openstack.compute+json;version=2.1"}]}} _http_log_response > /usr/lib/python3/dist-packages/keystoneauth1/session.py:582 > 2021-10-08 12:06:14.128 2064 DEBUG novaclient.v2.client > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] GET call to > compute for http://controller-khi04.rapid.pk:8774/v2.1/ used request id > req-9b1f2443-fdee-41dd-9139-57376afd7bef request > /usr/lib/python3/dist-packages/keystoneauth1/session.py:954 > 2021-10-08 12:06:14.129 2064 INFO heat.engine.resource > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] CREATE: > ServerGroup "worker_nodes_server_group" Stack "k8s-cluster6-7cnnuz4hrfrz" > [5501d6a4-59a6-4f76-b25e-ec43e0822361] > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource Traceback (most > recent call last): > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 916, in > _action_recorder > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 1028, in > _do_action > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource yield from > self.action_handler_task(action, args=handler_args) > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/heat/engine/resource.py", line 970, in > action_handler_task > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource 
handler_data = > handler(*args) > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/heat/engine/resources/openstack/nova/server_group.py", > line 98, in handle_create > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource server_group = > client.server_groups.create(name=name, > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource File > "/usr/lib/python3/dist-packages/novaclient/api_versions.py", line 393, in > substitution > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource return > methods[-1].func(obj, *args, **kwargs) > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource TypeError: > create() got an unexpected keyword argument 'policies' > 2021-10-08 12:06:14.129 2064 ERROR heat.engine.resource > 2021-10-08 12:06:14.143 2064 INFO heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Stack CREATE > FAILED (k8s-cluster6-7cnnuz4hrfrz): Resource CREATE failed: TypeError: > resources.worker_nodes_server_group: create() got an unexpected keyword > argument 'policies' > 2021-10-08 12:06:14.146 2064 DEBUG heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Persisting > stack k8s-cluster6-7cnnuz4hrfrz status CREATE FAILED > _send_notification_and_add_event > /usr/lib/python3/dist-packages/heat/engine/stack.py:1109 > 2021-10-08 12:06:15.009 2061 INFO heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering > resource 2434 for update > 2021-10-08 12:06:15.040 2063 DEBUG heat.engine.worker > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] > [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. > check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 > 2021-10-08 12:06:16.016 2061 INFO heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering > resource 2437 for update > 2021-10-08 12:06:16.048 2061 DEBUG heat.engine.worker > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] > [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. > check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 > 2021-10-08 12:06:17.026 2061 INFO heat.engine.stack > [req-31d11ee8-fc47-4399-b424-97adabf31331 admin admin - - -] Triggering > resource 2443 for update > 2021-10-08 12:06:17.066 2062 DEBUG heat.engine.worker > [req-61cb3eba-0cf0-47f7-8fdb-8e9375888dc4 - - - - -] > [88ec8f6f-7e10-4c6a-be97-03de82938eb2] Traversal cancelled; re-trigerring. > check_resource /usr/lib/python3/dist-packages/heat/engine/worker.py:193 > > > > - Ammad > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Oct 8 11:17:10 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 8 Oct 2021 13:17:10 +0200 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images Message-ID: Hello, I've just updated my kolla wallaby with latest images. When I create volume from image on ceph it works. When I create volume from image on nfs netapp ontap, it does not work. 
The following is reported in cinder-volume.log: 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", line 950, in _create_from_image_cache_or_download 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server model_update = self._create_from_image_download( 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", line 766, in _create_from_image_download 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server volume_utils.copy_image_to_volume(self.driver, context, volume, 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", line 1158, in copy_image_to_volume 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise exception.ImageCopyFailure(reason=ex.stderr) 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server cinder.exception.ImageCopyFailure: Failed to copy image to volume: qemu-img: /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: error while converting raw: Failed to lock byte 101 Any help please ? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From midhunlaln66 at gmail.com Fri Oct 8 12:42:23 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Fri, 8 Oct 2021 18:12:23 +0530 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors Message-ID: Hi team, -->Successfully I installed Openstack ansible 23.1.0.dev35. --->I logged in to horizon and created a new network and launched a vm but I am getting an error. Error: Failed to perform requested operation on instance "hope", the instance has an error status: Please try again later [Error: Build of instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the network(s), not rescheduling.]. 
-->Then I checked log | fault | {'code': 500, 'created': '2021-10-08T12:26:44Z', 'message': 'Build of instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the network(s), not rescheduling.', 'details': 'Traceback (most recent call last):\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7235, in _create_guest_with_network\n post_xml_callback=post_xml_callback)\n File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n next(self.gen)\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 479, in wait_for_instance_event\n actual_event = event.wait()\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", line 125, in wait\n result = hub.switch()\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 313, in switch\n return self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7258, in _create_guest_with_network\n raise exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 2219, in _do_build_and_run_instance\n filter_properties, request_spec, accel_uuids)\n File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 2458, in _build_and_run_instance\n reason=msg)\nnova.exception.BuildAbortException: Build of instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the network(s), not rescheduling.\n'} | Please help me with this error. Thanks & Regards Midhunlal N B -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 12:45:54 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 08:45:54 -0400 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: What options are you using for the NFS client on the controllers side? There are some recommended settings that Netapp can provide. On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano wrote: > Hello, > I've just updated my kolla wallaby with latest images. When I create > volume from image on ceph it works. > When I create volume from image on nfs netapp ontap, it does not work. 
> The following is reported in cinder-volume.log: > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", > line 950, in _create_from_image_cache_or_download > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server > model_update = self._create_from_image_download( > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", > line 766, in _create_from_image_download > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server > volume_utils.copy_image_to_volume(self.driver, context, volume, > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File > "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", > line 1158, in copy_image_to_volume > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise > exception.ImageCopyFailure(reason=ex.stderr) > 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server > cinder.exception.ImageCopyFailure: Failed to copy image to volume: > qemu-img: > /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: > error while converting raw: Failed to lock byte 101 > > Any help please ? > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 12:49:42 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 08:49:42 -0400 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: You will need to look at the neutron-server logs + the ovs/libviirt agent logs on the compute. The error returned from the VM creation is not useful most of the time. Was this a vxlan or vlan network? On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb wrote: > Hi team, > -->Successfully I installed Openstack ansible 23.1.0.dev35. > --->I logged in to horizon and created a new network and launched a vm > but I am getting an error. > > Error: Failed to perform requested operation on instance "hope", the > instance has an error status: Please try again later [Error: Build of > instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate > the network(s), not rescheduling.]. 
> > -->Then I checked log > > | fault | {'code': 500, 'created': > '2021-10-08T12:26:44Z', 'message': 'Build of instance > b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the > network(s), not rescheduling.', 'details': 'Traceback (most recent call > last):\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 7235, in _create_guest_with_network\n > post_xml_callback=post_xml_callback)\n File > "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n > next(self.gen)\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 479, in wait_for_instance_event\n actual_event = event.wait()\n > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", > line 125, in wait\n result = hub.switch()\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", > line 313, in switch\n return > self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring > handling of the above exception, another exception occurred:\n\nTraceback > (most recent call last):\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 7258, in _create_guest_with_network\n raise > exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: > Virtual Interface creation failed\n\nDuring handling of the above > exception, another exception occurred:\n\nTraceback (most recent call > last):\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 2219, in _do_build_and_run_instance\n filter_properties, > request_spec, accel_uuids)\n File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 2458, in _build_and_run_instance\n > reason=msg)\nnova.exception.BuildAbortException: Build of instance > b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the > network(s), not rescheduling.\n'} | > > Please help me with this error. > > > Thanks & Regards > Midhunlal N B > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Oct 8 12:51:24 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 8 Oct 2021 14:51:24 +0200 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: Hello Laurent, I am using nfs_mount_options = nfsvers=3,lookupcache=pos I always use the above options. I have this issue only with the last cinder images of wallaby Thanks Ignazio Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < laurentfdumont at gmail.com> ha scritto: > What options are you using for the NFS client on the controllers side? > There are some recommended settings that Netapp can provide. > > On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano > wrote: > >> Hello, >> I've just updated my kolla wallaby with latest images. When I create >> volume from image on ceph it works. >> When I create volume from image on nfs netapp ontap, it does not work. 
>> The following is reported in cinder-volume.log: >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >> line 950, in _create_from_image_cache_or_download >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >> model_update = self._create_from_image_download( >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >> line 766, in _create_from_image_download >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >> volume_utils.copy_image_to_volume(self.driver, context, volume, >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >> line 1158, in copy_image_to_volume >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >> exception.ImageCopyFailure(reason=ex.stderr) >> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >> qemu-img: >> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >> error while converting raw: Failed to lock byte 101 >> >> Any help please ? >> Ignazio >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From midhunlaln66 at gmail.com Fri Oct 8 13:05:03 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Fri, 8 Oct 2021 18:35:03 +0530 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: Hi Laurent, Thank you very much for your reply.we configured our network as per official document .Please take a look at below details. --->Controller node configured with below interfaces bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan ---> Compute node bond1,bond0,br-mgmt,br-vxlan,br-storage I don't have much more experience in openstack,I think here we used vlan network. Thanks & Regards Midhunlal N B +918921245637 On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont wrote: > You will need to look at the neutron-server logs + the ovs/libviirt agent > logs on the compute. The error returned from the VM creation is not useful > most of the time. > > Was this a vxlan or vlan network? > > On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb > wrote: > >> Hi team, >> -->Successfully I installed Openstack ansible 23.1.0.dev35. >> --->I logged in to horizon and created a new network and launched a vm >> but I am getting an error. >> >> Error: Failed to perform requested operation on instance "hope", the >> instance has an error status: Please try again later [Error: Build of >> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >> the network(s), not rescheduling.]. 
>> >> -->Then I checked log >> >> | fault | {'code': 500, 'created': >> '2021-10-08T12:26:44Z', 'message': 'Build of instance >> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >> network(s), not rescheduling.', 'details': 'Traceback (most recent call >> last):\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7235, in _create_guest_with_network\n >> post_xml_callback=post_xml_callback)\n File >> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >> next(self.gen)\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >> line 125, in wait\n result = hub.switch()\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >> line 313, in switch\n return >> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >> handling of the above exception, another exception occurred:\n\nTraceback >> (most recent call last):\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7258, in _create_guest_with_network\n raise >> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >> Virtual Interface creation failed\n\nDuring handling of the above >> exception, another exception occurred:\n\nTraceback (most recent call >> last):\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 2219, in _do_build_and_run_instance\n filter_properties, >> request_spec, accel_uuids)\n File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 2458, in _build_and_run_instance\n >> reason=msg)\nnova.exception.BuildAbortException: Build of instance >> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >> network(s), not rescheduling.\n'} | >> >> Please help me with this error. >> >> >> Thanks & Regards >> Midhunlal N B >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 13:07:06 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 09:07:06 -0400 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: You can try a few options to see if it helps. It might be a question of NFSv3 or V4 or the Netapp driver changes themselves. https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano wrote: > Hello Laurent, > I am using nfs_mount_options = nfsvers=3,lookupcache=pos > I always use the above options. > I have this issue only with the last cinder images of wallaby > Thanks > Ignazio > > Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < > laurentfdumont at gmail.com> ha scritto: > >> What options are you using for the NFS client on the controllers side? 
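On the "Failed to lock byte" error itself: it usually means qemu-img's
image locking cannot be honoured on the NFS mount (with NFSv3 the
byte-range locks go through lockd/NLM), rather than anything
NetApp-specific. A minimal manual check from wherever cinder-volume runs
might look like this; the mount hash is taken from the traceback above
and the container name assumes a kolla-ansible deployment:

    # confirm how the share is actually mounted
    docker exec -it cinder_volume mount | grep 55fcb53b884ca983ed3d6fa8aac57810
    # create a scratch image on the share and let qemu-img try to lock it
    docker exec -it cinder_volume qemu-img create -f raw \
        /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/locktest.img 1M
    docker exec -it cinder_volume qemu-img info \
        /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/locktest.img
    # a failure here with "Failed to lock byte ..." means locking on the
    # mount is broken; also compare qemu-img versions between the old and
    # new cinder-volume images
    docker exec -it cinder_volume qemu-img --version

If the lock test fails, comparing the effective NFS mount options and the
lock daemon status between the working and non-working images is a
reasonable next step.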
>> There are some recommended settings that Netapp can provide. >> >> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano >> wrote: >> >>> Hello, >>> I've just updated my kolla wallaby with latest images. When I create >>> volume from image on ceph it works. >>> When I create volume from image on nfs netapp ontap, it does not work. >>> The following is reported in cinder-volume.log: >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>> line 950, in _create_from_image_cache_or_download >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>> model_update = self._create_from_image_download( >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>> line 766, in _create_from_image_download >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>> line 1158, in copy_image_to_volume >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>> exception.ImageCopyFailure(reason=ex.stderr) >>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>> qemu-img: >>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>> error while converting raw: Failed to lock byte 101 >>> >>> Any help please ? >>> Ignazio >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 13:14:18 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 09:14:18 -0400 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: There are essentially two types of networks, vlan and vxlan, that can be attached to a VM. Ideally, you want to look at the logs on the controllers and the compute node. Openstack-ansible seems to send stuff here https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F . On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb wrote: > Hi Laurent, > Thank you very much for your reply.we configured our network as per > official document .Please take a look at below details. > --->Controller node configured with below interfaces > bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan > > ---> Compute node > bond1,bond0,br-mgmt,br-vxlan,br-storage > > I don't have much more experience in openstack,I think here we used vlan > network. > > Thanks & Regards > Midhunlal N B > +918921245637 > > > On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont > wrote: > >> You will need to look at the neutron-server logs + the ovs/libviirt agent >> logs on the compute. The error returned from the VM creation is not useful >> most of the time. >> >> Was this a vxlan or vlan network? >> >> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >> wrote: >> >>> Hi team, >>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>> --->I logged in to horizon and created a new network and launched a vm >>> but I am getting an error. 
>>> >>> Error: Failed to perform requested operation on instance "hope", the >>> instance has an error status: Please try again later [Error: Build of >>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>> the network(s), not rescheduling.]. >>> >>> -->Then I checked log >>> >>> | fault | {'code': 500, 'created': >>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>> last):\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7235, in _create_guest_with_network\n >>> post_xml_callback=post_xml_callback)\n File >>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>> next(self.gen)\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>> line 125, in wait\n result = hub.switch()\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>> line 313, in switch\n return >>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>> handling of the above exception, another exception occurred:\n\nTraceback >>> (most recent call last):\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7258, in _create_guest_with_network\n raise >>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>> Virtual Interface creation failed\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2219, in _do_build_and_run_instance\n filter_properties, >>> request_spec, accel_uuids)\n File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2458, in _build_and_run_instance\n >>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>> network(s), not rescheduling.\n'} | >>> >>> Please help me with this error. >>> >>> >>> Thanks & Regards >>> Midhunlal N B >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From midhunlaln66 at gmail.com Fri Oct 8 13:53:17 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Fri, 8 Oct 2021 19:23:17 +0530 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: Hi, This is the log i am getting while launching a new vm Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds to destroy the instance on the hypervisor. 
Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds to detach 1 volumes for instance. Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to allocate network(s): nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Traceback (most recent call last): 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7235, in _create_guest_with_network 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] post_xml_callback=post_xml_callback) 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] next(self.gen) 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 479, in wait_for_instance_event 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] actual_event = event.wait() 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", line 125, in wait 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] result = hub.switch() 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 313, in switch 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] return self.greenlet.switch() 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] eventlet.timeout.Timeout: 300 seconds 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] During handling of the above exception, another exception occurred: 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Traceback (most recent call last): 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", line 2397, in _build_and_run_instance 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager 
[instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] accel_info=accel_info) 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 4200, in spawn 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] cleanup_instance_disks=created_disks) 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] File "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7258, in _create_guest_with_network 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] raise exception.VirtualInterfaceCreateException() 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed 2021-10-08 19:11:21.562 7324 ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the network(s), not rescheduling.: nova.exception.BuildAbortException: Build of instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the network(s), not rescheduling. Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds to deallocate network for instance. Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume 07041181-318b-4fae-b71e-02ac7b11bca3 Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to the instance being registered to the remote host None.: nova.exception.BuildAbortException: Build of instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the network(s), not rescheduling. 
Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] Delete attachment failed for attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: 404: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found.: nova.exception.VolumeAttachmentNotFound: Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default default] Deleted allocation for instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 INFO nova.compute.manager [-] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 Thanks & Regards Midhunlal N B On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont wrote: > There are essentially two types of networks, vlan and vxlan, that can be > attached to a VM. Ideally, you want to look at the logs on the controllers > and the compute node. > > Openstack-ansible seems to send stuff here > https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F > . > > On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb > wrote: > >> Hi Laurent, >> Thank you very much for your reply.we configured our network as per >> official document .Please take a look at below details. >> --->Controller node configured with below interfaces >> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >> >> ---> Compute node >> bond1,bond0,br-mgmt,br-vxlan,br-storage >> >> I don't have much more experience in openstack,I think here we used vlan >> network. >> >> Thanks & Regards >> Midhunlal N B >> +918921245637 >> >> >> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont >> wrote: >> >>> You will need to look at the neutron-server logs + the ovs/libviirt >>> agent logs on the compute. The error returned from the VM creation is not >>> useful most of the time. >>> >>> Was this a vxlan or vlan network? >>> >>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>> wrote: >>> >>>> Hi team, >>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>> --->I logged in to horizon and created a new network and launched a vm >>>> but I am getting an error. 
>>>> >>>> Error: Failed to perform requested operation on instance "hope", the >>>> instance has an error status: Please try again later [Error: Build of >>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>> the network(s), not rescheduling.]. >>>> >>>> -->Then I checked log >>>> >>>> | fault | {'code': 500, 'created': >>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>> last):\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>> line 7235, in _create_guest_with_network\n >>>> post_xml_callback=post_xml_callback)\n File >>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>> next(self.gen)\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>> File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>> line 125, in wait\n result = hub.switch()\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>> line 313, in switch\n return >>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>> handling of the above exception, another exception occurred:\n\nTraceback >>>> (most recent call last):\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>> line 7258, in _create_guest_with_network\n raise >>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>> Virtual Interface creation failed\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>> request_spec, accel_uuids)\n File >>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>> line 2458, in _build_and_run_instance\n >>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>> network(s), not rescheduling.\n'} | >>>> >>>> Please help me with this error. >>>> >>>> >>>> Thanks & Regards >>>> Midhunlal N B >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Fri Oct 8 14:07:04 2021 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Fri, 8 Oct 2021 15:07:04 +0100 Subject: [charms] Yoga PTG Message-ID: Hi all, The OpenStack charms PTG sessions are booked as: - Wednesday 20th October - 14.00 - 17.00 UTC in the Icehouse room - Thursday 21st October - 14.00 - 17.00 UTC also in the Icehouse room Please add your name and discussion topic proposals to the etherpad. [1]. The etherpad also has links to the PTG main site, schedule and Charms. Thank you in advance and see you soon! 
Alex (tinwood) [1] https://etherpad.opendev.org/p/charms-yoga-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Oct 8 14:18:25 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 8 Oct 2021 16:18:25 +0200 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: I will try next week and write test results. Thanks Il giorno ven 8 ott 2021 alle ore 15:07 Laurent Dumont < laurentfdumont at gmail.com> ha scritto: > You can try a few options to see if it helps. It might be a question of > NFSv3 or V4 or the Netapp driver changes themselves. > > > https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 > > On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano > wrote: > >> Hello Laurent, >> I am using nfs_mount_options = nfsvers=3,lookupcache=pos >> I always use the above options. >> I have this issue only with the last cinder images of wallaby >> Thanks >> Ignazio >> >> Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < >> laurentfdumont at gmail.com> ha scritto: >> >>> What options are you using for the NFS client on the controllers side? >>> There are some recommended settings that Netapp can provide. >>> >>> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano >>> wrote: >>> >>>> Hello, >>>> I've just updated my kolla wallaby with latest images. When I create >>>> volume from image on ceph it works. >>>> When I create volume from image on nfs netapp ontap, it does not work. >>>> The following is reported in cinder-volume.log: >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>> line 950, in _create_from_image_cache_or_download >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>> model_update = self._create_from_image_download( >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>> line 766, in _create_from_image_download >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>>> line 1158, in copy_image_to_volume >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>>> exception.ImageCopyFailure(reason=ex.stderr) >>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>>> qemu-img: >>>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>>> error while converting raw: Failed to lock byte 101 >>>> >>>> Any help please ? >>>> Ignazio >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 14:56:25 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 10:56:25 -0400 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: These are the nova-compute logs but I think it just catches the error from the neutron component. Any logs from neutron-server, ovs-agent, libvirt-agent? 
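The 300 second eventlet.timeout.Timeout followed by
VirtualInterfaceCreateException in that paste is nova-compute giving up
while waiting for Neutron's network-vif-plugged event (the wait is
controlled by vif_plugging_timeout / vif_plugging_is_fatal in nova.conf,
300 seconds and fatal by default), so the real question is why the Linux
Bridge agent never wired up the tap device. A rough checklist, with
service and log names that may differ per deployment:

    # is the agent on that compute node alive and reporting?
    openstack network agent list --host <compute-hostname>
    # what kind of network is the port on? (provider:* fields need admin)
    openstack network show <network-uuid> -c "provider:network_type" \
        -c "provider:physical_network" -c "provider:segmentation_id"
    # follow the agent while retrying the boot, and search for the port id
    # from the failed attempt (94600cad-... / tap94600cad-ca above)
    journalctl -u neutron-linuxbridge-agent -f
    journalctl -u neutron-linuxbridge-agent | grep 94600cad

If it turns out to be a vlan or flat provider network, a mismatch between
provider:physical_network and the agent's physical_interface_mappings on
the compute node is a common cause of exactly this symptom.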
Can you share the "openstack network show NETWORK_ID_HERE" of the network you are attaching the VM to? On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb wrote: > Hi, > This is the log i am getting while launching a new vm > > > Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 > INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds > to destroy the instance on the hypervisor. > Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 > INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds > to detach 1 volumes for instance. > Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to > allocate network(s): nova.exception.VirtualInterfaceCreateException: > Virtual Interface creation failed > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > Traceback (most recent call last): > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 7235, in _create_guest_with_network > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > post_xml_callback=post_xml_callback) > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > next(self.gen) > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 479, in wait_for_instance_event > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > actual_event = event.wait() > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", > line 125, in wait > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > result = hub.switch() > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", > line 313, in switch > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > return self.greenlet.switch() > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > eventlet.timeout.Timeout: 300 seconds > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > 2021-10-08 19:11:21.562 7324 > ERROR 
nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > During handling of the above exception, another exception occurred: > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > Traceback (most recent call last): > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", > line 2397, in _build_and_run_instance > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > accel_info=accel_info) > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 4200, in spawn > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > cleanup_instance_disks=created_disks) > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > File > "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > line 7258, in _create_guest_with_network > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > raise exception.VirtualInterfaceCreateException() > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > nova.exception.VirtualInterfaceCreateException: Virtual Interface creation > failed > 2021-10-08 19:11:21.562 7324 > ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] > Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 > ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance > 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the > network(s), not rescheduling.: nova.exception.BuildAbortException: Build of > instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate > the network(s), not rescheduling. > Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 > INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] Successfully unplugged vif > VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') > Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 > INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds > to deallocate network for instance. 
> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 > INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume > 07041181-318b-4fae-b71e-02ac7b11bca3 > Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 > ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call > for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to > the instance being registered to the remote host None.: > nova.exception.BuildAbortException: Build of instance > 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the > network(s), not rescheduling. > Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 > ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] Delete attachment failed for attachment > 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be > found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. > (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: > 404: cinderclient.exceptions.NotFound: Volume attachment could not be found > with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP > 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) > Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 > WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due > to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be > found.: nova.exception.VolumeAttachmentNotFound: Volume attachment > 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. > Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 > INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 > 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default > default] Deleted allocation for instance > 364564c2-bfa6-4354-a4da-a18a3fef43c3 > Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 > INFO nova.compute.manager [-] [instance: > 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) > Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 > WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 > > > Thanks & Regards > Midhunlal N B > > > > On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont > wrote: > >> There are essentially two types of networks, vlan and vxlan, that can be >> attached to a VM. Ideally, you want to look at the logs on the controllers >> and the compute node. >> >> Openstack-ansible seems to send stuff here >> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >> . >> >> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >> wrote: >> >>> Hi Laurent, >>> Thank you very much for your reply.we configured our network as per >>> official document .Please take a look at below details. 
>>> --->Controller node configured with below interfaces >>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>> >>> ---> Compute node >>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>> >>> I don't have much more experience in openstack,I think here we used vlan >>> network. >>> >>> Thanks & Regards >>> Midhunlal N B >>> +918921245637 >>> >>> >>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont >>> wrote: >>> >>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>> agent logs on the compute. The error returned from the VM creation is not >>>> useful most of the time. >>>> >>>> Was this a vxlan or vlan network? >>>> >>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>> wrote: >>>> >>>>> Hi team, >>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>> --->I logged in to horizon and created a new network and launched a >>>>> vm but I am getting an error. >>>>> >>>>> Error: Failed to perform requested operation on instance "hope", the >>>>> instance has an error status: Please try again later [Error: Build of >>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>> the network(s), not rescheduling.]. >>>>> >>>>> -->Then I checked log >>>>> >>>>> | fault | {'code': 500, 'created': >>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>> last):\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>> line 7235, in _create_guest_with_network\n >>>>> post_xml_callback=post_xml_callback)\n File >>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>> next(self.gen)\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>> File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>> line 125, in wait\n result = hub.switch()\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>> line 313, in switch\n return >>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>> (most recent call last):\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>> File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>> line 7258, in _create_guest_with_network\n raise >>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>> last):\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>> request_spec, accel_uuids)\n File >>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>> line 2458, in _build_and_run_instance\n 
>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>> network(s), not rescheduling.\n'} | >>>>> >>>>> Please help me with this error. >>>>> >>>>> >>>>> Thanks & Regards >>>>> Midhunlal N B >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Fri Oct 8 15:57:25 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Fri, 8 Oct 2021 17:57:25 +0200 Subject: [ironic] Yoga PTG schedule Message-ID: Hello Ironicers! In our etherpad [1] we have 18 topics for this PTG and we have a total of 11 slots. This is the proposed schedule (we will discuss in our upstream meeting on Monday). *Monday (18 Oct) - Room Juno 15:00 - 17:00 UTC* * Support OpenBMC * Persistent memory Support * Redfish Host Connection Interface * Boot from Volume + UEFI *Tuesday (19 Oct) - Room Juno 14:00 - 17:00 UTC* * Posting to placement ourselves * The rise of compossible hardware, again * Self-configuring Ironic Service * Is there any way we can drive a co-operative use mode of ironic amongst some of the users? *Wednesday (18 Oct) - Room Juno 14:00 - 16:00 UTC* * Prioritize 3rd party CI in a box * Secure RBAC items in Yoga * Bulk operations *Thursday (18 Oct) - Room Kilo 14:00 - 16:00 UTC* * having to go look at logs is an antipattern * pxe-grub * Remove instance (non-BFV, non-ramdisk) networking booting * Direct SDN Integrations *Friday (22 Oct) - Room Kilo 14:00 - 16:00 UTC* * Eliminate manual commands * Certificate Management * Stopping use of wiki.openstack.org In case we don't have enough time we can book more slots if the community is ok and the slots are available. We will also have a section in the etherpad for last-minute topics =) [1] https://etherpad.opendev.org/p/ironic-yoga-ptg -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.andre at redhat.com Fri Oct 8 16:14:55 2021 From: m.andre at redhat.com (=?UTF-8?Q?Martin_Andr=C3=A9?=) Date: Fri, 8 Oct 2021 18:14:55 +0200 Subject: Introducing Hubtty, a Gertty fork for Github code reviews Message-ID: Hi all, First off, apologies if this isn't the right forum, this has nothing to do with OpenStack development. I'm trying to reach out to the many Gertty users hiding here who might want a similar tool for their Github code reviews. I'm happy to announce the first release of Hubtty [1], a fork of Gertty that I adapted to the Github API and workflow. It has the same look and feel but differs in a few things I detailed in the release changelog [2]. This first version focuses on porting Gertty to Github and works reasonably well. Myself and other intrepid developers already use it every day and I personally find it very convenient for managing incoming PR reviews. In the coming versions I'd like to integrate better with the Github features and improve UX. Try it with `pip install hubtty` and let me know what you think of it. Note that Hubtty can't submit reviews to repositories for which the parent organization has enabled third-party application restrictions without explicitly allowing hubtty [3]. I'm working around the issue by using a token generated by the `gh` app. 
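For anyone who wants to copy that workaround, a rough sketch (it assumes
gh's default config location; the hubtty README documents the exact
configuration key the token goes under):

    pip install hubtty
    gh auth login                             # only needed once
    grep oauth_token ~/.config/gh/hosts.yml   # reuse this token in hubtty's config
    hubtty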
Martin [1] https://github.com/hubtty/hubtty [2] https://github.com/hubtty/hubtty/blob/v0.1/CHANGELOG.md [3] https://github.com/hubtty/hubtty/issues/20 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Oct 8 18:00:36 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 08 Oct 2021 13:00:36 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 8th Oct, 21: Reading: 5 min Message-ID: <17c611064bc.128403fc0750677.6030027808869137231@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * TC this week video meeting held on Oct 7th Thursday. * Most of the meeting discussions are summarized below (Completed or in-progress activities section). We forgot to record the meeting, an apology for that. I am summarizing each topic discussion, or you can also check some summary and transcript (these are autogenerated and not so perfect, though) @ - https://meetings.opendev.org/meetings/tc/2021/tc.2021-10-07-15.01.log.html - https://review.opendev.org/c/openstack/governance/+/813112/1/reference/tc-meeting-transcripts/OpenStack+Technical+Committee+Video+meeting+Transcript(2021-10-07).txt * We will have next week's IRC meeting on Oct 14th, Thursday 15:00 UTC, feel free the topic in agenda[1] by Oct 13th. 2. What we completed this week: ========================= * Added the cinder-netapp charm to Openstack charms[2] * Retired puppet-freezer[3] 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ * TC is using the etherpad[4] for Xena cycle working item. We will be checking and updating the status biweekly on the same etherpad. * Current status is: 9 completed, 3 to-be-discussed in PTG, 1 in-progress Open Reviews ----------------- * Four open reviews for ongoing activities[5]. Place to maintain the external hosted ELK, E-R, O-H services ------------------------------------------------------------------------- * We continue the discussion[6] on final checks and updates from Allison. Below is the summary: * Allison updated on the offer of the donated credits (45k/year from AWS) to run the ELK services. * Current ELK stack will be migrated on OpenSearch (an open source fork of Elasticsearch) cluster on AWS managed services. * We discussed and agreed to start/move these ELK service work under TACT SIG with the help of Daniel Pawlik, Ross Tomlinson, and Allison (along with Jeremy, Clark as backup/helping in migration). * Allison will continue work on setting up the accounts. * A huge thanks to Allison for driving and arranging the required resources. Add project health check tool ----------------------------------- * Using stats of review/code should not be the only criteria to decide the project health as it depends on many other factors, including the nature of project. * We agreed to use the generated stats only as an early warning tool and not to publish those anywhere which can be wrongly interpreted as project health or so. * We will continue discussing it in PTG for the next steps on this and what to do on TC liaison things. * Meanwhile, we are reviewing Rico proposal on collecting stats tool[7]. Stable Core team process change --------------------------------------- * Current proposal is under review[8]. Feel free to provide early feedback if you have any. 
Call for 'Technical Writing' SIG Chair/Maintainers ---------------------------------------------------------- * As you might have read in the email from Elod[9], Stephen, who is the current chair for this SIG is not planning to continue to chair. * This SIG has accomplished the work for what it was formed, and now most of the documentation are managed on projects side. TC agreed to move this SIG to complete state and move the repos under TC (tc members will be added to core group in those repos). * Any advisory work on documentation which is what this SIG was doing, will be handle in TC. TC tags analysis ------------------- * Operator feedback is asked on open infra newsletter too, and we will continue the discussion in PTG and will take the final decision based on feedback we receive, if any[10]. Project updates ------------------- * Retiring js-openstack-lib [11] Yoga release community-wide goal ----------------------------------------- * Please add the possible candidates in this etherpad [12]. * Current status: "Secure RBAC" is selected for Yoga cycle[13]. PTG planning ---------------- * We are collecting the PTG topics in etherpad[14], feel free to add any topic you would like to discuss. * We discussed the live stream of one of the TC PTG sessions like we did last time. Once we have more topics in etherpad, then we can select the appropriate one. Test support for TLS default: ---------------------------------- * Rico has started a separate email thread over testing with tls-proxy enabled[15], we encourage projects to participate in that testing and help to enable the tls-proxy in gate testing. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[16]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [17] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [18] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/809011 [3] https://review.opendev.org/c/openstack/governance/+/808679 [4] https://etherpad.opendev.org/p/tc-xena-tracke [5] https://review.opendev.org/q/projects:openstack/governance+status:open [6] https://etherpad.opendev.org/p/elk-service-maintenance-plan [7] https://review.opendev.org/c/openstack/governance/+/810037 [8] https://review.opendev.org/c/openstack/governance/+/810721 [9] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025161.html [10] https://governance.openstack.org/tc/reference/tags/index.html [11] https://review.opendev.org/c/openstack/governance/+/798540 [12] https://review.opendev.org/c/openstack/governance/+/807163 [13] https://etherpad.opendev.org/p/y-series-goals [14] https://etherpad.opendev.org/p/tc-yoga-ptg [15] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html [16] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [17] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [18] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From ignaziocassano at gmail.com Fri Oct 8 18:09:28 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 8 Oct 2021 20:09:28 +0200 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: Hello, I soved my problem. Since I have multiple cinder backends I had to set scheduler_default_filters = DriverFilter in default section of cinder.conf This solved. Ignazio Il giorno ven 8 ott 2021 alle ore 16:59 Laurent Dumont < laurentfdumont at gmail.com> ha scritto: > These are the nova-compute logs but I think it just catches the error from > the neutron component. Any logs from neutron-server, ovs-agent, > libvirt-agent? > > Can you share the "openstack network show NETWORK_ID_HERE" of the network > you are attaching the VM to? > > On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb > wrote: > >> Hi, >> This is the log i am getting while launching a new vm >> >> >> Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds >> to destroy the instance on the hypervisor. >> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds >> to detach 1 volumes for instance. 
>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to >> allocate network(s): nova.exception.VirtualInterfaceCreateException: >> Virtual Interface creation failed >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Traceback (most recent call last): >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7235, in _create_guest_with_network >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> post_xml_callback=post_xml_callback) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> next(self.gen) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 479, in wait_for_instance_event >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> actual_event = event.wait() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >> line 125, in wait >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> result = hub.switch() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >> line 313, in switch >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> return self.greenlet.switch() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> eventlet.timeout.Timeout: 300 seconds >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> During handling of the above exception, another exception occurred: >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Traceback (most recent call last): >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 2397, in _build_and_run_instance >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> accel_info=accel_info) >> 2021-10-08 19:11:21.562 7324 >> ERROR 
nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 4200, in spawn >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> cleanup_instance_disks=created_disks) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7258, in _create_guest_with_network >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> raise exception.VirtualInterfaceCreateException() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> nova.exception.VirtualInterfaceCreateException: Virtual Interface creation >> failed >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 >> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >> network(s), not rescheduling.: nova.exception.BuildAbortException: Build of >> instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate >> the network(s), not rescheduling. >> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 >> INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Successfully unplugged vif >> VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds >> to deallocate network for instance. >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume >> 07041181-318b-4fae-b71e-02ac7b11bca3 >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 >> ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call >> for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to >> the instance being registered to the remote host None.: >> nova.exception.BuildAbortException: Build of instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >> network(s), not rescheduling. 
>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 >> ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Delete attachment failed for attachment >> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be >> found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. >> (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: >> 404: cinderclient.exceptions.NotFound: Volume attachment could not be found >> with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP >> 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 >> WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due >> to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be >> found.: nova.exception.VolumeAttachmentNotFound: Volume attachment >> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. >> Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 >> INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Deleted allocation for instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 >> Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 >> INFO nova.compute.manager [-] [instance: >> 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) >> Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 >> WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 >> >> >> Thanks & Regards >> Midhunlal N B >> >> >> >> On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont >> wrote: >> >>> There are essentially two types of networks, vlan and vxlan, that can be >>> attached to a VM. Ideally, you want to look at the logs on the controllers >>> and the compute node. >>> >>> Openstack-ansible seems to send stuff here >>> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >>> . >>> >>> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >>> wrote: >>> >>>> Hi Laurent, >>>> Thank you very much for your reply.we configured our network as per >>>> official document .Please take a look at below details. >>>> --->Controller node configured with below interfaces >>>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>>> >>>> ---> Compute node >>>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>>> >>>> I don't have much more experience in openstack,I think here we used >>>> vlan network. >>>> >>>> Thanks & Regards >>>> Midhunlal N B >>>> +918921245637 >>>> >>>> >>>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont >>>> wrote: >>>> >>>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>>> agent logs on the compute. The error returned from the VM creation is not >>>>> useful most of the time. >>>>> >>>>> Was this a vxlan or vlan network? >>>>> >>>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>>> wrote: >>>>> >>>>>> Hi team, >>>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>>> --->I logged in to horizon and created a new network and launched a >>>>>> vm but I am getting an error. 
>>>>>> >>>>>> Error: Failed to perform requested operation on instance "hope", the >>>>>> instance has an error status: Please try again later [Error: Build of >>>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>>> the network(s), not rescheduling.]. >>>>>> >>>>>> -->Then I checked log >>>>>> >>>>>> | fault | {'code': 500, 'created': >>>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>>> last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 7235, in _create_guest_with_network\n >>>>>> post_xml_callback=post_xml_callback)\n File >>>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>>> next(self.gen)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>>> File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>>> line 125, in wait\n result = hub.switch()\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>>> line 313, in switch\n return >>>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>> (most recent call last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>>> File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 7258, in _create_guest_with_network\n raise >>>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>>> request_spec, accel_uuids)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2458, in _build_and_run_instance\n >>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>> network(s), not rescheduling.\n'} | >>>>>> >>>>>> Please help me with this error. >>>>>> >>>>>> >>>>>> Thanks & Regards >>>>>> Midhunlal N B >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 21:00:12 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 17:00:12 -0400 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: I think that's on the wrong thread Ignazio :D On Fri, Oct 8, 2021 at 2:09 PM Ignazio Cassano wrote: > Hello, I soved my problem. 
> Since I have multiple cinder backends I had to set > scheduler_default_filters = DriverFilter > in default section of cinder.conf > This solved. > Ignazio > > Il giorno ven 8 ott 2021 alle ore 16:59 Laurent Dumont < > laurentfdumont at gmail.com> ha scritto: > >> These are the nova-compute logs but I think it just catches the error >> from the neutron component. Any logs from neutron-server, ovs-agent, >> libvirt-agent? >> >> Can you share the "openstack network show NETWORK_ID_HERE" of the network >> you are attaching the VM to? >> >> On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb >> wrote: >> >>> Hi, >>> This is the log i am getting while launching a new vm >>> >>> >>> Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds >>> to destroy the instance on the hypervisor. >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds >>> to detach 1 volumes for instance. >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to >>> allocate network(s): nova.exception.VirtualInterfaceCreateException: >>> Virtual Interface creation failed >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Traceback (most recent call last): >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7235, in _create_guest_with_network >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> post_xml_callback=post_xml_callback) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> next(self.gen) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 479, in wait_for_instance_event >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> actual_event = event.wait() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>> line 125, in wait >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> result = hub.switch() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 
364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>> line 313, in switch >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> return self.greenlet.switch() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> eventlet.timeout.Timeout: 300 seconds >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> During handling of the above exception, another exception occurred: >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Traceback (most recent call last): >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2397, in _build_and_run_instance >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> accel_info=accel_info) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 4200, in spawn >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> cleanup_instance_disks=created_disks) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7258, in _create_guest_with_network >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> raise exception.VirtualInterfaceCreateException() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> nova.exception.VirtualInterfaceCreateException: Virtual Interface creation >>> failed >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 >>> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >>> network(s), not rescheduling.: nova.exception.BuildAbortException: Build of >>> instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate >>> the network(s), not rescheduling. 
>>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 >>> INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Successfully unplugged vif >>> VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds >>> to deallocate network for instance. >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume >>> 07041181-318b-4fae-b71e-02ac7b11bca3 >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 >>> ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call >>> for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to >>> the instance being registered to the remote host None.: >>> nova.exception.BuildAbortException: Build of instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >>> network(s), not rescheduling. >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 >>> ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Delete attachment failed for attachment >>> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be >>> found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. >>> (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: >>> 404: cinderclient.exceptions.NotFound: Volume attachment could not be found >>> with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP >>> 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 >>> WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due >>> to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be >>> found.: nova.exception.VolumeAttachmentNotFound: Volume attachment >>> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. 
>>> Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 >>> INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Deleted allocation for instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 >>> Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 >>> INFO nova.compute.manager [-] [instance: >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) >>> Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 >>> WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 >>> >>> >>> Thanks & Regards >>> Midhunlal N B >>> >>> >>> >>> On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont >>> wrote: >>> >>>> There are essentially two types of networks, vlan and vxlan, that can >>>> be attached to a VM. Ideally, you want to look at the logs on the >>>> controllers and the compute node. >>>> >>>> Openstack-ansible seems to send stuff here >>>> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >>>> . >>>> >>>> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >>>> wrote: >>>> >>>>> Hi Laurent, >>>>> Thank you very much for your reply.we configured our network as per >>>>> official document .Please take a look at below details. >>>>> --->Controller node configured with below interfaces >>>>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>>>> >>>>> ---> Compute node >>>>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>>>> >>>>> I don't have much more experience in openstack,I think here we used >>>>> vlan network. >>>>> >>>>> Thanks & Regards >>>>> Midhunlal N B >>>>> +918921245637 >>>>> >>>>> >>>>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont < >>>>> laurentfdumont at gmail.com> wrote: >>>>> >>>>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>>>> agent logs on the compute. The error returned from the VM creation is not >>>>>> useful most of the time. >>>>>> >>>>>> Was this a vxlan or vlan network? >>>>>> >>>>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>>>> wrote: >>>>>> >>>>>>> Hi team, >>>>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>>>> --->I logged in to horizon and created a new network and launched a >>>>>>> vm but I am getting an error. >>>>>>> >>>>>>> Error: Failed to perform requested operation on instance "hope", the >>>>>>> instance has an error status: Please try again later [Error: Build of >>>>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>>>> the network(s), not rescheduling.]. 
>>>>>>> >>>>>>> -->Then I checked log >>>>>>> >>>>>>> | fault | {'code': 500, 'created': >>>>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>>>> last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 7235, in _create_guest_with_network\n >>>>>>> post_xml_callback=post_xml_callback)\n File >>>>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>>>> next(self.gen)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>>>> File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>>>> line 125, in wait\n result = hub.switch()\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>>>> line 313, in switch\n return >>>>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>> (most recent call last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>>>> File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 7258, in _create_guest_with_network\n raise >>>>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>>>> request_spec, accel_uuids)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2458, in _build_and_run_instance\n >>>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>>> network(s), not rescheduling.\n'} | >>>>>>> >>>>>>> Please help me with this error. >>>>>>> >>>>>>> >>>>>>> Thanks & Regards >>>>>>> Midhunlal N B >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Fri Oct 8 21:00:30 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 8 Oct 2021 17:00:30 -0400 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: Pingback from other thread Does it work with a Netapp/Ontap NFS volume now? On Fri, Oct 8, 2021 at 10:18 AM Ignazio Cassano wrote: > I will try next week and write test results. > Thanks > > > Il giorno ven 8 ott 2021 alle ore 15:07 Laurent Dumont < > laurentfdumont at gmail.com> ha scritto: > >> You can try a few options to see if it helps. It might be a question of >> NFSv3 or V4 or the Netapp driver changes themselves. 
>> >> >> https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 >> >> On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano >> wrote: >> >>> Hello Laurent, >>> I am using nfs_mount_options = nfsvers=3,lookupcache=pos >>> I always use the above options. >>> I have this issue only with the last cinder images of wallaby >>> Thanks >>> Ignazio >>> >>> Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < >>> laurentfdumont at gmail.com> ha scritto: >>> >>>> What options are you using for the NFS client on the controllers side? >>>> There are some recommended settings that Netapp can provide. >>>> >>>> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Hello, >>>>> I've just updated my kolla wallaby with latest images. When I create >>>>> volume from image on ceph it works. >>>>> When I create volume from image on nfs netapp ontap, it does not work. >>>>> The following is reported in cinder-volume.log: >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>> line 950, in _create_from_image_cache_or_download >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>> model_update = self._create_from_image_download( >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>> line 766, in _create_from_image_download >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>>>> line 1158, in copy_image_to_volume >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>>>> exception.ImageCopyFailure(reason=ex.stderr) >>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>>>> qemu-img: >>>>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>>>> error while converting raw: Failed to lock byte 101 >>>>> >>>>> Any help please ? >>>>> Ignazio >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraden at verisign.com Sat Oct 9 02:26:15 2021 From: abraden at verisign.com (Braden, Albert) Date: Sat, 9 Oct 2021 02:26:15 +0000 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Message-ID: Hello everyone. It's great to be back working on OpenStack again. I'm at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. Before applying the change, we see the DNS record in the recordset: $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 
| A | 10.220.4.89 | ACTIVE | NONE | $ and we can pull it from the DNS server on the controllers: $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 After applying the change, we don't see it: $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | $ $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra $ We see this in the logs: 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] It appears that Designate is trying to create the new record before the deletion of the old one finishes. Is anyone else seeing this on Train? The same set of actions doesn't cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Oct 9 08:07:28 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 9 Oct 2021 10:07:28 +0200 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: Yes, I removed cinder containers. I deployed cinder again. After the above procedure, netapp volume changed the error: cannot find valid backend. So after the reinstallarion I was not able to create netapp volume at all. I inserted the parameter I mentioned in my previous email, and now I am able to create both empty and from image netapp volumes. Ignazio Il Ven 8 Ott 2021, 23:00 Laurent Dumont ha scritto: > Pingback from other thread > > Does it work with a Netapp/Ontap NFS volume now? > > On Fri, Oct 8, 2021 at 10:18 AM Ignazio Cassano > wrote: > >> I will try next week and write test results. >> Thanks >> >> >> Il giorno ven 8 ott 2021 alle ore 15:07 Laurent Dumont < >> laurentfdumont at gmail.com> ha scritto: >> >>> You can try a few options to see if it helps. It might be a question of >>> NFSv3 or V4 or the Netapp driver changes themselves. 
>>> >>> >>> https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 >>> >>> On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano >>> wrote: >>> >>>> Hello Laurent, >>>> I am using nfs_mount_options = nfsvers=3,lookupcache=pos >>>> I always use the above options. >>>> I have this issue only with the last cinder images of wallaby >>>> Thanks >>>> Ignazio >>>> >>>> Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < >>>> laurentfdumont at gmail.com> ha scritto: >>>> >>>>> What options are you using for the NFS client on the controllers side? >>>>> There are some recommended settings that Netapp can provide. >>>>> >>>>> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano < >>>>> ignaziocassano at gmail.com> wrote: >>>>> >>>>>> Hello, >>>>>> I've just updated my kolla wallaby with latest images. When I create >>>>>> volume from image on ceph it works. >>>>>> When I create volume from image on nfs netapp ontap, it does not work. >>>>>> The following is reported in cinder-volume.log: >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>>> line 950, in _create_from_image_cache_or_download >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>> model_update = self._create_from_image_download( >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>>> line 766, in _create_from_image_download >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>>>>> line 1158, in copy_image_to_volume >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>>>>> exception.ImageCopyFailure(reason=ex.stderr) >>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>>>>> qemu-img: >>>>>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>>>>> error while converting raw: Failed to lock byte 101 >>>>>> >>>>>> Any help please ? >>>>>> Ignazio >>>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Oct 9 08:12:37 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 9 Oct 2021 10:12:37 +0200 Subject: [kolla][wallaby][cinder] netapp driver failed do create volume from images In-Reply-To: References: Message-ID: Sorry, I sent the solution in a wrong thread. I solved inserting default_filter=Driver_Filters in cinder.conf. Probably it solved because I have both netapp and ceph backends. Ignazio Il Sab 9 Ott 2021, 10:07 Ignazio Cassano ha scritto: > Yes, I removed cinder containers. > I deployed cinder again. > After the above procedure, netapp volume changed the error: cannot find > valid backend. > So after the reinstallarion I was not able to create netapp volume at all. > I inserted the parameter I mentioned in my previous email, and now I am > able to create both empty and from image netapp volumes. > Ignazio > > > > Il Ven 8 Ott 2021, 23:00 Laurent Dumont ha > scritto: > >> Pingback from other thread >> >> Does it work with a Netapp/Ontap NFS volume now? 
>> >> On Fri, Oct 8, 2021 at 10:18 AM Ignazio Cassano >> wrote: >> >>> I will try next week and write test results. >>> Thanks >>> >>> >>> Il giorno ven 8 ott 2021 alle ore 15:07 Laurent Dumont < >>> laurentfdumont at gmail.com> ha scritto: >>> >>>> You can try a few options to see if it helps. It might be a question of >>>> NFSv3 or V4 or the Netapp driver changes themselves. >>>> >>>> >>>> https://forum.opennebula.io/t/nfs-v3-datastore-and-failed-to-lock-byte-100/7482 >>>> >>>> On Fri, Oct 8, 2021 at 8:51 AM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Hello Laurent, >>>>> I am using nfs_mount_options = nfsvers=3,lookupcache=pos >>>>> I always use the above options. >>>>> I have this issue only with the last cinder images of wallaby >>>>> Thanks >>>>> Ignazio >>>>> >>>>> Il giorno ven 8 ott 2021 alle ore 14:46 Laurent Dumont < >>>>> laurentfdumont at gmail.com> ha scritto: >>>>> >>>>>> What options are you using for the NFS client on the controllers >>>>>> side? There are some recommended settings that Netapp can provide. >>>>>> >>>>>> On Fri, Oct 8, 2021 at 7:20 AM Ignazio Cassano < >>>>>> ignaziocassano at gmail.com> wrote: >>>>>> >>>>>>> Hello, >>>>>>> I've just updated my kolla wallaby with latest images. When I create >>>>>>> volume from image on ceph it works. >>>>>>> When I create volume from image on nfs netapp ontap, it does not >>>>>>> work. >>>>>>> The following is reported in cinder-volume.log: >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>>>> line 950, in _create_from_image_cache_or_download >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>>> model_update = self._create_from_image_download( >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/flows/manager/create_volume.py", >>>>>>> line 766, in _create_from_image_download >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>>> volume_utils.copy_image_to_volume(self.driver, context, volume, >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server File >>>>>>> "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/volume_utils.py", >>>>>>> line 1158, in copy_image_to_volume >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server raise >>>>>>> exception.ImageCopyFailure(reason=ex.stderr) >>>>>>> 2021-10-08 13:11:20.590 26 ERROR oslo_messaging.rpc.server >>>>>>> cinder.exception.ImageCopyFailure: Failed to copy image to volume: >>>>>>> qemu-img: >>>>>>> /var/lib/cinder/mnt/55fcb53b884ca983ed3d6fa8aac57810/volume-68fd6214-c24a-4104-ae98-e9d0b60aa136: >>>>>>> error while converting raw: Failed to lock byte 101 >>>>>>> >>>>>>> Any help please ? >>>>>>> Ignazio >>>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Oct 9 09:45:15 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 9 Oct 2021 11:45:15 +0200 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: I am sorry. Ignazio Il Ven 8 Ott 2021, 20:09 Ignazio Cassano ha scritto: > Hello, I soved my problem. > Since I have multiple cinder backends I had to set > scheduler_default_filters = DriverFilter > in default section of cinder.conf > This solved. 
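As a rough sketch, that change sits in the [DEFAULT] section of cinder.conf
alongside the multi-backend settings. The backend names below are placeholders,
not taken from this deployment, and since the stock filter list is
AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter, appending DriverFilter
to that list rather than replacing it is also an option. cinder-scheduler needs
a restart for the change to take effect.

  [DEFAULT]
  enabled_backends = netapp-nfs,ceph-rbd
  scheduler_default_filters = DriverFilter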
> Ignazio > > Il giorno ven 8 ott 2021 alle ore 16:59 Laurent Dumont < > laurentfdumont at gmail.com> ha scritto: > >> These are the nova-compute logs but I think it just catches the error >> from the neutron component. Any logs from neutron-server, ovs-agent, >> libvirt-agent? >> >> Can you share the "openstack network show NETWORK_ID_HERE" of the network >> you are attaching the VM to? >> >> On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb >> wrote: >> >>> Hi, >>> This is the log i am getting while launching a new vm >>> >>> >>> Oct 08 19:11:20 ubuntu nova-compute[7324]: 2021-10-08 19:11:20.479 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.13 seconds >>> to destroy the instance on the hypervisor. >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.272 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 0.79 seconds >>> to detach 1 volumes for instance. >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to >>> allocate network(s): nova.exception.VirtualInterfaceCreateException: >>> Virtual Interface creation failed >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Traceback (most recent call last): >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7235, in _create_guest_with_network >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> post_xml_callback=post_xml_callback) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> next(self.gen) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 479, in wait_for_instance_event >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> actual_event = event.wait() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>> line 125, in wait >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> result = hub.switch() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>> line 313, in switch >>> 2021-10-08 
19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> return self.greenlet.switch() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> eventlet.timeout.Timeout: 300 seconds >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> During handling of the above exception, another exception occurred: >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Traceback (most recent call last): >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>> line 2397, in _build_and_run_instance >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> accel_info=accel_info) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 4200, in spawn >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> cleanup_instance_disks=created_disks) >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> File >>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>> line 7258, in _create_guest_with_network >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> raise exception.VirtualInterfaceCreateException() >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> nova.exception.VirtualInterfaceCreateException: Virtual Interface creation >>> failed >>> 2021-10-08 19:11:21.562 7324 >>> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 >>> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >>> network(s), not rescheduling.: nova.exception.BuildAbortException: Build of >>> instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate >>> the network(s), not rescheduling. 
>>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 >>> INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Successfully unplugged vif >>> VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds >>> to deallocate network for instance. >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 >>> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume >>> 07041181-318b-4fae-b71e-02ac7b11bca3 >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 >>> ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call >>> for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to >>> the instance being registered to the remote host None.: >>> nova.exception.BuildAbortException: Build of instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >>> network(s), not rescheduling. >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 >>> ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Delete attachment failed for attachment >>> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be >>> found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. >>> (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: >>> 404: cinderclient.exceptions.NotFound: Volume attachment could not be found >>> with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP >>> 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) >>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 >>> WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due >>> to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be >>> found.: nova.exception.VolumeAttachmentNotFound: Volume attachment >>> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. 
>>> Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 >>> INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >>> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >>> default] Deleted allocation for instance >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3 >>> Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 >>> INFO nova.compute.manager [-] [instance: >>> 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) >>> Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 >>> WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 >>> >>> >>> Thanks & Regards >>> Midhunlal N B >>> >>> >>> >>> On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont >>> wrote: >>> >>>> There are essentially two types of networks, vlan and vxlan, that can >>>> be attached to a VM. Ideally, you want to look at the logs on the >>>> controllers and the compute node. >>>> >>>> Openstack-ansible seems to send stuff here >>>> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >>>> . >>>> >>>> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >>>> wrote: >>>> >>>>> Hi Laurent, >>>>> Thank you very much for your reply.we configured our network as per >>>>> official document .Please take a look at below details. >>>>> --->Controller node configured with below interfaces >>>>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>>>> >>>>> ---> Compute node >>>>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>>>> >>>>> I don't have much more experience in openstack,I think here we used >>>>> vlan network. >>>>> >>>>> Thanks & Regards >>>>> Midhunlal N B >>>>> +918921245637 >>>>> >>>>> >>>>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont < >>>>> laurentfdumont at gmail.com> wrote: >>>>> >>>>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>>>> agent logs on the compute. The error returned from the VM creation is not >>>>>> useful most of the time. >>>>>> >>>>>> Was this a vxlan or vlan network? >>>>>> >>>>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>>>> wrote: >>>>>> >>>>>>> Hi team, >>>>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>>>> --->I logged in to horizon and created a new network and launched a >>>>>>> vm but I am getting an error. >>>>>>> >>>>>>> Error: Failed to perform requested operation on instance "hope", the >>>>>>> instance has an error status: Please try again later [Error: Build of >>>>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>>>> the network(s), not rescheduling.]. 
>>>>>>> >>>>>>> -->Then I checked log >>>>>>> >>>>>>> | fault | {'code': 500, 'created': >>>>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>>>> last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 7235, in _create_guest_with_network\n >>>>>>> post_xml_callback=post_xml_callback)\n File >>>>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>>>> next(self.gen)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>>>> File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>>>> line 125, in wait\n result = hub.switch()\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>>>> line 313, in switch\n return >>>>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>> (most recent call last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>>>> File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>>> line 7258, in _create_guest_with_network\n raise >>>>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>>>> request_spec, accel_uuids)\n File >>>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>>> line 2458, in _build_and_run_instance\n >>>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>>> network(s), not rescheduling.\n'} | >>>>>>> >>>>>>> Please help me with this error. >>>>>>> >>>>>>> >>>>>>> Thanks & Regards >>>>>>> Midhunlal N B >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
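A minimal triage sketch for this kind of "Failed to allocate the network(s)" /
VirtualInterfaceCreateException timeout, assuming admin credentials and the
linux_bridge agent shown in the logs above (service names vary by distro and
deployment tool, and the nova.conf values are a diagnostic aid, not a fix):

  $ openstack network agent list            # every agent should report Alive / UP
  $ openstack network show <network-id>     # check provider type, physical network, segmentation id
  # on the compute node:
  $ journalctl -u neutron-linuxbridge-agent -u libvirtd --since "1 hour ago"
  # nova.conf, [DEFAULT] section, to widen the window while debugging:
  vif_plugging_timeout = 600
  vif_plugging_is_fatal = False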
URL: From anyrude10 at gmail.com Fri Oct 8 11:18:01 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Fri, 8 Oct 2021 16:48:01 +0530 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command Message-ID: Hi Team, I am installing Tripleo using the below link https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html In the Introspect section, When I executed the command openstack tripleo validator run --group pre-introspection I got the following error: +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu | PASSED | localhost | localhost | | 0:00:01.261 | | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space | PASSED | localhost | localhost | | 0:00:04.480 | | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram | PASSED | localhost | localhost | | 0:00:02.173 | | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode | PASSED | localhost | localhost | | 0:00:01.546 | | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway | FAILED | undercloud | No host matched | | | | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space | FAILED | undercloud | No host matched | | | | 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check | FAILED | undercloud | No host matched | | | | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range | FAILED | undercloud | No host matched | | | | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection | FAILED | undercloud | No host matched | | | | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush | FAILED | undercloud | No host matched | | | +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ Then I created the following inventory file: [Undercloud] undercloud Passed this command while running the pre-introspection command. It then executed successfully. 
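For reference, end to end that looks roughly like this; the
ansible_connection=local host variable and the --inventory flag spelling are
assumptions, worth checking against "openstack tripleo validator run --help"
on your release.

  $ cat inventory.ini
  [Undercloud]
  undercloud ansible_connection=local

  $ openstack tripleo validator run --group pre-introspection --inventory inventory.ini

The pre-deployment group below was run the same way.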
But with Pre-deployment, it is still failing even after passing the inventory +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e | PASSED | localhost | localhost | | 0:00:00.504 | | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns | PASSED | localhost | localhost | | 0:00:00.481 | | 93611c13-49a2-4cae-ad87-099546459481 | service-status | PASSED | all | undercloud | | 0:00:06.942 | | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux | PASSED | all | undercloud | | 0:00:02.433 | | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version | FAILED | all | undercloud | | 0:00:03.576 | | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed | PASSED | undercloud | undercloud | | 0:00:02.850 | | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed | FAILED | allovercloud | No host matched | | | | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment | FAILED | undercloud | undercloud | | 0:00:31.559 | | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug | FAILED | undercloud | undercloud | | 0:00:02.057 | | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud | | 0:00:00.884 | | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted | FAILED | undercloud | undercloud | | 0:00:02.138 | | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count | PASSED | undercloud | undercloud | | 0:00:06.164 | | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count | FAILED | undercloud | undercloud | | 0:00:00.934 | | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning | FAILED | undercloud | undercloud | | 0:00:02.456 | | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration | FAILED | undercloud | undercloud | | 0:00:00.882 | | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment | FAILED | undercloud | undercloud | | 0:00:00.880 | | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks | FAILED | undercloud | undercloud | | 0:00:01.934 | | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans | FAILED | undercloud | undercloud | | 0:00:01.931 | | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding | PASSED | all | undercloud | | 0:00:00.366 | +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ Also this step of passing the inventory file is not mentioned anywhere in the document. Is there anything I am missing? Regards Anirudh Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmeng at uvic.ca Fri Oct 8 17:36:27 2021 From: dmeng at uvic.ca (dmeng) Date: Fri, 08 Oct 2021 10:36:27 -0700 Subject: [sdk]: Remove volumes stuck in error deleting status Message-ID: Hello there, Hope everything is going well. I would like to know if there is any method that could remove the volume stuck in the "error-deleting" status? We are using the chameleon openstack cloud but we are not the admin user there, so couldn't use the "cinder force-delete" or "cinder reset-state" command. 
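For reference, those are admin-only operations; on the admin side they would look roughly like:

cinder reset-state --state available <volume-id>
cinder delete <volume-id>

or simply cinder force-delete <volume-id>, so someone with the admin role on that cloud would need to run them.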
Wondering if there is any other method we could use to remove those volumes in our own project? And also wondering what might cause this "error-deleting" problem? We use openstacksdk block storage service, "cinder.delete_volume()" method to remove volumes, and it works fine before. Thanks and have a great day! Catherine -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuscoyu at gmail.com Sat Oct 9 12:19:17 2021 From: fuscoyu at gmail.com (fusco lu) Date: Sat, 9 Oct 2021 20:19:17 +0800 Subject: =?UTF-8?Q?Why_is_kolla=2Dkubernetes_not_maintained_anymore=EF=BC=9F?= Message-ID: hi,everyone Can you tell me the reason for not being maintained? -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sat Oct 9 21:17:56 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 9 Oct 2021 17:17:56 -0400 Subject: [sdk]: Remove volumes stuck in error deleting status In-Reply-To: References: Message-ID: Is this the NSF Openstack cloud? Usually, if a delete fails, you'll need to get an Openstack admin to have a look. It's not a good sign most of the time. On Sat, Oct 9, 2021 at 5:12 PM dmeng wrote: > Hello there, > > Hope everything is going well. > > > > I would like to know if there is any method that could remove the volume > stuck in the "error-deleting" status? We are using the chameleon openstack > cloud but we are not the admin user there, so couldn't use the "cinder > force-delete" or "cinder reset-state" command. Wondering if there is any > other method we could use to remove those volumes in our own project? And > also wondering what might cause this "error-deleting" problem? We use > openstacksdk block storage service, "cinder.delete_volume()" method to > remove volumes, and it works fine before. > > Thanks and have a great day! > Catherine > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manchandavishal143 at gmail.com Mon Oct 11 03:11:22 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Mon, 11 Oct 2021 08:41:22 +0530 Subject: [Horizon] Yoga PTG Schedule Message-ID: Hello everyone, I have booked the below slots for Horizon Yoga PTG: Monday, October 18, 15:00 - 17:00 UTC Tuesday, October 19, 15:00 - 17:00 UTC Wednesday, October 20, 16:00 - 17:00 UTC I have also created Etherpad to collect topics for ptg discussion [1]. Feel free to add your topics. Please Let me know in case you have any topics to discuss and the above schedule doesn't work for you and We will see how to manage that. Also Please register for PTG if you haven't done it yet [2]. Thank you, Vishal Manchanda(irc: vishalmanchanda) [1] https://etherpad.opendev.org/p/horizon-yoga-ptg [2] https://www.eventbrite.com/e/project-teams-gathering-october-2021-tickets-161235669227 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Mon Oct 11 04:39:39 2021 From: sorrison at gmail.com (Sam Morrison) Date: Mon, 11 Oct 2021 15:39:39 +1100 Subject: [kolla] parent tags In-Reply-To: <19227A29-3F33-4EF3-B68B-AC6ABF87FB2B@uchicago.edu> References: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> <19227A29-3F33-4EF3-B68B-AC6ABF87FB2B@uchicago.edu> Message-ID: Thank Jason and Mark, I think just adding another tag at the end of the build process is what we are going to do. On a related note doe anyone have any tips on how to version a horizon container because it has multiple repos inside. Eg. 
We have the source for horizon and then source for each plugin which have different versions. With Debian they are all separate debs and installed differently with separate version and makes tracking things really easy. In the container world it makes it a bit harder. I?m thinking we need to have our kolla-build.conf specify specific git refs and then when we update this file incorporate that somehow into the versioning. Sam > On 8 Oct 2021, at 2:47 am, Jason Anderson wrote: > > Sam, I think Mark?s idea is in general stronger than what I will describe, if all you?re after is different aliases. It sounds like you are trying to iterate on two images (Barbican and Nova), presumably changing the source of the former frequently, and don?t want to build the entire ancestor chain each time. > > I had to do something similar because we have a fork of Horizon we work on a lot. Here is my hacky solution: https://github.com/ChameleonCloud/kolla/commit/79611111c03cc86be91a86a9ccd296abc7aa3a3e > > We are on Train w/ some other Kolla forks so I can?t guarantee that will apply cleanly, but it?s a small change. It involves adding build-args to some Dockerfiles, in your case I suppose barbican-base, but also nova-base. It?s a bit clunky but gets the job done for us. > > /Jason > >> On Oct 7, 2021, at 3:41 AM, Mark Goddard > wrote: >> >> Hi Sam, >> >> I don't generally do that, and Kolla isn't really set up to make it >> easy. You could tag the base containers with the new tag: >> >> docker pull -base:wallaby >> docker tag -base:wallaby -base: >> >> Mark >> >> On Thu, 7 Oct 2021 at 03:34, Sam Morrison > wrote: >>> >>> I?m trying to be able to build a projects container without having to rebuild the parents which have different tags. >>> >>> The workflow I?m trying to achieve is: >>> >>> Build base and openstack-base with a tag of wallaby >>> >>> Build a container image for barbican with a tag of the version of barbican that is returned when doing `git describe` >>> Build a container image for nova with a tag of the version of barbican that is returned when doing `git describe` >>> etc.etc. >>> >>> I don?t seem to be able to do this without having to also build a new base and openstack-base with the same tag which is slow and also means a lot of disk space. >>> >>> Just wondering how other people do this sort of stuff? >>> Any ideas? >>> >>> Thanks, >>> Sam >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Oct 11 08:13:13 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 11 Oct 2021 09:13:13 +0100 Subject: [kolla] parent tags In-Reply-To: References: <85729552-4A35-4769-A0C3-DDF6286B8071@gmail.com> <19227A29-3F33-4EF3-B68B-AC6ABF87FB2B@uchicago.edu> Message-ID: On Mon, 11 Oct 2021 at 05:39, Sam Morrison wrote: > > Thank Jason and Mark, > > I think just adding another tag at the end of the build process is what we are going to do. > > On a related note doe anyone have any tips on how to version a horizon container because it has multiple repos inside. > > Eg. We have the source for horizon and then source for each plugin which have different versions. > With Debian they are all separate debs and installed differently with separate version and makes tracking things really easy. > > In the container world it makes it a bit harder. > I?m thinking we need to have our kolla-build.conf specify specific git refs and then when we update this file incorporate that somehow into the versioning. 
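Pinning the sources in kolla-build.conf would look roughly like this (an illustrative sketch; the plugin section name below is an assumption, the exact names come from kolla's source configuration):

[horizon]
type = git
location = https://opendev.org/openstack/horizon
reference = stable/wallaby

[horizon-plugin-designate-dashboard]
type = git
location = https://opendev.org/openstack/designate-dashboard
reference = stable/wallaby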
This is probably one reason why kolla doesn't do it this way - there isn't always a single versioned thing that's being deployed. Every service has dependencies. In this case I'd suggest going with the version of horizon. > > Sam > > > > > On 8 Oct 2021, at 2:47 am, Jason Anderson wrote: > > Sam, I think Mark?s idea is in general stronger than what I will describe, if all you?re after is different aliases. It sounds like you are trying to iterate on two images (Barbican and Nova), presumably changing the source of the former frequently, and don?t want to build the entire ancestor chain each time. > > I had to do something similar because we have a fork of Horizon we work on a lot. Here is my hacky solution: https://github.com/ChameleonCloud/kolla/commit/79611111c03cc86be91a86a9ccd296abc7aa3a3e > > We are on Train w/ some other Kolla forks so I can?t guarantee that will apply cleanly, but it?s a small change. It involves adding build-args to some Dockerfiles, in your case I suppose barbican-base, but also nova-base. It?s a bit clunky but gets the job done for us. > > /Jason > > On Oct 7, 2021, at 3:41 AM, Mark Goddard wrote: > > Hi Sam, > > I don't generally do that, and Kolla isn't really set up to make it > easy. You could tag the base containers with the new tag: > > docker pull -base:wallaby > docker tag -base:wallaby -base: > > Mark > > On Thu, 7 Oct 2021 at 03:34, Sam Morrison wrote: > > > I?m trying to be able to build a projects container without having to rebuild the parents which have different tags. > > The workflow I?m trying to achieve is: > > Build base and openstack-base with a tag of wallaby > > Build a container image for barbican with a tag of the version of barbican that is returned when doing `git describe` > Build a container image for nova with a tag of the version of barbican that is returned when doing `git describe` > etc.etc. > > I don?t seem to be able to do this without having to also build a new base and openstack-base with the same tag which is slow and also means a lot of disk space. > > Just wondering how other people do this sort of stuff? > Any ideas? > > Thanks, > Sam > > > > > > From tjoen at dds.nl Mon Oct 11 08:49:38 2021 From: tjoen at dds.nl (tjoen) Date: Mon, 11 Oct 2021 10:49:38 +0200 Subject: [Xena] It works! Message-ID: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> Just testing every release since Train on an LFS system with Python-3.9 cryptography-35.0.0 is necessary Thank you all From ralonsoh at redhat.com Mon Oct 11 13:30:44 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 11 Oct 2021 15:30:44 +0200 Subject: [neutron] Bug deputy, report of week 2021-10-4 Message-ID: Hello Neutrinos: This is the last week report: High: - https://bugs.launchpad.net/neutron/+bug/1946187: HA routers not going to be "primary" at all. Unassigned. - https://bugs.launchpad.net/neutron/+bug/1931696. ovs offload broken from neutron 16.3.0 onwards. Assigned. - https://review.opendev.org/c/openstack/neutron/+/812641 - https://bugs.launchpad.net/neutron/+bug/1946318: [ovn] Memory consumption grows over time due to MAC_Binding entries in SB database. Assigned. - https://review.opendev.org/c/openstack/neutron/+/812805 - https://bugs.launchpad.net/neutron/+bug/1946456: [OVN] Scheduling of HA Chassis Group for external port does not work when no chassis has 'enable-chassis-as-gw' option set. Unassigned. 
- https://bugs.launchpad.net/neutron/+bug/1946588: [OVN]Metadata get warn logs after boot instance server about "MetadataServiceReadyWaitTimeoutException". Assigned. - https://review.opendev.org/c/openstack/neutron/+/813376 Medium: - https://bugs.launchpad.net/neutron/+bug/1946186: Fullstack test neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_router_fip_qos_after_admin_state_down_up failing intermittently. Unassigned. - https://bugs.launchpad.net/neutron/+bug/1946479: [OVN migration] qr- interfaces and trunk subports aren't cleaned after migration to ML2/OVN. Assigned. - https://review.opendev.org/c/openstack/neutron/+/813186 - https://review.opendev.org/c/openstack/neutron/+/813187 - https://bugs.launchpad.net/neutron/+bug/1946589: [OVN] localport might not be updated when create multiple subnets for its network. Unassigned. Low: - https://bugs.launchpad.net/neutron/+bug/1945954: [os-ken] Missing subclass for SUBTYPE_RIB_*_MULTICAST in mrtlib. Assigned. - https://review.opendev.org/c/openstack/os-ken/+/812293 - https://bugs.launchpad.net/neutron/+bug/1946023: [OVN] Check OVN Port_Group compatibility. Assigned. - https://review.opendev.org/c/openstack/neutron/+/812176 - https://bugs.launchpad.net/neutron/+bug/1946250: Neutron API reference should explain the intended behavior of port security extension. Unassigned. Whishlist: - https://bugs.launchpad.net/neutron/+bug/1946251: [RFE] API: allow to disable anti-spoofing but not SGs. Assigned. Duplicated: - https://bugs.launchpad.net/neutron/+bug/1945646: Nova fails to live migrate instance with upper-case port MAC Incomplete: - https://bugs.launchpad.net/neutron/+bug/1946535: Segment plugin disabled delete network will raise exception. - Maybe ?segments? plugin is loaded in this deployment. - https://bugs.launchpad.net/neutron/+bug/1946624: OVSDB Error: Transaction causes multiple rows in "Port_Group" table to have identical values. - Maybe duplicated of https://bugs.launchpad.net/neutron/+bug/1938766. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonykarera at gmail.com Mon Oct 11 13:35:06 2021 From: tonykarera at gmail.com (Karera Tony) Date: Mon, 11 Oct 2021 15:35:06 +0200 Subject: Restarting Openstack Victoria using kolla-ansible Message-ID: Hello Team, I am trying to deploy openstack Victoria ..... Below I install kolla-ansible on the deployment server , I first clone the * git clone --branch stable/victoria https://opendev.org/openstack/kolla-ansible* but when I run the deployment without uncommenting openstack_release .. By default it deploys wallaby And when I uncomment and type victoria ...Some of the containers keep restarting esp Horizon Any idea on how to resolve this ? Even the kolla content that I use for deployment, I get it from the kolla-ansible directory that I cloned Regards Tony Karera -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Mon Oct 11 13:39:10 2021 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 11 Oct 2021 13:39:10 +0000 Subject: [neutron] openflow rules tools Message-ID: Hello, When using native ovs in neutron, we endup with a lot of openflow rules on ovs side. Debugging it with regular ovs-ofctl --color dump-flows is kind of painful. Is there any tool that the community is using to manage that? Thanks in advance! Arnaud. From fungi at yuggoth.org Mon Oct 11 14:20:09 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 Oct 2021 14:20:09 +0000 Subject: [Xena] It works! 
In-Reply-To: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> Message-ID: <20211011142008.heixx43ckfsaoyd4@yuggoth.org> On 2021-10-11 10:49:38 +0200 (+0200), tjoen wrote: > Just testing every release since Train on an LFS system with Python-3.9 > cryptography-35.0.0 is necessary Thanks for testing! Just be aware that Python 3.8 is the most recent interpreter targeted by Xena: https://governance.openstack.org/tc/reference/runtimes/xena.html Discussion is underway at the PTG next week to determine what the tested runtimes should be for Yoga, but testing with 3.9 is being suggested (or maybe even 3.10): https://etherpad.opendev.org/p/tc-yoga-ptg -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aschultz at redhat.com Mon Oct 11 14:25:51 2021 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 11 Oct 2021 08:25:51 -0600 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command In-Reply-To: References: Message-ID: On Sat, Oct 9, 2021 at 3:11 PM Anirudh Gupta wrote: > > Hi Team, > > I am installing Tripleo using the below link > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html > > In the Introspect section, When I executed the command > openstack tripleo validator run --group pre-introspection > > I got the following error: > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu | PASSED | localhost | localhost | | 0:00:01.261 | > | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space | PASSED | localhost | localhost | | 0:00:04.480 | > | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram | PASSED | localhost | localhost | | 0:00:02.173 | > | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode | PASSED | localhost | localhost | | 0:00:01.546 | > | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway | FAILED | undercloud | No host matched | | | > | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space | FAILED | undercloud | No host matched | | | > | 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check | FAILED | undercloud | No host matched | | | > | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range | FAILED | undercloud | No host matched | | | > | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection | FAILED | undercloud | No host matched | | | > | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush | FAILED | undercloud | No host matched | | | > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > > > Then I created the following inventory file: > [Undercloud] > undercloud > > Passed this command while running the pre-introspection command. > It then executed successfully. 
> > > But with Pre-deployment, it is still failing even after passing the inventory > > +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ > | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | > +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ > | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e | PASSED | localhost | localhost | | 0:00:00.504 | > | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns | PASSED | localhost | localhost | | 0:00:00.481 | > | 93611c13-49a2-4cae-ad87-099546459481 | service-status | PASSED | all | undercloud | | 0:00:06.942 | > | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux | PASSED | all | undercloud | | 0:00:02.433 | > | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version | FAILED | all | undercloud | | 0:00:03.576 | > | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed | PASSED | undercloud | undercloud | | 0:00:02.850 | > | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed | FAILED | allovercloud | No host matched | | | > | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment | FAILED | undercloud | undercloud | | 0:00:31.559 | > | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug | FAILED | undercloud | undercloud | | 0:00:02.057 | > | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud | | 0:00:00.884 | > | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted | FAILED | undercloud | undercloud | | 0:00:02.138 | > | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count | PASSED | undercloud | undercloud | | 0:00:06.164 | > | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count | FAILED | undercloud | undercloud | | 0:00:00.934 | > | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning | FAILED | undercloud | undercloud | | 0:00:02.456 | > | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration | FAILED | undercloud | undercloud | | 0:00:00.882 | > | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment | FAILED | undercloud | undercloud | | 0:00:00.880 | > | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks | FAILED | undercloud | undercloud | | 0:00:01.934 | > | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans | FAILED | undercloud | undercloud | | 0:00:01.931 | > | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding | PASSED | all | undercloud | | 0:00:00.366 | > +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ > > Also this step of passing the inventory file is not mentioned anywhere in the document. Is there anything I am missing? > It's likely that the documentation is out of date for the validation calls. I don't believe we test this in CI so it's probably broken. The validation calls are generally optional so you should be ok to proceed with introspection > Regards > Anirudh Gupta > From openstack at nemebean.com Mon Oct 11 15:25:29 2021 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Oct 2021 10:25:29 -0500 Subject: [KEYSTONE][POLICIES] - Overrides that don't work? In-Reply-To: References: Message-ID: I don't believe it's possible to override the scope of a policy rule. 
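A policy file can only replace the check string of a rule, e.g. something along the lines of "identity:add_user_to_group": "role:project-manager and domain_id:%(target.group.domain_id)s" (an illustrative rule, not the exact one from the paste); the scope_types list is defined in the keystone code itself and is not read from the policy file.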
In this case it sounds like the user should request a domain-scoped token to perform this operation. For details on who to do that, see https://docs.openstack.org/keystone/wallaby/admin/tokens-overview.html#authorization-scopes On 10/6/21 7:52 AM, Ga?l THEROND wrote: > Hi team, > > I'm having a weird behavior with my Openstack platform that makes me > think I may have misunderstood some mechanisms on the way policies are > working and especially the overriding. > > So, long story short, I've few services that get custom policies such as > glance that behave as expected, Keystone's one aren't. > > All in all, here is what I'm understanding of the mechanism: > > This is the keystone policy that I'm looking to override: > https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/ > > > This policy default can be found in here: > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > > Here is the policy that I'm testing: > https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/ > > > I know, this policy isn't taking care of the admin role but it's not the > point. > > From my understanding, any user with the project-manager role should be > able to add any available user on any available group as long as the > project-manager domain is the same as the target. > > However, when I'm doing that, keystone complains that I'm not authorized > to do so because the user token scope is 'PROJECT' where it should be > 'SYSTEM' or 'DOMAIN'. > > Now, I wouldn't be surprised of that message being thrown?out with the > default policy as it's stated on the code with the following: > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > > So the question is, if the custom policy doesn't override the default > scope_types how am I supposed to make it work? > > I hope it was clear enough, but if not, feel free to ask me for more > information. > > PS: I've tried to assign this role with a domain scope to my user and > I've still the same issue. > > Thanks a lot everyone! > > From openstack at nemebean.com Mon Oct 11 15:27:18 2021 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Oct 2021 10:27:18 -0500 Subject: [docs] Keystone docs missing for Xena Message-ID: <8e9b9ddd-48c4-c144-cad6-63ec0682c5e8@nemebean.com> Hey, I was just looking for the Keystone docs and discovered that they are not listed on https://docs.openstack.org/xena/projects.html. If I s/wallaby/xena/ on the wallaby version then it resolves, so it looks like the docs are published they just aren't included in the index for some reason. -Ben From gmann at ghanshyammann.com Mon Oct 11 15:34:42 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 11 Oct 2021 10:34:42 -0500 Subject: [all][tc] Technical Committee next weekly meeting on Oct 14th at 1500 UTC Message-ID: <17c6ffde3e7.116bfff55951568.8663051282986901805@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for Oct 14th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, Oct 13th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From laurentfdumont at gmail.com Mon Oct 11 15:52:52 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 11 Oct 2021 11:52:52 -0400 Subject: [neutron] openflow rules tools In-Reply-To: References: Message-ID: Also interested in this. Reading rules in dump-flows is an absolute pain. 
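What helps a little is tracing a single packet through the tables instead of reading the whole dump, for example (the port and MAC values are placeholders):

ovs-appctl ofproto/trace br-int in_port=<ofport>,dl_src=<src-mac>,dl_dst=<dst-mac>
ovs-ofctl dump-flows br-int table=<n> --no-stats

but it is still a lot of output to wade through.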
In an ideal world, I would have never have to. We some stuff on our side that I'll see if I can share. On Mon, Oct 11, 2021 at 9:41 AM Arnaud Morin wrote: > Hello, > > When using native ovs in neutron, we endup with a lot of openflow rules > on ovs side. > > Debugging it with regular ovs-ofctl --color dump-flows is kind of > painful. > > Is there any tool that the community is using to manage that? > > Thanks in advance! > > Arnaud. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Mon Oct 11 16:26:18 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Mon, 11 Oct 2021 18:26:18 +0200 Subject: [ptl][release][stable][EM] Extended Maintenance - Ussuri Message-ID: Hi, As Xena was released last week and we are in a less busy period, now it is a good time to call your attention to the following: In a month Ussuri is planned to transition to Extended Maintenance phase [1] (planned date: 2021-11-12). I have generated the list of the current *open* and *unreleased* changes in stable/ussuri for the follows-policy tagged repositories [2] (where there are such patches). These lists could help the teams who are planning to do a *final* release on Ussuri before moving stable/ussuri branches to Extended Maintenance. Feel free to edit and extend these lists to track your progress! * At the transition date the Release Team will tag the *latest* Ussuri releases of repositories with *ussuri-em* tag. * After the transition stable/ussuri will be still open for bug fixes, but there won't be official releases anymore. *NOTE*: teams, please focus on wrapping up your libraries first if there is any concern about the changes, in order to avoid broken (final!) releases! Thanks, El?d [1] https://releases.openstack.org/ [2] https://etherpad.opendev.org/p/ussuri-final-release-before-em From ashlee at openstack.org Mon Oct 11 16:41:51 2021 From: ashlee at openstack.org (Ashlee Ferguson) Date: Mon, 11 Oct 2021 11:41:51 -0500 Subject: [all][PTG] October 2021 - PTGbot, Etherpads, & IRC Message-ID: <14361607-09BD-43B6-BB8D-7CCC2053F576@openstack.org> Hello! We just wanted to take a second to point out a couple things that have changed since the last PTG as we all get ready for the next PTG. Firstly, the PTGbot is up to date and ready to go *at it's new URL[1]*-- as are the autogenerated etherpads! There you can find the schedule page, etherpads, etc. If you/your team have already created an etherpad, please feel free to use the PTGbot to override the default, auto-generated one[2]. Secondly, just a reminder that with the migration to being more inclusive of all Open Infrastructure Foundation projects we will be using the #openinfra-events IRC channel on the OFTC network! And again, if you haven't yet, please register[3]! Its free and important for getting the zoom information, etc. Thanks! Ashlee (ashferg) and Kendall (diablo_rojo) [1] PTGbot: https://ptg.opendev.org/ [2] PTGbot Etherpad Override Command: https://opendev.org/openstack/ptgbot/src/branch/master/README.rst#etherpad [3] PTG Registration: https://openinfra-ptg.eventbrite.com From tjoen at dds.nl Mon Oct 11 16:49:51 2021 From: tjoen at dds.nl (tjoen) Date: Mon, 11 Oct 2021 18:49:51 +0200 Subject: [Xena] It works! 
In-Reply-To: <20211011142008.heixx43ckfsaoyd4@yuggoth.org> References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> <20211011142008.heixx43ckfsaoyd4@yuggoth.org> Message-ID: <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> On 10/11/21 16:20, Jeremy Stanley wrote: > On 2021-10-11 10:49:38 +0200 (+0200), tjoen wrote: >> Just testing every release since Train on an LFS system with Python-3.9 >> cryptography-35.0.0 is necessary Forgotten to mention that that new cryptography only applies to opensl-3.0.0 > Thanks for testing! Just be aware that Python 3.8 is the most recent > interpreter targeted by Xena: > > https://governance.openstack.org/tc/reference/runtimes/xena.html Thx for that link. I'll consult it at next release > Discussion is underway at the PTG next week to determine what the > tested runtimes should be for Yoga, but testing with 3.9 is being > suggested (or maybe even 3.10): I have put 2022-03-30 in my agenda > https://etherpad.opendev.org/p/tc-yoga-ptg > From ashlee at openstack.org Mon Oct 11 17:02:04 2021 From: ashlee at openstack.org (Ashlee Ferguson) Date: Mon, 11 Oct 2021 12:02:04 -0500 Subject: OpenInfra Live - October 14, 2021 at 9am CT Message-ID: <82053AAC-6A32-488D-A531-16BD80100091@openstack.org> Hi everyone, This week?s OpenInfra Live episode is brought to you by the OpenStack community. Networking is complex, and Neutron is one of the most difficult parts of OpenStack to scale. In this episode of the Large Scale OpenStack show, we will explore early architectural choices you can make, recommended drivers, features to avoid if your ultimate goal is to scale to a very large deployment. Join OpenStack developers and operators as they share their Neutron scaling best practices. Episode: Large Scale OpenStack: Neutron scaling best practices Date and time: October 14, 2021 at 9am CT (1400 UTC) You can watch us live on: YouTube: https://www.youtube.com/watch?v=4ZLqILbLIpQ LinkedIn: https://www.linkedin.com/video/event/urn:li:ugcPost:6851936962715222016/ Facebook: https://www.facebook.com/104139126308032/posts/4407685335953368/ WeChat: recording will be posted on OpenStack WeChat after the live stream Speakers: Thierry Carrez (OpenInfra Foundation) David Comay (Bloomberg) Ibrahim Derraz (Exaion) Slawek Kaplonski (Red Hat) Lajos Katona (Ericsson) Mohammed Naser (VEXXHOST) Michal Nasiadka (StackHPC) Have an idea for a future episode? Share it now at ideas.openinfra.live . Register now for OpenInfra Live: Keynotes, a special edition of OpenInfra Live on November 17-18th starting at 1500 UTC: https://openinfralivekeynotes.eventbrite.com/ Thanks! Ashlee -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Mon Oct 11 17:06:40 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Oct 2021 10:06:40 -0700 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Interesting, thank you for trying that out. We call the nova "interface_attach" and pass in the port_id you provided on the load balancer create command line. In the worker log, above the "tree" log lines, is there another ERROR log line that includes the exception returned from nova? Also, I would be interested to see what nova logged as to why it was unable to attach the port. That may be in the main nova logs, or possibly on the compute host nova logs. Michael On Thu, Oct 7, 2021 at 5:36 PM Zhang, Jing C. 
(Nokia - CA/Ottawa) wrote: > > Hi Michael, > > I made a mistake when creating VM manually, I should use --nic option not --network option. After correcting that, I can create VM with the extra-flavor: > > $ openstack server create --flavor octavia-flavor --image Centos7 --nic port-id=test-port --security-group demo-secgroup --key-name demo-key test-vm > > $ nova list --all --fields name,status,host,networks | grep test-vm > | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149 > > A 2nd VF interface is seen inside the VM: > > [centos at test-vm ~]$ ip a > ... > 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 > link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff > > This MAC is not seen by neutron though: > > $ openstack port list | grep 0a:b2:d4:85:a2:e6 > > [empty] > > ===================== > However when I tried to create LB with the same VM flavor, it failed at the same place as before. > > Looking at worker.log, it seems the error is similar to use --network option to create the VM manually. But you are the expert. > > "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52" > > Here is the full list of command line: > > $ openstack flavor list | grep octavia-flavor > | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True | > > openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}' > openstack loadbalancer flavor create --name of1 --flavorprofile ofp1 --enable > openstack loadbalancer create --name lb1 --flavor of1 --vip-port-id test-port --vip-subnet-id ext-subnet1 > > > |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. 
> 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker loadbalancer, loadbalancer.vip, amphora, subnet) > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker interface = self._plug_amphora_vip(amphora, subnet) > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker raise base.PlugVIPException(message) > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker > > > -----Original Message----- > From: Zhang, Jing C. (Nokia - CA/Ottawa) > Sent: Thursday, October 7, 2021 6:18 PM > To: Michael Johnson > Cc: openstack-discuss at lists.openstack.org > Subject: RE: [Octavia] Can not create LB on SRIOV network > > Hi Michael, > > Thank you so much for the information. > > I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. > > However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: > https://docs.openstack.org/nova/train/admin/pci-passthrough.html > https://docs.openstack.org/nova/latest/admin/pci-passthrough.html > > ========================= > Here is the detail: > > Env: NIC is intel 82599, creating VM with SRIOV direct port works well. 
> > Nova.conf > > passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} > passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} > > Sriov_agent.ini > > [sriov_nic] > physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 > > (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: > > alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } > > (2) Used the extra-spec in nova flavor > > openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" > > (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed > > > (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed > > passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF > > The sriov agent log does not show port event for any of them. > > > > > -----Original Message----- > From: Michael Johnson > Sent: Wednesday, October 6, 2021 4:48 PM > To: Zhang, Jing C. (Nokia - CA/Ottawa) > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Octavia] Can not create LB on SRIOV network > > Hi Jing, > > To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. > > It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. > > You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. > This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. > > I have not tried this and would be interested to hear if it works for you. > > If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. > > Michael > > [1] https://wiki.openstack.org/wiki/Octavia/Roadmap > [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html > [3] https://docs.openstack.org/octavia/latest/admin/flavors.html > [4] https://etherpad.opendev.org/p/yoga-ptg-octavia > > On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. > > > > > > > > Thank you so much > > > > > > > > Jing > > > > > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > > Interface Config Guide (Openstack) > > > > > > > > Hi, > > In Openstack train release, creating Octavia LB on SRIOV network fails. > > I come here to search if there is already a plan to add this support, and see this story. 
> > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > > Thank you > > > > > > > > > > > > > > > > From johnsomor at gmail.com Mon Oct 11 17:14:52 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Oct 2021 10:14:52 -0700 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: References: Message-ID: Hi Albert, Have you configured your distributed lock manager for Designate? [coordination] backend_url = Michael On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > Before applying the change, we see the DNS record in the recordset: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > > > and we can pull it from the DNS server on the controllers: > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > After applying the change, we don?t see it: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > $ > > > > We see this in the logs: > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' 
for key 'unique_recordset'") > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From gael.therond at bitswalk.com Mon Oct 11 17:18:53 2021 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Mon, 11 Oct 2021 19:18:53 +0200 Subject: [KEYSTONE][POLICIES] - Overrides that don't work? In-Reply-To: References: Message-ID: Hi ben! Thanks a lot for the answer! Ok I?ll get a look at that, but if I correctly understand a user with a role of project-admin attached to him as a scoped to domain he should be able to add users to a group once the policy update right? Once again thanks a lot for your answer! Le lun. 11 oct. 2021 ? 17:25, Ben Nemec a ?crit : > I don't believe it's possible to override the scope of a policy rule. In > this case it sounds like the user should request a domain-scoped token > to perform this operation. For details on who to do that, see > > https://docs.openstack.org/keystone/wallaby/admin/tokens-overview.html#authorization-scopes > > On 10/6/21 7:52 AM, Ga?l THEROND wrote: > > Hi team, > > > > I'm having a weird behavior with my Openstack platform that makes me > > think I may have misunderstood some mechanisms on the way policies are > > working and especially the overriding. > > > > So, long story short, I've few services that get custom policies such as > > glance that behave as expected, Keystone's one aren't. > > > > All in all, here is what I'm understanding of the mechanism: > > > > This is the keystone policy that I'm looking to override: > > https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/ > > > > > > This policy default can be found in here: > > > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > > > > > Here is the policy that I'm testing: > > https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/ > > > > > > I know, this policy isn't taking care of the admin role but it's not the > > point. > > > > From my understanding, any user with the project-manager role should be > > able to add any available user on any available group as long as the > > project-manager domain is the same as the target. > > > > However, when I'm doing that, keystone complains that I'm not authorized > > to do so because the user token scope is 'PROJECT' where it should be > > 'SYSTEM' or 'DOMAIN'. 
> > > > Now, I wouldn't be surprised of that message being thrown out with the > > default policy as it's stated on the code with the following: > > > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > > > > > So the question is, if the custom policy doesn't override the default > > scope_types how am I supposed to make it work? > > > > I hope it was clear enough, but if not, feel free to ask me for more > > information. > > > > PS: I've tried to assign this role with a domain scope to my user and > > I've still the same issue. > > > > Thanks a lot everyone! > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Oct 11 17:21:53 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 11 Oct 2021 18:21:53 +0100 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: On Mon, Oct 11, 2021 at 6:12 PM Michael Johnson wrote: > > Interesting, thank you for trying that out. > > We call the nova "interface_attach" and pass in the port_id you > provided on the load balancer create command line. > > In the worker log, above the "tree" log lines, is there another ERROR > log line that includes the exception returned from nova? until very recently nova did not support interface attach for sriov interfaces. https://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/sriov-interface-attach-detach.html today we do allow it but we do not guarentee it will work. if there are not enoch pci slots in the vm or there are not enough VF on the host that are attached to the correct phsynet the attach will fail. the most comon reason the attach fails is either numa affintiy cannot be acived or there is an issue in the guest/qemu the guest kernel need to repond to the hotplug event when qemu tries to add the device if it does not it will fail. keeping all of tha tin mind for sriov attach to work octavia will have to create the port with vnic_type=driect or one of the other valid options like macvtap or direct phsyical. you cannot attach sriov device that can be used with octavia using flavor extra specs. > > Also, I would be interested to see what nova logged as to why it was > unable to attach the port. That may be in the main nova logs, or > possibly on the compute host nova logs. > > Michael > > On Thu, Oct 7, 2021 at 5:36 PM Zhang, Jing C. (Nokia - CA/Ottawa) > wrote: > > > > Hi Michael, > > > > I made a mistake when creating VM manually, I should use --nic option not --network option. After correcting that, I can create VM with the extra-flavor: > > > > $ openstack server create --flavor octavia-flavor --image Centos7 --nic port-id=test-port --security-group demo-secgroup --key-name demo-key test-vm > > > > $ nova list --all --fields name,status,host,networks | grep test-vm > > | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149 > > > > A 2nd VF interface is seen inside the VM: > > > > [centos at test-vm ~]$ ip a > > ... 
> > 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 > > link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff > > > > This MAC is not seen by neutron though: > > > > $ openstack port list | grep 0a:b2:d4:85:a2:e6 > > > > [empty] > > > > ===================== > > However when I tried to create LB with the same VM flavor, it failed at the same place as before. > > > > Looking at worker.log, it seems the error is similar to use --network option to create the VM manually. But you are the expert. > > > > "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52" > > > > Here is the full list of command line: > > > > $ openstack flavor list | grep octavia-flavor > > | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True | > > > > openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}' > > openstack loadbalancer flavor create --name of1 --flavorprofile ofp1 --enable > > openstack loadbalancer create --name lb1 --flavor of1 --vip-port-id test-port --vip-subnet-id ext-subnet1 > > > > > > |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker loadbalancer, loadbalancer.vip, amphora, subnet) > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker interface = self._plug_amphora_vip(amphora, subnet) > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker raise base.PlugVIPException(message) > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker > > > > > > -----Original Message----- > > From: Zhang, Jing C. (Nokia - CA/Ottawa) > > Sent: Thursday, October 7, 2021 6:18 PM > > To: Michael Johnson > > Cc: openstack-discuss at lists.openstack.org > > Subject: RE: [Octavia] Can not create LB on SRIOV network > > > > Hi Michael, > > > > Thank you so much for the information. 
> > > > I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. > > > > However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: > > https://docs.openstack.org/nova/train/admin/pci-passthrough.html > > https://docs.openstack.org/nova/latest/admin/pci-passthrough.html > > > > ========================= > > Here is the detail: > > > > Env: NIC is intel 82599, creating VM with SRIOV direct port works well. > > > > Nova.conf > > > > passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} > > passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} > > > > Sriov_agent.ini > > > > [sriov_nic] > > physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 > > > > (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: > > > > alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } > > > > (2) Used the extra-spec in nova flavor > > > > openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" > > > > (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed > > > > > > (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed > > > > passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF > > > > The sriov agent log does not show port event for any of them. > > > > > > > > > > -----Original Message----- > > From: Michael Johnson > > Sent: Wednesday, October 6, 2021 4:48 PM > > To: Zhang, Jing C. (Nokia - CA/Ottawa) > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: [Octavia] Can not create LB on SRIOV network > > > > Hi Jing, > > > > To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. > > > > It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. > > > > You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. > > This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. > > > > I have not tried this and would be interested to hear if it works for you. > > > > If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. > > > > Michael > > > > [1] https://wiki.openstack.org/wiki/Octavia/Roadmap > > [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html > > [3] https://docs.openstack.org/octavia/latest/admin/flavors.html > > [4] https://etherpad.opendev.org/p/yoga-ptg-octavia > > > > On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. 
(Nokia - CA/Ottawa) wrote: > > > > > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > > > > > > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. > > > > > > > > > > > > Thank you so much > > > > > > > > > > > > Jing > > > > > > > > > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > > > Interface Config Guide (Openstack) > > > > > > > > > > > > Hi, > > > In Openstack train release, creating Octavia LB on SRIOV network fails. > > > I come here to search if there is already a plan to add this support, and see this story. > > > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > > > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > > > Thank you > > > > > > > > > > > > > > > > > > > > > > > > > From james.slagle at gmail.com Mon Oct 11 17:35:24 2021 From: james.slagle at gmail.com (James Slagle) Date: Mon, 11 Oct 2021 13:35:24 -0400 Subject: [TripleO] PTG proposed schedule Message-ID: I have put up a tentative schedule in the etherpad for each of our proposed sessions: https://etherpad.opendev.org/p/tripleo-yoga-topics If there are any scheduling conflicts with other sessions, please let me know and we will do our best to adjust. We also have time to add a 4th session on Monday, Tuesday, Wednesday, so if you have some last minute topics, feel free to add them. Thanks, and looking forward to seeing (virtually) everyone next week! -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Mon Oct 11 18:00:13 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Oct 2021 11:00:13 -0700 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Ah, so that is probably the issue. Nova doesn't support the interface attach for SRIOV in Train. We do currently require that the port be hot plugged after boot. I would still be interested in seeing the log messages, just to confirm that is the issue or if we have other work to do. The vnic_type=direct should not be an issue as the port is being passed into Octavia pre-created. I think it was already mentioned that the port was successful when used during boot via the --nic option. Thanks for the pointer Sean. Michael On Mon, Oct 11, 2021 at 10:22 AM Sean Mooney wrote: > > On Mon, Oct 11, 2021 at 6:12 PM Michael Johnson wrote: > > > > Interesting, thank you for trying that out. > > > > We call the nova "interface_attach" and pass in the port_id you > > provided on the load balancer create command line. > > > > In the worker log, above the "tree" log lines, is there another ERROR > > log line that includes the exception returned from nova? > until very recently nova did not support interface attach for sriov interfaces. > https://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/sriov-interface-attach-detach.html > today we do allow it but we do not guarentee it will work. > > if there are not enoch pci slots in the vm or there are not enough VF > on the host > that are attached to the correct phsynet the attach will fail. 
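A couple of quick host-side checks can help rule out VF exhaustion in this situation (a rough sketch, assuming the PF is ens1f0 as in the whitelist shown earlier in the thread; adjust the interface name for your host):

$ ip link show ens1f0                               # lists each configured VF and its MAC/state
$ cat /sys/class/net/ens1f0/device/sriov_numvfs     # number of VFs currently enabled on the PF
$ lspci -nn | grep -ci "virtual function"           # VF PCI devices visible on the host

If no free VF remains on the physnet the port is bound to, the hot-plug described above will fail no matter what Octavia does.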
> the most comon reason the attach fails is either numa affintiy cannot > be acived or there is an issue in the guest/qemu > the guest kernel need to repond to the hotplug event when qemu tries > to add the device if it does not it will fail. > > keeping all of tha tin mind for sriov attach to work octavia will have > to create the port with vnic_type=driect or one of the other valid > options like macvtap or direct phsyical. > you cannot attach sriov device that can be used with octavia using > flavor extra specs. > > > > > Also, I would be interested to see what nova logged as to why it was > > unable to attach the port. That may be in the main nova logs, or > > possibly on the compute host nova logs. > > > > Michael > > > > On Thu, Oct 7, 2021 at 5:36 PM Zhang, Jing C. (Nokia - CA/Ottawa) > > wrote: > > > > > > Hi Michael, > > > > > > I made a mistake when creating VM manually, I should use --nic option not --network option. After correcting that, I can create VM with the extra-flavor: > > > > > > $ openstack server create --flavor octavia-flavor --image Centos7 --nic port-id=test-port --security-group demo-secgroup --key-name demo-key test-vm > > > > > > $ nova list --all --fields name,status,host,networks | grep test-vm > > > | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149 > > > > > > A 2nd VF interface is seen inside the VM: > > > > > > [centos at test-vm ~]$ ip a > > > ... > > > 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 > > > link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff > > > > > > This MAC is not seen by neutron though: > > > > > > $ openstack port list | grep 0a:b2:d4:85:a2:e6 > > > > > > [empty] > > > > > > ===================== > > > However when I tried to create LB with the same VM flavor, it failed at the same place as before. > > > > > > Looking at worker.log, it seems the error is similar to use --network option to create the VM manually. But you are the expert. > > > > > > "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52" > > > > > > Here is the full list of command line: > > > > > > $ openstack flavor list | grep octavia-flavor > > > | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True | > > > > > > openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}' > > > openstack loadbalancer flavor create --name of1 --flavorprofile ofp1 --enable > > > openstack loadbalancer create --name lb1 --flavor of1 --vip-port-id test-port --vip-subnet-id ext-subnet1 > > > > > > > > > |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. 
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker loadbalancer, loadbalancer.vip, amphora, subnet) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker interface = self._plug_amphora_vip(amphora, subnet) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker raise base.PlugVIPException(message) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker > > > > > > > > > -----Original Message----- > > > From: Zhang, Jing C. (Nokia - CA/Ottawa) > > > Sent: Thursday, October 7, 2021 6:18 PM > > > To: Michael Johnson > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: RE: [Octavia] Can not create LB on SRIOV network > > > > > > Hi Michael, > > > > > > Thank you so much for the information. > > > > > > I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. > > > > > > However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: > > > https://docs.openstack.org/nova/train/admin/pci-passthrough.html > > > https://docs.openstack.org/nova/latest/admin/pci-passthrough.html > > > > > > ========================= > > > Here is the detail: > > > > > > Env: NIC is intel 82599, creating VM with SRIOV direct port works well. 
> > > > > > Nova.conf > > > > > > passthrough_whitelist={"devname":"ens1f0","physical_network":"physnet5"} > > > passthrough_whitelist={"devname":"ens1f1","physical_network":"physnet6"} > > > > > > Sriov_agent.ini > > > > > > [sriov_nic] > > > physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 > > > > > > (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: > > > > > > alias = { "vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf", "numa_policy": "required" } > > > > > > (2) Used the extra-spec in nova flavor > > > > > > openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" > > > > > > (3) Failed to create VM with this flavor, sriov agent log does not show port event, for sure also failed to create LB, PortBindingFailed > > > > > > > > > (4) Tried multiple formats to add whitelist for PF and VF in nova.conf for nova-compute, and retried, still failed > > > > > > passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","devname":"ens1f0","physical_network":"physnet5"} #PF passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","physical_network":"physnet5"} #VF > > > > > > The sriov agent log does not show port event for any of them. > > > > > > > > > > > > > > > -----Original Message----- > > > From: Michael Johnson > > > Sent: Wednesday, October 6, 2021 4:48 PM > > > To: Zhang, Jing C. (Nokia - CA/Ottawa) > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: Re: [Octavia] Can not create LB on SRIOV network > > > > > > Hi Jing, > > > > > > To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. > > > > > > It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. > > > > > > You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. > > > This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. > > > > > > I have not tried this and would be interested to hear if it works for you. > > > > > > If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. > > > > > > Michael > > > > > > [1] https://wiki.openstack.org/wiki/Octavia/Roadmap > > > [2] https://docs.openstack.org/nova/xena/configuration/extra-specs.html > > > [3] https://docs.openstack.org/octavia/latest/admin/flavors.html > > > [4] https://etherpad.opendev.org/p/yoga-ptg-octavia > > > > > > On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > > > > > > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > > > > > > > > > > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. 
> > > > > > > > > > > > > > > > Thank you so much > > > > > > > > > > > > > > > > Jing > > > > > > > > > > > > > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > > > > Interface Config Guide (Openstack) > > > > > > > > > > > > > > > > Hi, > > > > In Openstack train release, creating Octavia LB on SRIOV network fails. > > > > I come here to search if there is already a plan to add this support, and see this story. > > > > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > > > > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > > > > Thank you > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From ihrachys at redhat.com Mon Oct 11 18:05:10 2021 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 11 Oct 2021 14:05:10 -0400 Subject: [neutron] openflow rules tools In-Reply-To: References: Message-ID: On 10/11/21 9:39 AM, Arnaud Morin wrote: > Hello, > > When using native ovs in neutron, we endup with a lot of openflow rules > on ovs side. > > Debugging it with regular ovs-ofctl --color dump-flows is kind of > painful. > > Is there any tool that the community is using to manage that? You can check SB Logical_Flow table with ovn-sbctl lflow-list. You can also use ovn-trace(8) to inspect OVN pipeline behavior. Ihar From arnaud.morin at gmail.com Mon Oct 11 18:05:40 2021 From: arnaud.morin at gmail.com (Arnaud) Date: Mon, 11 Oct 2021 20:05:40 +0200 Subject: [neutron] openflow rules tools In-Reply-To: References: Message-ID: That would be awesome! We also built a tool which is looking for openflow rules related to a tap interface, but since we upgraded and enabled security rules in ovs, the tool isn't working anymore. So before rewriting everything from scratch, I was wondering if the community was also dealing with the same issue. So I am glad to here from you! Let me know :) Cheers Le 11 octobre 2021 17:52:52 GMT+02:00, Laurent Dumont a ?crit?: >Also interested in this. Reading rules in dump-flows is an absolute pain. >In an ideal world, I would have never have to. > >We some stuff on our side that I'll see if I can share. > >On Mon, Oct 11, 2021 at 9:41 AM Arnaud Morin wrote: > >> Hello, >> >> When using native ovs in neutron, we endup with a lot of openflow rules >> on ovs side. >> >> Debugging it with regular ovs-ofctl --color dump-flows is kind of >> painful. >> >> Is there any tool that the community is using to manage that? >> >> Thanks in advance! >> >> Arnaud. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajiv.mucheli at gmail.com Mon Oct 11 16:55:16 2021 From: rajiv.mucheli at gmail.com (rajiv mucheli) Date: Mon, 11 Oct 2021 22:25:16 +0530 Subject: [Barbican] HSM integration with FIPS Operation Enabled Message-ID: Hi, I looked into the available documentation and article but i had no luck validating if Openstack Barbican integration with FIPS Operation mode Enabled works. Any suggestions? The below barbican backend guide shares the available plugins with HSM : https://docs.openstack.org/security-guide/secrets-management/barbican.html Does Barbican now support module generated IV ? which is required for FIPS support in Thales A790 HSM. Regards, Rajiv -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abraden at verisign.com Mon Oct 11 18:48:07 2021 From: abraden at verisign.com (Braden, Albert) Date: Mon, 11 Oct 2021 18:48:07 +0000 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: References: Message-ID: <7b85e6646792469aaa7e513ecfda8551@verisign.com> I think so. I see this: ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }} ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes" Did anything with the distributed lock manager between Queens and Train? -----Original Message----- From: Michael Johnson Sent: Monday, October 11, 2021 1:15 PM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. Hi Albert, Have you configured your distributed lock manager for Designate? [coordination] backend_url = Michael On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > Before applying the change, we see the DNS record in the recordset: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > > > and we can pull it from the DNS server on the controllers: > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > After applying the change, we don?t see it: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > $ openstack recordset list dva3.vrsn.com. 
--all |grep openstack-terra > > $ > > > > We see this in the logs: > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From abraden at verisign.com Mon Oct 11 18:57:06 2021 From: abraden at verisign.com (Braden, Albert) Date: Mon, 11 Oct 2021 18:57:06 +0000 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: <7b85e6646792469aaa7e513ecfda8551@verisign.com> References: <7b85e6646792469aaa7e513ecfda8551@verisign.com> Message-ID: After investigating further, I realized that we're not running redis, and I think that means that redis_connection_string doesn't get set. Does this mean that we must run redis, or is there a workaround? -----Original Message----- From: Braden, Albert Sent: Monday, October 11, 2021 2:48 PM To: 'johnsomor at gmail.com' Cc: 'openstack-discuss at lists.openstack.org' Subject: RE: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail I think so. I see this: ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }} ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes" Did anything with the distributed lock manager between Queens and Train? -----Original Message----- From: Michael Johnson Sent: Monday, October 11, 2021 1:15 PM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. Hi Albert, Have you configured your distributed lock manager for Designate? 
[coordination] backend_url = Michael On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > Before applying the change, we see the DNS record in the recordset: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > > > and we can pull it from the DNS server on the controllers: > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > After applying the change, we don?t see it: > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > $ > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > $ > > > > We see this in the logs: > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From gmann at ghanshyammann.com Mon Oct 11 19:02:32 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 11 Oct 2021 14:02:32 -0500 Subject: [Xena] It works! 
In-Reply-To: <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> <20211011142008.heixx43ckfsaoyd4@yuggoth.org> <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> Message-ID: <17c70bc2a63.ede9811c850136.3432835365508929383@ghanshyammann.com> ---- On Mon, 11 Oct 2021 11:49:51 -0500 tjoen wrote ---- > On 10/11/21 16:20, Jeremy Stanley wrote: > > On 2021-10-11 10:49:38 +0200 (+0200), tjoen wrote: > >> Just testing every release since Train on an LFS system with Python-3.9 > >> cryptography-35.0.0 is necessary > > Forgotten to mention that that new cryptography only applies to > opensl-3.0.0 Just to note, we did Xena testing with py3.9 but as non-voting jobs with cryptography===3.4.8. Now cryptography 35.0.0 is used for current master branch testing (non voting py3.9). -gmann > > > Thanks for testing! Just be aware that Python 3.8 is the most recent > > interpreter targeted by Xena: > > > > https://governance.openstack.org/tc/reference/runtimes/xena.html > > Thx for that link. I'll consult it at next release > > > Discussion is underway at the PTG next week to determine what the > > tested runtimes should be for Yoga, but testing with 3.9 is being > > suggested (or maybe even 3.10): > > I have put 2022-03-30 in my agenda > > > https://etherpad.opendev.org/p/tc-yoga-ptg > > > > > From johnsomor at gmail.com Mon Oct 11 20:24:24 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Oct 2021 13:24:24 -0700 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: References: <7b85e6646792469aaa7e513ecfda8551@verisign.com> Message-ID: You will need one of the Tooz supported distributed lock managers: Consul, Memcacded, Redis, or zookeeper. Michael On Mon, Oct 11, 2021 at 11:57 AM Braden, Albert wrote: > > After investigating further, I realized that we're not running redis, and I think that means that redis_connection_string doesn't get set. Does this mean that we must run redis, or is there a workaround? > > -----Original Message----- > From: Braden, Albert > Sent: Monday, October 11, 2021 2:48 PM > To: 'johnsomor at gmail.com' > Cc: 'openstack-discuss at lists.openstack.org' > Subject: RE: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > I think so. I see this: > > ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }} > > ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes" > > Did anything with the distributed lock manager between Queens and Train? > > -----Original Message----- > From: Michael Johnson > Sent: Monday, October 11, 2021 1:15 PM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. 
> > Hi Albert, > > Have you configured your distributed lock manager for Designate? > > [coordination] > backend_url = > > Michael > > On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > > > > > Before applying the change, we see the DNS record in the recordset: > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > > > $ > > > > > > > > and we can pull it from the DNS server on the controllers: > > > > > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > > > > > After applying the change, we don?t see it: > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > > > $ > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > $ > > > > > > > > We see this in the logs: > > > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. 
Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From skaplons at redhat.com Mon Oct 11 20:40:25 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 11 Oct 2021 22:40:25 +0200 Subject: [neutron] openflow rules tools In-Reply-To: References: Message-ID: <6212551.lOV4Wx5bFT@p1> Hi, For OVN with have small tool ml2ovn-trace: https://docs.openstack.org/neutron/ latest/ovn/ml2ovn_trace.html in the neutron repo https://docs.openstack.org/ neutron/latest/ovn/ml2ovn_trace.html but that will not be helpful for ML2/OVS at all. On poniedzia?ek, 11 pa?dziernika 2021 20:05:40 CEST Arnaud wrote: > That would be awesome! > > We also built a tool which is looking for openflow rules related to a tap > interface, but since we upgraded and enabled security rules in ovs, the tool > isn't working anymore. Yes, for ML2/OVS with ovs firewall driver it is really painful to debug all those OF rules. > > So before rewriting everything from scratch, I was wondering if the community > was also dealing with the same issue. If You will have anything like that, please share with community :) > > So I am glad to here from you! > Let me know :) > Cheers > > Le 11 octobre 2021 17:52:52 GMT+02:00, Laurent Dumont a ?crit : > >Also interested in this. Reading rules in dump-flows is an absolute pain. > >In an ideal world, I would have never have to. > > > >We some stuff on our side that I'll see if I can share. > > > >On Mon, Oct 11, 2021 at 9:41 AM Arnaud Morin wrote: > >> Hello, > >> > >> When using native ovs in neutron, we endup with a lot of openflow rules > >> on ovs side. > >> > >> Debugging it with regular ovs-ofctl --color dump-flows is kind of > >> painful. > >> > >> Is there any tool that the community is using to manage that? > >> > >> Thanks in advance! > >> > >> Arnaud. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ildiko.vancsa at gmail.com Mon Oct 11 22:45:20 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 11 Oct 2021 15:45:20 -0700 Subject: Edge sessions at the upcoming PTG In-Reply-To: References: Message-ID: <045A602F-C8A8-467A-83E8-A1546C3197D0@gmail.com> Hi, As it?s less than one week now until the PTG, I wanted to put the edge sessions the OpenInfra Edge Computing Group is planning for the PTG on your radar: https://superuser.openstack.org/articles/the-project-teams-gathering-is-coming-lets-talk-edge/ We have topics such as networking, APIs, and automation in edge infrastructures that are relevant for OpenStack as well and it would be great to have the community?s input on these. Our etherpad for the sessions: https://etherpad.opendev.org/p/ecg-ptg-october-2021 Please let me know if you have any questions about the agenda or topics. Thanks and Best Regards, Ildik? > On Sep 27, 2021, at 18:31, Ildiko Vancsa wrote: > > Hi, > > It is a friendly reminder to please check out the edge session the OpenInfra Edge Computing Group is planning for the PTG: https://superuser.openstack.org/articles/the-project-teams-gathering-is-coming-lets-talk-edge/ > > We have topics such as networking, APIs, and automation in edge infrastructures that are relevant for OpenStack as well and it would be great to have the community?s input on these. 
> > Our etherpad for the sessions: https://etherpad.opendev.org/p/ecg-ptg-october-2021 > > Please let me know if you have any questions about the agenda or topics. > > Thanks and Best Regards, > Ildik? > > >> On Sep 7, 2021, at 16:22, Ildiko Vancsa wrote: >> >> Hi, >> >> I?m reaching out to you to share the agenda of the OpenInfra Edge Computing Group that we put together for the upcoming PTG. I would like to invite everyone who is interested in discussing edge challenges and finding solutions! >> >> We summarized our plans for the event in a short blog post to give some context to each of the topic that we picked to discuss in details. We picked key areas like security, networking, automation and tools, containers and more: https://superuser.openstack.org/articles/the-project-teams-gathering-is-coming-lets-talk-edge/ >> >> Our etherpad for the sessions: https://etherpad.opendev.org/p/ecg-ptg-october-2021 >> >> Please let me know if you have any questions about the agenda or topics. >> >> Thanks and Best Regards, >> Ildik? >> >> > From tjoen at dds.nl Tue Oct 12 06:44:55 2021 From: tjoen at dds.nl (tjoen) Date: Tue, 12 Oct 2021 08:44:55 +0200 Subject: [Xena] It works! In-Reply-To: <17c70bc2a63.ede9811c850136.3432835365508929383@ghanshyammann.com> References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> <20211011142008.heixx43ckfsaoyd4@yuggoth.org> <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> <17c70bc2a63.ede9811c850136.3432835365508929383@ghanshyammann.com> Message-ID: On 10/11/21 21:02, Ghanshyam Mann wrote: > ---- On Mon, 11 Oct 2021 11:49:51 -0500 tjoen wrote ---- > > > On 2021-10-11 10:49:38 +0200 (+0200), tjoen wrote: > > >> Just testing every release since Train on an LFS system with Python-3.9 > > >> cryptography-35.0.0 is necessary > > > > Forgotten to mention that that new cryptography only applies to > > opensl-3.0.0 > > Just to note, we did Xena testing with py3.9 but as non-voting jobs with > cryptography===3.4.8. That was the version causing segfaults with openssl-3 Worked in Wallaby with openssl-1.1.1l > Now cryptography 35.0.0 is used for current master branch testing (non > voting py3.9). With openssl-3 I hope From mark at stackhpc.com Tue Oct 12 07:37:06 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 12 Oct 2021 08:37:06 +0100 Subject: Restarting Openstack Victoria using kolla-ansible In-Reply-To: References: Message-ID: On Mon, 11 Oct 2021 at 14:39, Karera Tony wrote: > > Hello Team, > I am trying to deploy openstack Victoria ..... > Below I install kolla-ansible on the deployment server , I first clone the * git clone --branch stable/victoria https://opendev.org/openstack/kolla-ansible* but when I run the deployment without uncommenting openstack_release .. By default it deploys wallaby > And when I uncomment and type victoria ...Some of the containers keep restarting esp Horizon > Any idea on how to resolve this ? > Even the kolla content that I use for deployment, I get it from the kolla-ansible directory that I cloned > Regards Hi Tony, kolla-ansible deploys the tag in the openstack_release variable - the default is victoria in the stable/victoria branch. Perhaps you have overridden this via globals.yml, or are accidentally using the stable/wallaby branch? 
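For reference, a minimal globals.yml sketch of the override being discussed (only needed if you want to pin the tag explicitly; on the stable/victoria branch the default is already victoria):

# /etc/kolla/globals.yml
openstack_release: "victoria"

Mixing a stable/wallaby checkout of kolla-ansible with victoria images can lead to the kind of container restarts described above.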
Mark > > Tony Karera > > From tonykarera at gmail.com Tue Oct 12 07:45:45 2021 From: tonykarera at gmail.com (Karera Tony) Date: Tue, 12 Oct 2021 09:45:45 +0200 Subject: Restarting Openstack Victoria using kolla-ansible In-Reply-To: References: Message-ID: Hello Goddard, Actually when you just install kolla-ansible. It defaults to wallaby so what I did is to clone the victoria packages with git clone --branch stable/victoria https://opendev.org/openstack/kolla-ansible and then install kolla-ansible and used the packages for victoria Regards Tony Karera On Tue, Oct 12, 2021 at 9:37 AM Mark Goddard wrote: > On Mon, 11 Oct 2021 at 14:39, Karera Tony wrote: > > > > Hello Team, > > I am trying to deploy openstack Victoria ..... > > Below I install kolla-ansible on the deployment server , I first clone > the * git clone --branch stable/victoria > https://opendev.org/openstack/kolla-ansible* but when I run the > deployment without uncommenting openstack_release .. By default it deploys > wallaby > > And when I uncomment and type victoria ...Some of the containers keep > restarting esp Horizon > > Any idea on how to resolve this ? > > Even the kolla content that I use for deployment, I get it from the > kolla-ansible directory that I cloned > > Regards > > Hi Tony, > kolla-ansible deploys the tag in the openstack_release variable - the > default is victoria in the stable/victoria branch. Perhaps you have > overridden this via globals.yml, or are accidentally using the > stable/wallaby branch? > Mark > > > > > Tony Karera > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Tue Oct 12 08:27:56 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 12 Oct 2021 13:27:56 +0500 Subject: [wallaby][neutron] Distributed floating IP Message-ID: Hi, I am using openstack wallaby. I have seen an issue not sure if its a bug or configuration related issue. I am using ml2/ovn backend with distributed floating IP enabled. I have made my compute node 1 as a gateway chassis where the routers are scheduled. I have then created an instance and NATed a public IP. The instance deployed on compute 2. When I see the IP address via curl ipinfo.io it shows the floating IP that I have NATed. Then I migrated the instance to compute node 1. I had many ping drops for a couple of seconds then its back to normal. I have then seen the IP address via curl ipinfo.io. It showed me the SNAT IP address of router. Then I migrated the instance back to compute node 2, I had ping drops for 20 seconds and then the instance came back. I have seen the IP via curl, it showed the floating IP that I have nated with instance. Is it the expected behavior ? Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From jazeltq at gmail.com Tue Oct 12 08:31:42 2021 From: jazeltq at gmail.com (Jaze Lee) Date: Tue, 12 Oct 2021 16:31:42 +0800 Subject: [nova & libvirt] about attach disk on aarch64 Message-ID: Hello, We run stein openstack in our environment. And already set libvirt value in nova.conf hw_machine_type=aarch64=virt num_pcie_ports = 15 We test and find sometimes disks can not be attached correctly. For example, Built vm with 6 disks, only three disk be there. The others will be inactive in virsh. No obvious error can be found in nova-compute, libvirt, vm os logs. libvirt:5.0.0 qemu:2.12.0-44 openstack-nova: 19.3.2 librbd1:ibrbd1-14.2.16-1.el7.aarch64 Any Suggestions? 
Thanks a lot From mdemaced at redhat.com Tue Oct 12 10:04:34 2021 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Tue, 12 Oct 2021 12:04:34 +0200 Subject: [kuryr] Virtual PTG October 2021 In-Reply-To: References: Message-ID: Hello, Small update on the Kuryr session: the session that would happen on Oct 22 at 13-14 UTC got moved to Oct 20 13-14 UTC in the *Bexar* room. See you there. Cheers, Maysa Macedo. On Tue, Oct 5, 2021 at 12:05 PM Maysa De Macedo Souza wrote: > Hello, > > With the PTG approaching I would like to remind you that the Kuryr > sessions will be held on Oct 19 7-8 UTC and Oct 22 13-14 UTC > and in case you're interested in discussing any topic with the Kuryr team > to include it to the etherpad[1]. > > [1] https://etherpad.opendev.org/p/kuryr-yoga-ptg > > See you on the PTG. > > Thanks, > Maysa Macedo. > > On Thu, Jul 22, 2021 at 11:36 AM Maysa De Macedo Souza < > mdemaced at redhat.com> wrote: > >> Hello, >> >> I booked the following slots for Kuryr during the Yoga PTG: Oct 19 7-8 >> UTC and Oct 22 13-14 UTC. >> If you have any topic ideas you would like to discuss, please include >> them in the etherpad[1], >> also it would be interesting to include your name there if you plan to >> attend any Kuryr session. >> >> See you on the next PTG. >> >> Cheers, >> Maysa. >> >> [1] https://etherpad.opendev.org/p/kuryr-yoga-ptg >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Oct 12 10:18:05 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 12 Oct 2021 11:18:05 +0100 Subject: [docs] Keystone docs missing for Xena In-Reply-To: <8e9b9ddd-48c4-c144-cad6-63ec0682c5e8@nemebean.com> References: <8e9b9ddd-48c4-c144-cad6-63ec0682c5e8@nemebean.com> Message-ID: On Mon, 2021-10-11 at 10:27 -0500, Ben Nemec wrote: > Hey, > > I was just looking for the Keystone docs and discovered that they are > not listed on https://docs.openstack.org/xena/projects.html. If I > s/wallaby/xena/ on the wallaby version then it resolves, so it looks > like the docs are published they just aren't included in the index for > some reason. That's because Keystone hadn't merge a patch to the stable/xena branch when we created [1]. We need to uncomment the project (and any other projects that now have docs) in the 'www/project-data/xena.yaml' file in openstack-manuals. Stephen [1] https://review.opendev.org/c/openstack/openstack-manuals/+/812120 > > -Ben > From dalvarez at redhat.com Tue Oct 12 10:26:53 2021 From: dalvarez at redhat.com (Daniel Alvarez) Date: Tue, 12 Oct 2021 12:26:53 +0200 Subject: [wallaby][neutron] Distributed floating IP In-Reply-To: References: Message-ID: <5413F2AA-1FD9-42F3-A2C5-987BE418E610@redhat.com> Hi Ammad > On 12 Oct 2021, at 10:33, Ammad Syed wrote: > > ? > Hi, > > I am using openstack wallaby. I have seen an issue not sure if its a bug or configuration related issue. I am using ml2/ovn backend with distributed floating IP enabled. I have made my compute node 1 as a gateway chassis where the routers are scheduled. > > I have then created an instance and NATed a public IP. The instance deployed on compute 2. When I see the IP address via curl ipinfo.io it shows the floating IP that I have NATed. > > Then I migrated the instance to compute node 1. I had many ping drops for a couple of seconds then its back to normal. I have then seen the IP address via curl ipinfo.io. It showed me the SNAT IP address of router. 
Could it be that the compute 1 is not properly configured with a connection on the public network? Provider bridge, correct bridge mappings and so on and then the traffic falls back to centralized? > > Then I migrated the instance back to compute node 2, I had ping drops for 20 seconds and then the instance came back. I have seen the IP via curl, it showed the floating IP that I have nated with instance. > > Is it the expected behavior ? > > Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Tue Oct 12 11:38:08 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 12 Oct 2021 16:38:08 +0500 Subject: [wallaby][neutron] Distributed floating IP In-Reply-To: <5413F2AA-1FD9-42F3-A2C5-987BE418E610@redhat.com> References: <5413F2AA-1FD9-42F3-A2C5-987BE418E610@redhat.com> Message-ID: All three nodes are exactly identical. I am able to take the SSH of the VM via floating IP attached to it but the reverse traffic is getting out with the SNAT IP of the router when I put the VM on the gateway chassis. Ammad On Tue, Oct 12, 2021 at 3:26 PM Daniel Alvarez wrote: > > Hi Ammad > > > On 12 Oct 2021, at 10:33, Ammad Syed wrote: > > ? > Hi, > > I am using openstack wallaby. I have seen an issue not sure if its a bug > or configuration related issue. I am using ml2/ovn backend with distributed > floating IP enabled. I have made my compute node 1 as a gateway chassis > where the routers are scheduled. > > I have then created an instance and NATed a public IP. The instance > deployed on compute 2. When I see the IP address via curl ipinfo.io it > shows the floating IP that I have NATed. > > Then I migrated the instance to compute node 1. I had many ping drops for > a couple of seconds then its back to normal. I have then seen the IP > address via curl ipinfo.io. It showed me the SNAT IP address of router. > > > Could it be that the compute 1 is not properly configured with a > connection on the public network? Provider bridge, correct bridge mappings > and so on and then the traffic falls back to centralized? > > > > Then I migrated the instance back to compute node 2, I had ping drops for > 20 seconds and then the instance came back. I have seen the IP via curl, it > showed the floating IP that I have nated with instance. > > Is it the expected behavior ? > > Ammad > > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Tue Oct 12 12:06:28 2021 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Tue, 12 Oct 2021 14:06:28 +0200 Subject: [nova & libvirt] about attach disk on aarch64 In-Reply-To: References: Message-ID: <6538214e-645b-c45b-fecf-401aeb508ef6@linaro.org> W dniu 12.10.2021 o?10:31, Jaze Lee pisze: > Hello, > We run stein openstack in our environment. And already set libvirt > value in nova.conf > hw_machine_type=aarch64=virt > num_pcie_ports = 15 > > We test and find sometimes disks can not be attached correctly. > For example, > Built vm with 6 disks, only three disk be there. The others will be > inactive in virsh. > No obvious error can be found in nova-compute, libvirt, vm os logs. > > libvirt:5.0.0 > qemu:2.12.0-44 > openstack-nova: 19.3.2 > librbd1:ibrbd1-14.2.16-1.el7.aarch64 > > Any Suggestions? Update to Wallaby on top of CentOS Stream 8? Will get whole stack update. Stein is not supported anymore. And I wonder does someone support AArch64 in CentOS 7 (RHEL 7 does not). 
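As a quick check for the disk hot-plug question in this thread, something along these lines on the compute node can show whether the guest received enough PCIe root ports and which disks libvirt actually has attached (the instance name below is a placeholder):

$ virsh dumpxml instance-0000000a | grep -c 'pcie-root-port'    # should roughly match num_pcie_ports
$ virsh domblklist instance-0000000a --details                  # compare against the volumes Nova reports

If the root ports are there but some disks never show up in the guest, the guest kernel's handling of the hot-plug event is worth a look as well.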
From dtantsur at redhat.com Tue Oct 12 12:29:53 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 12 Oct 2021 14:29:53 +0200 Subject: [stable][requirements][zuul] unpinned setuptools dependency on stable In-Reply-To: <20210924143600.yfjuxerlid52vlji@yuggoth.org> References: <6J4UZQ.VOBD0LVDTPUX1@est.tech> <20210924143600.yfjuxerlid52vlji@yuggoth.org> Message-ID: On Fri, Sep 24, 2021 at 4:40 PM Jeremy Stanley wrote: > On 2021-09-24 07:19:03 -0600 (-0600), Alex Schultz wrote: > [...] > > JFYI as I was looking into some other requirements issues > > yesterday, I hit this error with anyjson[0] 0.3.3 as well. It's > > used in a handful of projects[1] and there has not been a release > > since 2012[2] so this might be a problem in xena. I haven't > > checked the projects respective gates, but just want to highlight > > we'll probably have additional fallout from the setuptools change. > [...] > > Yes, we've also run into similar problems with pydot2 and > funcparserlib, and I'm sure there's plenty more of what is > effectively abandonware lingering in various projects' requirements > lists. The long and short of it is that people with newer versions > of SetupTools are going to be unable to install those, full stop. > The maintainers of some of them may be spurred to action and release > a new version, but in so doing may also drop support for older > interpreters we still test with on some stable branches (this was > the case with funcparserlib). > Apparently, suds-jurko has the same problem, breaking oslo.vmware [1] and thus cinder. Dmitry [1] https://review.opendev.org/c/openstack/oslo.vmware/+/813377 > > On the other hand, controlling what version of SetupTools others > have and use isn't always possible, unlike runtime dependencies, so > that really should be a solution of last resort. Making exceptions > to stable branch policy in unusual circumstances such as this seems > like a reasonable and more effective compromise. > -- > Jeremy Stanley > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraden at verisign.com Tue Oct 12 12:47:46 2021 From: abraden at verisign.com (Braden, Albert) Date: Tue, 12 Oct 2021 12:47:46 +0000 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: References: <7b85e6646792469aaa7e513ecfda8551@verisign.com> Message-ID: Thank you Michael, this is very helpful. Do you have any insight into why we don't experience this in Queens clusters? We aren't running a lock manager there either, and I haven't been able to duplicate the problem there. -----Original Message----- From: Michael Johnson Sent: Monday, October 11, 2021 4:24 PM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. You will need one of the Tooz supported distributed lock managers: Consul, Memcacded, Redis, or zookeeper. 
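For completeness, a minimal designate.conf sketch of the coordination section Michael is referring to (the endpoint below is a placeholder; any of the Tooz backends he lists works the same way):

[coordination]
# point this at your memcached / redis / zookeeper / consul service
backend_url = memcached://192.0.2.10:11211

With kolla-ansible, the designate.conf.j2 line quoted earlier would normally populate this from redis_connection_string, which in turn assumes a Redis service is actually deployed alongside Designate.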
Michael On Mon, Oct 11, 2021 at 11:57 AM Braden, Albert wrote: > > After investigating further, I realized that we're not running redis, and I think that means that redis_connection_string doesn't get set. Does this mean that we must run redis, or is there a workaround? > > -----Original Message----- > From: Braden, Albert > Sent: Monday, October 11, 2021 2:48 PM > To: 'johnsomor at gmail.com' > Cc: 'openstack-discuss at lists.openstack.org' > Subject: RE: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > I think so. I see this: > > ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }} > > ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes" > > Did anything with the distributed lock manager between Queens and Train? > > -----Original Message----- > From: Michael Johnson > Sent: Monday, October 11, 2021 1:15 PM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. > > Hi Albert, > > Have you configured your distributed lock manager for Designate? > > [coordination] > backend_url = > > Michael > > On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > > > > > Before applying the change, we see the DNS record in the recordset: > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > > > $ > > > > > > > > and we can pull it from the DNS server on the controllers: > > > > > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > > > > > After applying the change, we don?t see it: > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 
| A | 10.220.4.89 | ACTIVE | NONE | > > > > $ > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > $ > > > > > > > > We see this in the logs: > > > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From iurygregory at gmail.com Tue Oct 12 12:48:51 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Tue, 12 Oct 2021 14:48:51 +0200 Subject: [ironic] No weekly meeting on Oct18 Message-ID: Hello ironicers! Just a reminder that on Oct 18, we won't have our weekly meeting because we have a session in the PTG. -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Oct 12 13:03:11 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 12 Oct 2021 13:03:11 +0000 Subject: [Xena] It works! In-Reply-To: References: <0b302e18-d9a3-6b80-3049-e2f9933bec1f@dds.nl> <20211011142008.heixx43ckfsaoyd4@yuggoth.org> <69f236d2-0441-c688-562d-19c818b8030a@dds.nl> <17c70bc2a63.ede9811c850136.3432835365508929383@ghanshyammann.com> Message-ID: <20211012130310.iqm4d3zyqz2cvrso@yuggoth.org> On 2021-10-12 08:44:55 +0200 (+0200), tjoen wrote: > On 10/11/21 21:02, Ghanshyam Mann wrote: [...] > > Now cryptography 35.0.0 is used for current master branch > > testing (non voting py3.9). > > With openssl-3 I hope I don't think any of the LTS distributions we use for testing (CentOS, Ubuntu) have OpenSSL 3.x packages available. Even in Debian, unstable is still using 1.1.1l-1 while 3.0.0-1 is only available from experimental. 
We may start running some tests with OpenSSL 3.x versions once they begin to appear in Debian/testing or a new Fedora version, but widespread testing with it likely won't happen until we add CentOS 9 Stream or Ubuntu 22.04 LTS (assuming one of those provides it once they exist). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rigault.francois at gmail.com Tue Oct 12 14:03:58 2021 From: rigault.francois at gmail.com (Francois) Date: Tue, 12 Oct 2021 16:03:58 +0200 Subject: [neutron] OVN and dynamic routing Message-ID: Hello Neutron! I am looking into running stacks with OVN on a leaf-spine network, and have some floating IPs routed between racks. Basically each rack is assigned its own set of subnets. Some VLANs are stretched across all racks: the provisioning VLAN used by tripleo to deploy the stack, and the VLANs for the controllers API IPs. However, each tenant subnet is local to a rack: for example each OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its own rack. Traffic between 2 racks is sent to a spine, and leaves and spines run some eVPN-like thing: each pair of ToR is a vtep, traffic is encapsulated as VXLAN, and routes between vteps are exchanged with BGP. I am looking into supporting floating IPs in there: I expect floating IPs to be able to move between racks, as such I am looking into publishing the route for a FIP towards an hypervisor, through BGP. Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. It seems there are several ideas to achieve this (it was discussed [before][1] in ovs conference) - using [neutron-dynamic-routing][2] - that seems to have some gaps for OVN. It uses os-ken to talk to switches and exchange routes - using [OVN BGP agent][3] that uses FRR, it seems there is a related [RFE][4] for integration in tripleo There is btw also a [BGPVPN][5] project (it does not match my usecase as far as I tried to understand it) that also has some code that talks BGP to switches, already integrated in tripleo. For my tests, I was able to use the neutron-dynamic-routing project (almost) as documented, with a few changes: - for traffic going from VMs to outside the stack, the hypervisor was trying to resolve the "gateway of fIPs" with ARP request which does not make any sense. I created a dummy port with the mac address of the virtual router of the switches: ``` $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml - Fixed IP Addresses: - ip_address: 10.64.254.1 subnet_id: 8f37 ID: 4028 MAC Address: 00:1c:73:00:00:11 Name: lagw Status: DOWN ``` this prevent the hypervisor to send ARP requests to a non existent gateway - for traffic coming back, we start the neutron-bgp-dragent agent on the controllers. We create the right bgp speaker, peers, etc. - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it selects fips and join with ports owned by a "floatingip_agent_gateway" which does not exist on OVN. 
We can define ourselves some ports so that the dragent is able to find the tenant IP of a host: ``` openstack port create --network provider --device-owner network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip ip-address=10.64.245.102 ag2 ``` - when creating a floating IP and assigning a port to it, Neutron reads changes from OVN SB and fills the binding information into the port: ``` $ openstack port show -c binding_host_id `openstack floating ip show 10.64.254.177 -f value -c port_id` +-----------------+----------------------------------------+ | Field | Value | +-----------------+----------------------------------------+ | binding_host_id | cpu35d.cloud | +-----------------+----------------------------------------+ ``` this allows the dragent to publish the route for the fip ``` $ openstack bgp speaker list advertised routes bgpspeaker +------------------+---------------+ | Destination | Nexthop | +------------------+---------------+ | 10.64.254.177/32 | 10.64.245.102 | +------------------+---------------+ ``` - traffic reaches the hypervisor but (for reason I don't understand) I had to add a rule ``` $ ip rule 0: from all lookup local 32765: from all iif vlan1234 lookup ovn 32766: from all lookup main 32767: from all lookup default $ ip route show table ovn 10.64.254.177 dev vlan1234 scope link ``` so that the traffic coming for the fip is not immediately discarded by the hypervisor (it's not an ideal solution but it is a workaround that makes my one fIP work!) So all in all it seems it would be possible to use the neutron-dynamic-routing agent, with some minor modifications (eg: to also publish the fip of the OVN L3 gateway router). I am wondering whether I have overlooked anything, and if such kind of deployment (OVN + neutron dynamic routing or similar) is already in use somewhere. Does it make sense to have a RFE for better integration between OVN and neutron-dynamic-routing? Thanks Francois [1]: https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf [2]: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html [3]: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ [4]: https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html [5]: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml From dpeacock at redhat.com Tue Oct 12 14:31:56 2021 From: dpeacock at redhat.com (David Peacock) Date: Tue, 12 Oct 2021 10:31:56 -0400 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command In-Reply-To: References: Message-ID: Hi Anirudh, You're hitting a known bug that we're in the process of propagating a fix for; sorry for this. :-) As per a patch we have under review, use the inventory file located under ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml. To generate an inventory file, use the playbook in "tripleo-ansible: cli-config-download.yaml". https://review.opendev.org/c/openstack/tripleo-validations/+/813535 Let us know if this doesn't put you on the right track. 
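For reference, a sketch of passing that inventory explicitly on the command line, assuming the installed python-tripleoclient exposes an --inventory option (older releases may name it differently):

```
$ openstack tripleo validator run --group pre-introspection \
    --inventory ~/tripleo-deploy/tripleo-ansible-inventory.yaml
```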
Thanks, David On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta wrote: > Hi Team, > > I am installing Tripleo using the below link > > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html > > In the Introspect section, When I executed the command > openstack tripleo validator run --group pre-introspection > > I got the following error: > > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | UUID | Validations | > Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | > > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu | > PASSED | localhost | localhost | | 0:00:01.261 | > | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space | > PASSED | localhost | localhost | | 0:00:04.480 | > | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram | > PASSED | localhost | localhost | | 0:00:02.173 | > | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode | > PASSED | localhost | localhost | | 0:00:01.546 | > | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway | > FAILED | undercloud | No host matched | | | > | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space | > FAILED | undercloud | No host matched | | | > | 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check | > FAILED | undercloud | No host matched | | | > | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range | > FAILED | undercloud | No host matched | | | > | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection | > FAILED | undercloud | No host matched | | | > | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush | > FAILED | undercloud | No host matched | | | > > +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ > > > Then I created the following inventory file: > [Undercloud] > undercloud > > Passed this command while running the pre-introspection command. > It then executed successfully. 
> > > But with Pre-deployment, it is still failing even after passing the > inventory > > > +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ > | UUID | Validations > | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration > | > > +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ > | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e > | PASSED | localhost | localhost | | > 0:00:00.504 | > | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns > | PASSED | localhost | localhost | | > 0:00:00.481 | > | 93611c13-49a2-4cae-ad87-099546459481 | service-status > | PASSED | all | undercloud | | > 0:00:06.942 | > | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux > | PASSED | all | undercloud | | > 0:00:02.433 | > | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version > | FAILED | all | undercloud | | > 0:00:03.576 | > | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed > | PASSED | undercloud | undercloud | | > 0:00:02.850 | > | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed > | FAILED | allovercloud | No host matched | | > | > | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment > | FAILED | undercloud | undercloud | | > 0:00:31.559 | > | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug > | FAILED | undercloud | undercloud | | > 0:00:02.057 | > | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | > collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud > | | 0:00:00.884 | > | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted > | FAILED | undercloud | undercloud | | > 0:00:02.138 | > | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count > | PASSED | undercloud | undercloud | | > 0:00:06.164 | > | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count > | FAILED | undercloud | undercloud | | > 0:00:00.934 | > | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning > | FAILED | undercloud | undercloud | | > 0:00:02.456 | > | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration > | FAILED | undercloud | undercloud | | > 0:00:00.882 | > | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment > | FAILED | undercloud | undercloud | | > 0:00:00.880 | > | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks > | FAILED | undercloud | undercloud | | > 0:00:01.934 | > | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans > | FAILED | undercloud | undercloud | | > 0:00:01.931 | > | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding > | PASSED | all | undercloud | | > 0:00:00.366 | > > +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ > > Also this step of passing the inventory file is not mentioned anywhere in > the document. Is there anything I am missing? > > Regards > Anirudh Gupta > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbultel at redhat.com Tue Oct 12 15:02:07 2021 From: mbultel at redhat.com (Mathieu Bultel) Date: Tue, 12 Oct 2021 17:02:07 +0200 Subject: [TripleO] Issue in running Pre-Introspection In-Reply-To: References: Message-ID: Hi, Which release are you using ? You have to provide a valid inventory file via the openstack CLI in order to allow the VF to know which hosts & ips is. 
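A minimal static inventory along the lines of what was used earlier in this thread is enough for the validations that target the undercloud host group; the connection variables below are illustrative assumptions, not values prescribed by the validations themselves:

```
[Undercloud]
undercloud ansible_host=127.0.0.1 ansible_connection=local
```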
Mathieu On Fri, Oct 1, 2021 at 5:17 PM Anirudh Gupta wrote: > Hi Team,, > > Upon further debugging, I found that pre-introspection internally calls > the ansible playbook located at path /usr/share/ansible/validation-playbooks > File "dhcp-introspection.yaml" has hosts mentioned as undercloud. > > - hosts: *undercloud* > become: true > vars: > ... > ... > > > But the artifacts created for dhcp-introspection at > location /home/stack/validations/artifacts/_dhcp-introspection.yaml_2021-10-01T11 > has file *hosts *present which has *localhost* written into it as a > result of which when command gets executed it gives the error *"Could not > match supplied host pattern, ignoring: undercloud:"* > > Can someone suggest how is this artifacts written in tripleo and the way > we can change hosts file entry to undercloud so that it can work > > Similar is the case with other tasks > like undercloud-tokenflush, ctlplane-ip-range etc > > Regards > Anirudh Gupta > > On Wed, Sep 29, 2021 at 4:47 PM Anirudh Gupta wrote: > >> Hi Team, >> >> I tried installing Undercloud using the below link: >> >> >> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud >> >> I am getting the following error: >> >> (undercloud) [stack at undercloud ~]$ openstack tripleo validator run >> --group pre-introspection >> Selected log directory '/home/stack/validations' does not exist. >> Attempting to create it. >> >> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >> | UUID | Validations >> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | >> >> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >> | 7029c1f6-5ab4-465d-82d7-3f29058012ce | check-cpu >> | PASSED | localhost | localhost | | 0:00:02.531 | >> | db059017-30f1-4b97-925e-3f55b586d492 | check-disk-space >> | PASSED | localhost | localhost | | 0:00:04.432 | >> | e23dd9a1-90d3-4797-ae0a-b43e55ab6179 | check-ram >> | PASSED | localhost | localhost | | 0:00:01.324 | >> | 598ca02d-258a-44ad-b78d-3877321cdfe6 | check-selinux-mode >> | PASSED | localhost | localhost | | 0:00:01.591 | >> | c4435b4c-b432-4a1e-8a99-00638034a884 | *check-network-gateway >> | FAILED* | undercloud | *No host matched* | | >> | >> | cb1eed23-ef2f-4acd-a43a-86fb09bf0372 | *undercloud-disk-space >> | FAILED* | undercloud | *No host matched* | | >> | >> | abde5329-9289-4b24-bf16-c4d82b03e67a | *undercloud-neutron-sanity-check >> | FAILED* | undercloud | *No host matched* | | >> | >> | d0e5fdca-ece6-4a37-b759-ed1fac31a10f | *ctlplane-ip-range >> | FAILED* | undercloud | No host matched | | >> | >> | 91511807-225c-4852-bb52-6d0003c51d49 | *dhcp-introspection >> | FAILED* | undercloud | No host matched | | >> | >> | e96f7704-d2fb-465d-972b-47e2f057449c |* undercloud-tokenflush >> | FAILED *| undercloud | No host matched | | >> | >> >> >> As per the validation link, >> >> https://docs.openstack.org/tripleo-validations/wallaby/validations-pre-introspection-details.html >> >> check-network-gateway >> >> If gateway in undercloud.conf is different from local_ip, verify that >> the gateway exists and is reachable >> >> Observation - In my case IP specified in local_ip and gateway, both are >> pingable, but still this error is being observed >> >> >> ctlplane-ip-range? 
>> >> >> Check the number of IP addresses available for the overcloud nodes. >> >> Verify that the number of IP addresses defined in dhcp_start and dhcp_end fields >> in undercloud.conf is not too low. >> >> - >> >> ctlplane_iprange_min_size: 20 >> >> Observation - In my case I have defined more than 20 IPs >> >> >> Similarly for disk related issue, I have dedicated 100 GB space in /var >> and / >> >> Filesystem Size Used Avail Use% Mounted on >> devtmpfs 12G 0 12G 0% /dev >> tmpfs 12G 84K 12G 1% /dev/shm >> tmpfs 12G 8.7M 12G 1% /run >> tmpfs 12G 0 12G 0% /sys/fs/cgroup >> /dev/mapper/cl-root 100G 2.5G 98G 3% / >> /dev/mapper/cl-home 47G 365M 47G 1% /home >> /dev/mapper/cl-var 103G 1.1G 102G 2% /var >> /dev/vda1 947M 200M 747M 22% /boot >> tmpfs 2.4G 0 2.4G 0% /run/user/0 >> tmpfs 2.4G 0 2.4G 0% /run/user/1000 >> >> Despite setting al the parameters, still I am not able to pass >> pre-introspection checks. *"NO Host Matched" *is found in the table. >> >> >> Regards >> >> Anirudh Gupta >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jing.c.zhang at nokia.com Tue Oct 12 14:23:07 2021 From: jing.c.zhang at nokia.com (Zhang, Jing C. (Nokia - CA/Ottawa)) Date: Tue, 12 Oct 2021 14:23:07 +0000 Subject: [Octavia] Can not create LB on SRIOV network In-Reply-To: References: Message-ID: Hi Michael, Nova log does not have other error besides "port-bind-failure...check neutron log"...as if you manually attempts to attach VM to SRIOV provider network not using the direct port type. Jing -----Original Message----- From: Michael Johnson Sent: Monday, October 11, 2021 2:00 PM To: Sean Mooney Cc: Zhang, Jing C. (Nokia - CA/Ottawa) ; openstack-discuss at lists.openstack.org Subject: Re: [Octavia] Can not create LB on SRIOV network Ah, so that is probably the issue. Nova doesn't support the interface attach for SRIOV in Train. We do currently require that the port be hot plugged after boot. I would still be interested in seeing the log messages, just to confirm that is the issue or if we have other work to do. The vnic_type=direct should not be an issue as the port is being passed into Octavia pre-created. I think it was already mentioned that the port was successful when used during boot via the --nic option. Thanks for the pointer Sean. Michael On Mon, Oct 11, 2021 at 10:22 AM Sean Mooney wrote: > > On Mon, Oct 11, 2021 at 6:12 PM Michael Johnson wrote: > > > > Interesting, thank you for trying that out. > > > > We call the nova "interface_attach" and pass in the port_id you > > provided on the load balancer create command line. > > > > In the worker log, above the "tree" log lines, is there another > > ERROR log line that includes the exception returned from nova? > until very recently nova did not support interface attach for sriov interfaces. > https://specs.openstack.org/openstack/nova-specs/specs/victoria/implem > ented/sriov-interface-attach-detach.html > today we do allow it but we do not guarentee it will work. > > if there are not enoch pci slots in the vm or there are not enough VF > on the host that are attached to the correct phsynet the attach will > fail. > the most comon reason the attach fails is either numa affintiy cannot > be acived or there is an issue in the guest/qemu the guest kernel need > to repond to the hotplug event when qemu tries to add the device if it > does not it will fail. 
> > keeping all of tha tin mind for sriov attach to work octavia will have > to create the port with vnic_type=driect or one of the other valid > options like macvtap or direct phsyical. > you cannot attach sriov device that can be used with octavia using > flavor extra specs. > > > > > Also, I would be interested to see what nova logged as to why it was > > unable to attach the port. That may be in the main nova logs, or > > possibly on the compute host nova logs. > > > > Michael > > > > On Thu, Oct 7, 2021 at 5:36 PM Zhang, Jing C. (Nokia - CA/Ottawa) > > wrote: > > > > > > Hi Michael, > > > > > > I made a mistake when creating VM manually, I should use --nic option not --network option. After correcting that, I can create VM with the extra-flavor: > > > > > > $ openstack server create --flavor octavia-flavor --image Centos7 > > > --nic port-id=test-port --security-group demo-secgroup --key-name > > > demo-key test-vm > > > > > > $ nova list --all --fields name,status,host,networks | grep > > > test-vm > > > | 8548400b-725a-405a-aeeb-ed1d208915e2 | test-vm | ACTIVE | overcloud-sriovperformancecompute-201-1.localdomain | ext-net1=10.5.201.149 > > > > > > A 2nd VF interface is seen inside the VM: > > > > > > [centos at test-vm ~]$ ip a > > > ... > > > 3: eth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 > > > link/ether 0a:b2:d4:85:a2:e6 brd ff:ff:ff:ff:ff:ff > > > > > > This MAC is not seen by neutron though: > > > > > > $ openstack port list | grep 0a:b2:d4:85:a2:e6 > > > > > > [empty] > > > > > > ===================== > > > However when I tried to create LB with the same VM flavor, it failed at the same place as before. > > > > > > Looking at worker.log, it seems the error is similar to use --network option to create the VM manually. But you are the expert. > > > > > > "Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52" > > > > > > Here is the full list of command line: > > > > > > $ openstack flavor list | grep octavia-flavor > > > | eb312b9a-d04d-4a88-9db2-7a88ce167cff | octavia-flavor | 4096 | 0 | 0 | 4 | True | > > > > > > openstack loadbalancer flavorprofile create --name ofp1 --provider amphora --flavor-data '{"compute_flavor": "eb312b9a-d04d-4a88-9db2-7a88ce167cff"}' > > > openstack loadbalancer flavor create --name of1 --flavorprofile > > > ofp1 --enable openstack loadbalancer create --name lb1 --flavor > > > of1 --vip-port-id test-port --vip-subnet-id ext-subnet1 > > > > > > > > > |__Flow 'octavia-create-loadbalancer-flow': PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. 
> > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/network_tasks.py", line 399, in execute > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker loadbalancer, loadbalancer.vip, amphora, subnet) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 391, in plug_aap_port > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker interface = self._plug_amphora_vip(amphora, subnet) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 123, in _plug_amphora_vip > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker raise base.PlugVIPException(message) > > > 2021-10-08 00:19:26.497 71 ERROR octavia.controller.worker.v1.controller_worker PlugVIPException: Error plugging amphora (compute_id: 64be3ced-90b1-4fb2-a13c-dd5e73ba0526) into vip network 7a1fa805-6d21-4da6-9573-1586c2faef52. > > > 2021-10-08 00:19:26.497 71 ERROR > > > octavia.controller.worker.v1.controller_worker > > > > > > > > > -----Original Message----- > > > From: Zhang, Jing C. (Nokia - CA/Ottawa) > > > Sent: Thursday, October 7, 2021 6:18 PM > > > To: Michael Johnson > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: RE: [Octavia] Can not create LB on SRIOV network > > > > > > Hi Michael, > > > > > > Thank you so much for the information. > > > > > > I tried the extra-flavor walk-around, I can not use it to create VM in Train release, I suspect this old extra-flavor is too old, but I did not dig further. > > > > > > However, both Train and latest nova spec still shows the above extra-flavor with the old whitelist format: > > > https://docs.openstack.org/nova/train/admin/pci-passthrough.html > > > https://docs.openstack.org/nova/latest/admin/pci-passthrough.html > > > > > > ========================= > > > Here is the detail: > > > > > > Env: NIC is intel 82599, creating VM with SRIOV direct port works well. 
> > > > > > Nova.conf > > > > > > passthrough_whitelist={"devname":"ens1f0","physical_network":"phys > > > net5"} > > > passthrough_whitelist={"devname":"ens1f1","physical_network":"phys > > > net6"} > > > > > > Sriov_agent.ini > > > > > > [sriov_nic] > > > physical_device_mappings=physnet5:ens1f0,physnet6:ens1f1 > > > > > > (1) Added the alias in nova.conf for nova-compute and nova-api, and restart the two nova components: > > > > > > alias = { "vendor_id":"8086", "product_id":"10ed", > > > "device_type":"type-VF", "name":"vf", "numa_policy": "required" } > > > > > > (2) Used the extra-spec in nova flavor > > > > > > openstack flavor set octavia-flavor --property "pci_passthrough:alias"="vf:1" > > > > > > (3) Failed to create VM with this flavor, sriov agent log does not > > > show port event, for sure also failed to create LB, > > > PortBindingFailed > > > > > > > > > (4) Tried multiple formats to add whitelist for PF and VF in > > > nova.conf for nova-compute, and retried, still failed > > > > > > passthrough_whitelist={"vendor_id":"8086","product_id":"10f8","dev > > > name":"ens1f0","physical_network":"physnet5"} #PF > > > passthrough_whitelist={"vendor_id":"8086","product_id":"10ed","phy > > > sical_network":"physnet5"} #VF > > > > > > The sriov agent log does not show port event for any of them. > > > > > > > > > > > > > > > -----Original Message----- > > > From: Michael Johnson > > > Sent: Wednesday, October 6, 2021 4:48 PM > > > To: Zhang, Jing C. (Nokia - CA/Ottawa) > > > Cc: openstack-discuss at lists.openstack.org > > > Subject: Re: [Octavia] Can not create LB on SRIOV network > > > > > > Hi Jing, > > > > > > To my knowledge no one has done the work to support SRIOV network ports in Octavia load balancers. This is an open roadmap item[1]. > > > > > > It will require some development effort as we hot-plug the tenant traffic ports, which means we need to give nova some hints when booting the instances that the amphora instance will be using SRIOV. > > > > > > You might be able to accomplish it on train using the flavors capability. You would create a special nova flavor with the required "extra_specs"[2] to schedule the instance on the proper SRIOV host with the SRIOV libvirt settings. Then you can create an Octavia flavor[3] that uses this special nova flavor. You could then create a load balancer by passing in the neutron SRIOV port as the VIP port. > > > This would not provide a solution for adding additional SRIOV ports to the load balancer for the member servers, but you can use the VIP port to access members. > > > > > > I have not tried this and would be interested to hear if it works for you. > > > > > > If you are interested in implementing SRIOV support for Octavia, please consider adding it to the PTG agenda[4] and joining us at the virtual PTG. > > > > > > Michael > > > > > > [1] https://wiki.openstack.org/wiki/Octavia/Roadmap > > > [2] > > > https://docs.openstack.org/nova/xena/configuration/extra-specs.htm > > > l [3] https://docs.openstack.org/octavia/latest/admin/flavors.html > > > [4] https://etherpad.opendev.org/p/yoga-ptg-octavia > > > > > > On Wed, Oct 6, 2021 at 10:24 AM Zhang, Jing C. (Nokia - CA/Ottawa) wrote: > > > > > > > > I can not create Octavia LB on SRIOV network in Train. I went to Octavia story board, did a search but was unable to figure out (the story for SRIOV?). > > > > > > > > > > > > > > > > I left a comment under this story, I re-post my questions there, hoping someone knows the answer. 
> > > > > > > > > > > > > > > > Thank you so much > > > > > > > > > > > > > > > > Jing > > > > > > > > > > > > > > > > https://storyboard.openstack.org/#!/story/2006886 Add VM SRIOV > > > > Interface Config Guide (Openstack) > > > > > > > > > > > > > > > > Hi, > > > > In Openstack train release, creating Octavia LB on SRIOV network fails. > > > > I come here to search if there is already a plan to add this support, and see this story. > > > > This story gives the impression that the capability is already supported, it is a matter of adding user guide. > > > > So, my question is, in which Openstack release, creating LB on SRIOV network is supported? > > > > Thank you > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From johnsomor at gmail.com Tue Oct 12 15:32:43 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 12 Oct 2021 08:32:43 -0700 Subject: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail In-Reply-To: References: <7b85e6646792469aaa7e513ecfda8551@verisign.com> Message-ID: I don't have a good answer for you on that as it pre-dates my history with Designate a bit. I suspect it has to do with the removal of the pool-manager and the restructuring of the controller code. Maybe someone else on the discuss list has more insight. Michael On Tue, Oct 12, 2021 at 5:47 AM Braden, Albert wrote: > > Thank you Michael, this is very helpful. Do you have any insight into why we don't experience this in Queens clusters? We aren't running a lock manager there either, and I haven't been able to duplicate the problem there. > > -----Original Message----- > From: Michael Johnson > Sent: Monday, October 11, 2021 4:24 PM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. > > You will need one of the Tooz supported distributed lock managers: > Consul, Memcacded, Redis, or zookeeper. > > Michael > > On Mon, Oct 11, 2021 at 11:57 AM Braden, Albert wrote: > > > > After investigating further, I realized that we're not running redis, and I think that means that redis_connection_string doesn't get set. Does this mean that we must run redis, or is there a workaround? > > > > -----Original Message----- > > From: Braden, Albert > > Sent: Monday, October 11, 2021 2:48 PM > > To: 'johnsomor at gmail.com' > > Cc: 'openstack-discuss at lists.openstack.org' > > Subject: RE: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > > > I think so. I see this: > > > > ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }} > > > > ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes" > > > > Did anything with the distributed lock manager between Queens and Train? 
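For what it's worth, in a kolla-ansible deployment the designate.conf.j2 template quoted above only renders a usable backend_url when hosts are present in the redis group, which normally means enabling the Redis service in globals.yml; a sketch, with the caveat that the exact wiring depends on the kolla-ansible release in use:

```
enable_redis: "yes"
```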
> > > > -----Original Message----- > > From: Michael Johnson > > Sent: Monday, October 11, 2021 1:15 PM > > To: Braden, Albert > > Cc: openstack-discuss at lists.openstack.org > > Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > > > Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. > > > > Hi Albert, > > > > Have you configured your distributed lock manager for Designate? > > > > [coordination] > > backend_url = > > > > Michael > > > > On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > > > > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > > > > > > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > > > > > > > > > Before applying the change, we see the DNS record in the recordset: > > > > > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > > > > > $ > > > > > > > > > > > > and we can pull it from the DNS server on the controllers: > > > > > > > > > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > > > > > > > > > After applying the change, we don?t see it: > > > > > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > > > > > $ > > > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > > > $ > > > > > > > > > > > > We see this in the logs: > > > > > > > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' 
for key 'unique_recordset'") > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > > > > > > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > > > > > > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From tonyliu0592 at hotmail.com Tue Oct 12 16:04:55 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Tue, 12 Oct 2021 16:04:55 +0000 Subject: [neutron] OVN and dynamic routing In-Reply-To: References: Message-ID: I wonder, since there is already VXLAN based L2 across multiple racks (which means you are not looking for pure L3 solution), while keep tenant network multi-subnet on L3 for EW traffic, why not have external network also on L2 and stretched on multiple racks for NS traffic, assuming you are using distributed FIP? Thanks! Tony ________________________________________ From: Francois Sent: October 12, 2021 07:03 AM To: openstack-discuss Subject: [neutron] OVN and dynamic routing Hello Neutron! I am looking into running stacks with OVN on a leaf-spine network, and have some floating IPs routed between racks. Basically each rack is assigned its own set of subnets. Some VLANs are stretched across all racks: the provisioning VLAN used by tripleo to deploy the stack, and the VLANs for the controllers API IPs. However, each tenant subnet is local to a rack: for example each OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its own rack. Traffic between 2 racks is sent to a spine, and leaves and spines run some eVPN-like thing: each pair of ToR is a vtep, traffic is encapsulated as VXLAN, and routes between vteps are exchanged with BGP. I am looking into supporting floating IPs in there: I expect floating IPs to be able to move between racks, as such I am looking into publishing the route for a FIP towards an hypervisor, through BGP. Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. It seems there are several ideas to achieve this (it was discussed [before][1] in ovs conference) - using [neutron-dynamic-routing][2] - that seems to have some gaps for OVN. It uses os-ken to talk to switches and exchange routes - using [OVN BGP agent][3] that uses FRR, it seems there is a related [RFE][4] for integration in tripleo There is btw also a [BGPVPN][5] project (it does not match my usecase as far as I tried to understand it) that also has some code that talks BGP to switches, already integrated in tripleo. 
For my tests, I was able to use the neutron-dynamic-routing project (almost) as documented, with a few changes: - for traffic going from VMs to outside the stack, the hypervisor was trying to resolve the "gateway of fIPs" with ARP request which does not make any sense. I created a dummy port with the mac address of the virtual router of the switches: ``` $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml - Fixed IP Addresses: - ip_address: 10.64.254.1 subnet_id: 8f37 ID: 4028 MAC Address: 00:1c:73:00:00:11 Name: lagw Status: DOWN ``` this prevent the hypervisor to send ARP requests to a non existent gateway - for traffic coming back, we start the neutron-bgp-dragent agent on the controllers. We create the right bgp speaker, peers, etc. - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it selects fips and join with ports owned by a "floatingip_agent_gateway" which does not exist on OVN. We can define ourselves some ports so that the dragent is able to find the tenant IP of a host: ``` openstack port create --network provider --device-owner network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip ip-address=10.64.245.102 ag2 ``` - when creating a floating IP and assigning a port to it, Neutron reads changes from OVN SB and fills the binding information into the port: ``` $ openstack port show -c binding_host_id `openstack floating ip show 10.64.254.177 -f value -c port_id` +-----------------+----------------------------------------+ | Field | Value | +-----------------+----------------------------------------+ | binding_host_id | cpu35d.cloud | +-----------------+----------------------------------------+ ``` this allows the dragent to publish the route for the fip ``` $ openstack bgp speaker list advertised routes bgpspeaker +------------------+---------------+ | Destination | Nexthop | +------------------+---------------+ | 10.64.254.177/32 | 10.64.245.102 | +------------------+---------------+ ``` - traffic reaches the hypervisor but (for reason I don't understand) I had to add a rule ``` $ ip rule 0: from all lookup local 32765: from all iif vlan1234 lookup ovn 32766: from all lookup main 32767: from all lookup default $ ip route show table ovn 10.64.254.177 dev vlan1234 scope link ``` so that the traffic coming for the fip is not immediately discarded by the hypervisor (it's not an ideal solution but it is a workaround that makes my one fIP work!) So all in all it seems it would be possible to use the neutron-dynamic-routing agent, with some minor modifications (eg: to also publish the fip of the OVN L3 gateway router). I am wondering whether I have overlooked anything, and if such kind of deployment (OVN + neutron dynamic routing or similar) is already in use somewhere. Does it make sense to have a RFE for better integration between OVN and neutron-dynamic-routing? Thanks Francois [1]: https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf [2]: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html [3]: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ [4]: https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html [5]: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml From openstack at nemebean.com Tue Oct 12 16:18:11 2021 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 12 Oct 2021 11:18:11 -0500 Subject: [KEYSTONE][POLICIES] - Overrides that don't work? 
In-Reply-To: References: Message-ID: Probably. I'm not an expert on writing Keystone policies so I can't promise anything. :-) However, I'm fairly confident that if you get a properly scoped token it will get you past your current error. Anything beyond that would be a barely educated guess on my part. On 10/11/21 12:18 PM, Gaël THEROND wrote: > Hi Ben! Thanks a lot for the answer! > > Ok, I'll take a look at that, but if I understand correctly, a user with the > project-admin role assigned to him with a domain scope should be > able to add users to a group once the policy is updated, right? > > Once again thanks a lot for your answer! > > On Mon, 11 Oct 2021 at 17:25, Ben Nemec wrote: > > I don't believe it's possible to override the scope of a policy > rule. In > this case it sounds like the user should request a domain-scoped token > to perform this operation. For details on how to do that, see > https://docs.openstack.org/keystone/wallaby/admin/tokens-overview.html#authorization-scopes > > > On 10/6/21 7:52 AM, Gaël THEROND wrote: > > Hi team, > > > > I'm having a weird behavior with my Openstack platform that makes me > > think I may have misunderstood some mechanisms on the way > policies are > > working and especially the overriding.
> > > > > From ignaziocassano at gmail.com Tue Oct 12 16:38:24 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 12 Oct 2021 18:38:24 +0200 Subject: [kolla-ansible][neutron] configuration question Message-ID: Hello Everyone, I need to know if it possible configure kolla neutron ovs with more than one bridge mappings, for example: bridge_mappings = physnet1:br-ex,physnet2:br-ex2 I figure out that in standard configuration ansible playbook create br-ex and add the interface with variable "neutron_external_interface" under br-ex. What can I do if I need to do if I wand more than one bridge ? How kolla ansible playbook can help in this case ? I could use multiple bridges in /etc/kolla/config neutron configuration files, but I do not know how ansible playbook can do the job. because I do not see any variable can help me in /etc/kolla/globals.yml Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Oct 12 17:40:51 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 12 Oct 2021 19:40:51 +0200 Subject: [kolla-ansible][neutron] configuration question In-Reply-To: References: Message-ID: Reading at this bug: https://bugs.launchpad.net/kolla-ansible/+bug/1626259 It seems only for documentation, so it must work. Right? Ignazio Il giorno mar 12 ott 2021 alle ore 18:38 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello Everyone, > I need to know if it possible configure kolla neutron ovs with more than > one bridge mappings, for example: > > bridge_mappings = physnet1:br-ex,physnet2:br-ex2 > > I figure out that in standard configuration ansible playbook create br-ex and add > > the interface with variable "neutron_external_interface" under br-ex. > > What can I do if I need to do if I wand more than one bridge ? > > How kolla ansible playbook can help in this case ? > > I could use multiple bridges in /etc/kolla/config neutron configuration files, but I do not know how ansible playbook can do the job. > > because I do not see any variable can help me in /etc/kolla/globals.yml > Thanks > > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rigault.francois at gmail.com Tue Oct 12 19:26:56 2021 From: rigault.francois at gmail.com (Francois) Date: Tue, 12 Oct 2021 21:26:56 +0200 Subject: [neutron] OVN and dynamic routing In-Reply-To: References: Message-ID: (yes we are using distributed fips) Well we don't want stretched VLANs. However... if we followed the doc we would end up with 3 controllers in the same rack which would not be resilient. Since we have just 3 controllers plugged on specific, identified ports, we afford to have a stretched VLAN on these few ports only. For the provisioning network, I am taking a shortcut since this network should basically only be needed once in a while for stack upgrades and nothing interesting (like mac addresses moving) happens there. The data plane traffic, that needs scalability and resiliency, is not going through these VLANs. I think stretched VLANs on leaf spine networks are forbidden in general for these reasons (large L2 networks? STP reducing the bandwidth? broadcast storm? larger failure domain? I don't know specifically, I would need help from a network engineer to explain the reason). 
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/spine_leaf_networking/index On Tue, 12 Oct 2021 at 18:04, Tony Liu wrote: > > I wonder, since there is already VXLAN based L2 across multiple racks > (which means you are not looking for pure L3 solution), > while keep tenant network multi-subnet on L3 for EW traffic, > why not have external network also on L2 and stretched on multiple racks > for NS traffic, assuming you are using distributed FIP? > > Thanks! > Tony > ________________________________________ > From: Francois > Sent: October 12, 2021 07:03 AM > To: openstack-discuss > Subject: [neutron] OVN and dynamic routing > > Hello Neutron! > I am looking into running stacks with OVN on a leaf-spine network, and > have some floating IPs routed between racks. > > Basically each rack is assigned its own set of subnets. > Some VLANs are stretched across all racks: the provisioning VLAN used > by tripleo to deploy the stack, and the VLANs for the controllers API > IPs. However, each tenant subnet is local to a rack: for example each > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > own rack. Traffic between 2 racks is sent to a spine, and leaves and > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > is encapsulated as VXLAN, and routes between vteps are exchanged with > BGP. > > I am looking into supporting floating IPs in there: I expect floating > IPs to be able to move between racks, as such I am looking into > publishing the route for a FIP towards an hypervisor, through BGP. > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. > > It seems there are several ideas to achieve this (it was discussed > [before][1] in ovs conference) > - using [neutron-dynamic-routing][2] - that seems to have some gaps > for OVN. It uses os-ken to talk to switches and exchange routes > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > [RFE][4] for integration in tripleo > > There is btw also a [BGPVPN][5] project (it does not match my usecase > as far as I tried to understand it) that also has some code that talks > BGP to switches, already integrated in tripleo. > > For my tests, I was able to use the neutron-dynamic-routing project > (almost) as documented, with a few changes: > - for traffic going from VMs to outside the stack, the hypervisor was > trying to resolve the "gateway of fIPs" with ARP request which does > not make any sense. I created a dummy port with the mac address of the > virtual router of the switches: > ``` > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > - Fixed IP Addresses: > - ip_address: 10.64.254.1 > subnet_id: 8f37 > ID: 4028 > MAC Address: 00:1c:73:00:00:11 > Name: lagw > Status: DOWN > ``` > this prevent the hypervisor to send ARP requests to a non existent gateway > - for traffic coming back, we start the neutron-bgp-dragent agent on > the controllers. We create the right bgp speaker, peers, etc. > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > selects fips and join with ports owned by a "floatingip_agent_gateway" > which does not exist on OVN. 
We can define ourselves some ports so > that the dragent is able to find the tenant IP of a host: > ``` > openstack port create --network provider --device-owner > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > ip-address=10.64.245.102 ag2 > ``` > - when creating a floating IP and assigning a port to it, Neutron > reads changes from OVN SB and fills the binding information into the > port: > ``` > $ openstack port show -c binding_host_id `openstack floating ip show > 10.64.254.177 -f value -c port_id` > +-----------------+----------------------------------------+ > | Field | Value | > +-----------------+----------------------------------------+ > | binding_host_id | cpu35d.cloud | > +-----------------+----------------------------------------+ > ``` > this allows the dragent to publish the route for the fip > ``` > $ openstack bgp speaker list advertised routes bgpspeaker > +------------------+---------------+ > | Destination | Nexthop | > +------------------+---------------+ > | 10.64.254.177/32 | 10.64.245.102 | > +------------------+---------------+ > ``` > - traffic reaches the hypervisor but (for reason I don't understand) I > had to add a rule > ``` > $ ip rule > 0: from all lookup local > 32765: from all iif vlan1234 lookup ovn > 32766: from all lookup main > 32767: from all lookup default > $ ip route show table ovn > 10.64.254.177 dev vlan1234 scope link > ``` > so that the traffic coming for the fip is not immediately discarded by > the hypervisor (it's not an ideal solution but it is a workaround that > makes my one fIP work!) > > So all in all it seems it would be possible to use the > neutron-dynamic-routing agent, with some minor modifications (eg: to > also publish the fip of the OVN L3 gateway router). > > I am wondering whether I have overlooked anything, and if such kind of > deployment (OVN + neutron dynamic routing or similar) is already in > use somewhere. Does it make sense to have a RFE for better integration > between OVN and neutron-dynamic-routing? > > Thanks > Francois > > > > > [1]: https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > [2]: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html > [3]: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ > [4]: https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html > [5]: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml > From piotrmisiak1984 at gmail.com Tue Oct 12 19:40:26 2021 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Tue, 12 Oct 2021 21:40:26 +0200 Subject: [neutron] OVN and dynamic routing In-Reply-To: References: Message-ID: You dont need a stretched provisioning network when you setup a DHCP relay :) IMO the L2 external network in Neutron is a major issue in OpenStack scaling. I?d love to see a BGP support in OVN and OVN neutron plugin. On Tue, 12 Oct 2021 at 21:28 Francois wrote: > (yes we are using distributed fips) Well we don't want stretched > VLANs. However... if we followed the doc we would end up with 3 > controllers in the same rack which would not be resilient. Since we > have just 3 controllers plugged on specific, identified ports, we > afford to have a stretched VLAN on these few ports only. 
For the > provisioning network, I am taking a shortcut since this network should > basically only be needed once in a while for stack upgrades and > nothing interesting (like mac addresses moving) happens there. The > data plane traffic, that needs scalability and resiliency, is not > going through these VLANs. I think stretched VLANs on leaf spine > networks are forbidden in general for these reasons (large L2 > networks? STP reducing the bandwidth? broadcast storm? larger failure > domain? I don't know specifically, I would need help from a network > engineer to explain the reason). > > > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/spine_leaf_networking/index > > > On Tue, 12 Oct 2021 at 18:04, Tony Liu wrote: > > > > I wonder, since there is already VXLAN based L2 across multiple racks > > (which means you are not looking for pure L3 solution), > > while keep tenant network multi-subnet on L3 for EW traffic, > > why not have external network also on L2 and stretched on multiple racks > > for NS traffic, assuming you are using distributed FIP? > > > > Thanks! > > Tony > > ________________________________________ > > From: Francois > > Sent: October 12, 2021 07:03 AM > > To: openstack-discuss > > Subject: [neutron] OVN and dynamic routing > > > > Hello Neutron! > > I am looking into running stacks with OVN on a leaf-spine network, and > > have some floating IPs routed between racks. > > > > Basically each rack is assigned its own set of subnets. > > Some VLANs are stretched across all racks: the provisioning VLAN used > > by tripleo to deploy the stack, and the VLANs for the controllers API > > IPs. However, each tenant subnet is local to a rack: for example each > > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > > own rack. Traffic between 2 racks is sent to a spine, and leaves and > > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > > is encapsulated as VXLAN, and routes between vteps are exchanged with > > BGP. > > > > I am looking into supporting floating IPs in there: I expect floating > > IPs to be able to move between racks, as such I am looking into > > publishing the route for a FIP towards an hypervisor, through BGP. > > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. > > > > It seems there are several ideas to achieve this (it was discussed > > [before][1] in ovs conference) > > - using [neutron-dynamic-routing][2] - that seems to have some gaps > > for OVN. It uses os-ken to talk to switches and exchange routes > > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > > [RFE][4] for integration in tripleo > > > > There is btw also a [BGPVPN][5] project (it does not match my usecase > > as far as I tried to understand it) that also has some code that talks > > BGP to switches, already integrated in tripleo. > > > > For my tests, I was able to use the neutron-dynamic-routing project > > (almost) as documented, with a few changes: > > - for traffic going from VMs to outside the stack, the hypervisor was > > trying to resolve the "gateway of fIPs" with ARP request which does > > not make any sense. 
I created a dummy port with the mac address of the > > virtual router of the switches: > > ``` > > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > > - Fixed IP Addresses: > > - ip_address: 10.64.254.1 > > subnet_id: 8f37 > > ID: 4028 > > MAC Address: 00:1c:73:00:00:11 > > Name: lagw > > Status: DOWN > > ``` > > this prevent the hypervisor to send ARP requests to a non existent > gateway > > - for traffic coming back, we start the neutron-bgp-dragent agent on > > the controllers. We create the right bgp speaker, peers, etc. > > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > > selects fips and join with ports owned by a "floatingip_agent_gateway" > > which does not exist on OVN. We can define ourselves some ports so > > that the dragent is able to find the tenant IP of a host: > > ``` > > openstack port create --network provider --device-owner > > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > > ip-address=10.64.245.102 ag2 > > ``` > > - when creating a floating IP and assigning a port to it, Neutron > > reads changes from OVN SB and fills the binding information into the > > port: > > ``` > > $ openstack port show -c binding_host_id `openstack floating ip show > > 10.64.254.177 -f value -c port_id` > > +-----------------+----------------------------------------+ > > | Field | Value | > > +-----------------+----------------------------------------+ > > | binding_host_id | cpu35d.cloud | > > +-----------------+----------------------------------------+ > > ``` > > this allows the dragent to publish the route for the fip > > ``` > > $ openstack bgp speaker list advertised routes bgpspeaker > > +------------------+---------------+ > > | Destination | Nexthop | > > +------------------+---------------+ > > | 10.64.254.177/32 | 10.64.245.102 | > > +------------------+---------------+ > > ``` > > - traffic reaches the hypervisor but (for reason I don't understand) I > > had to add a rule > > ``` > > $ ip rule > > 0: from all lookup local > > 32765: from all iif vlan1234 lookup ovn > > 32766: from all lookup main > > 32767: from all lookup default > > $ ip route show table ovn > > 10.64.254.177 dev vlan1234 scope link > > ``` > > so that the traffic coming for the fip is not immediately discarded by > > the hypervisor (it's not an ideal solution but it is a workaround that > > makes my one fIP work!) > > > > So all in all it seems it would be possible to use the > > neutron-dynamic-routing agent, with some minor modifications (eg: to > > also publish the fip of the OVN L3 gateway router). > > > > I am wondering whether I have overlooked anything, and if such kind of > > deployment (OVN + neutron dynamic routing or similar) is already in > > use somewhere. Does it make sense to have a RFE for better integration > > between OVN and neutron-dynamic-routing? > > > > Thanks > > Francois > > > > > > > > > > [1]: > https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > > [2]: > https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html > > [3]: > https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ > > [4]: > https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html > > [5]: > https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsneddon at redhat.com Tue Oct 12 19:46:50 2021 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 12 Oct 2021 12:46:50 -0700 Subject: [neutron] OVN and dynamic routing In-Reply-To: References: Message-ID: <16ee3932-e4d5-076b-4f56-89d35bf4bd8a@redhat.com> On 10/12/21 07:03, Francois wrote: > Hello Neutron! > I am looking into running stacks with OVN on a leaf-spine network, and > have some floating IPs routed between racks. > > Basically each rack is assigned its own set of subnets. > Some VLANs are stretched across all racks: the provisioning VLAN used > by tripleo to deploy the stack, and the VLANs for the controllers API > IPs. However, each tenant subnet is local to a rack: for example each > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > own rack. Traffic between 2 racks is sent to a spine, and leaves and > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > is encapsulated as VXLAN, and routes between vteps are exchanged with > BGP. > There has been a lot of work put into TripleO to allow you to provision hosts across L3 boundaries using DHCP relay. You can create a routed provisioning network using "helper-address" or vendor-specific commands on your top-of-rack switches, and a different subnet and DHCP address pool per rack. > I am looking into supporting floating IPs in there: I expect floating > IPs to be able to move between racks, as such I am looking into > publishing the route for a FIP towards an hypervisor, through BGP. > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. This is becoming a very common architecture, and that is why there are several projects working to achieve the same goal with slightly different implementations. > > It seems there are several ideas to achieve this (it was discussed > [before][1] in ovs conference) > - using [neutron-dynamic-routing][2] - that seems to have some gaps > for OVN. It uses os-ken to talk to switches and exchange routes > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > [RFE][4] for integration in tripleo > > There is btw also a [BGPVPN][5] project (it does not match my usecase > as far as I tried to understand it) that also has some code that talks > BGP to switches, already integrated in tripleo. > > For my tests, I was able to use the neutron-dynamic-routing project > (almost) as documented, with a few changes: > - for traffic going from VMs to outside the stack, the hypervisor was > trying to resolve the "gateway of fIPs" with ARP request which does > not make any sense. I created a dummy port with the mac address of the > virtual router of the switches: > ``` > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > - Fixed IP Addresses: > - ip_address: 10.64.254.1 > subnet_id: 8f37 > ID: 4028 > MAC Address: 00:1c:73:00:00:11 > Name: lagw > Status: DOWN > ``` > this prevent the hypervisor to send ARP requests to a non existent gateway > - for traffic coming back, we start the neutron-bgp-dragent agent on > the controllers. We create the right bgp speaker, peers, etc. > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > selects fips and join with ports owned by a "floatingip_agent_gateway" > which does not exist on OVN. 
We can define ourselves some ports so > that the dragent is able to find the tenant IP of a host: > ``` > openstack port create --network provider --device-owner > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > ip-address=10.64.245.102 ag2 > ``` > - when creating a floating IP and assigning a port to it, Neutron > reads changes from OVN SB and fills the binding information into the > port: > ``` > $ openstack port show -c binding_host_id `openstack floating ip show > 10.64.254.177 -f value -c port_id` > +-----------------+----------------------------------------+ > | Field | Value | > +-----------------+----------------------------------------+ > | binding_host_id | cpu35d.cloud | > +-----------------+----------------------------------------+ > ``` > this allows the dragent to publish the route for the fip > ``` > $ openstack bgp speaker list advertised routes bgpspeaker > +------------------+---------------+ > | Destination | Nexthop | > +------------------+---------------+ > | 10.64.254.177/32 | 10.64.245.102 | > +------------------+---------------+ > ``` > - traffic reaches the hypervisor but (for reason I don't understand) I > had to add a rule > ``` > $ ip rule > 0: from all lookup local > 32765: from all iif vlan1234 lookup ovn > 32766: from all lookup main > 32767: from all lookup default > $ ip route show table ovn > 10.64.254.177 dev vlan1234 scope link > ``` > so that the traffic coming for the fip is not immediately discarded by > the hypervisor (it's not an ideal solution but it is a workaround that > makes my one fIP work!) > > So all in all it seems it would be possible to use the > neutron-dynamic-routing agent, with some minor modifications (eg: to > also publish the fip of the OVN L3 gateway router). > > I am wondering whether I have overlooked anything, and if such kind of > deployment (OVN + neutron dynamic routing or similar) is already in > use somewhere. Does it make sense to have a RFE for better integration > between OVN and neutron-dynamic-routing? I have been helping to contribute to integrating FRR with OVN in order to advertise FIPs and provider network IPs into BGP. The OVN BGP Agent is very new, and I'm pretty sure that nobody is using it in production yet. However the initial implementation is fairly simple and hopefully it will mature quickly. As you discovered, the solution that uses neutron-bgp-dragent and os-ken is not compatible with OVN, that is why ovs-bgp-agent is being developed. You should be able to try the ovs-bgp-agent with FRR and properly configured routing switches, it functions for the basic use case. The OVN BGP Agent will ensure that FIP and provider network IPs are present in the kernel as a /32 or /128 host route, which is then advertised into the BGP fabric using the FRR BGP daemon. If the default route is received from BGP it will be installed into the kernel by the FRR zebra daemon which syncs kernel routes with the FRR BGP routing table. The OVN BGP agent installs flows for the Neutron network gateways that hand off traffic to the kernel for routing. Since the kernel routing table is used, the agent isn't compatible with DPDK fast datapath yet. We don't have good documentation for the OVN BGP integration yet. I've only recently been able to make it my primary priority, and some of the other engineers which have done the initial proof of concept are moving on to other projects. 
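To give a concrete picture of the FRR side, the running configuration it
relies on boils down to something like the sketch below. Treat it purely as
an illustration of the "expose host routes locally and let bgpd redistribute
them" idea, not as the template the agent actually renders -- the ASN, the
unnumbered peering on eth1/eth2 and the use of redistribute connected are
assumptions I made up for the example:

```
router bgp 64999
 bgp router-id 172.30.1.1
 neighbor uplink peer-group
 neighbor uplink remote-as external
 ! BGP unnumbered sessions towards the two ToR switches
 neighbor eth1 interface peer-group uplink
 neighbor eth2 interface peer-group uplink
 address-family ipv4 unicast
  ! picks up the /32 FIP and provider addresses the agent exposes locally
  redistribute connected
 exit-address-family
```

With something like that in place, as soon as the agent exposes a FIP locally
the /32 shows up on the ToRs, and when the FIP moves to another hypervisor the
withdrawal and the new advertisement propagate the same way.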
There will be some discussions at the upcoming OpenStack PTG about this work, but I am hopeful that the missing pieces for your use case will come about in the Yoga cycle. > > Thanks > Francois > > > > > [1]: https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > [2]: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html > [3]: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ > [4]: https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html > [5]: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml > -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter From tonyliu0592 at hotmail.com Tue Oct 12 20:03:47 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Tue, 12 Oct 2021 20:03:47 +0000 Subject: [neutron] OVN and dynamic routing In-Reply-To: References: Message-ID: Not sure if this helps. http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019681.html https://docs.openstack.org/neutron/latest/admin/config-bgp-floating-ip-over-l2-segmented-network.html Thanks! Tony ________________________________________ From: Francois Sent: October 12, 2021 12:26 PM To: Tony Liu Cc: openstack-discuss Subject: Re: [neutron] OVN and dynamic routing (yes we are using distributed fips) Well we don't want stretched VLANs. However... if we followed the doc we would end up with 3 controllers in the same rack which would not be resilient. Since we have just 3 controllers plugged on specific, identified ports, we afford to have a stretched VLAN on these few ports only. For the provisioning network, I am taking a shortcut since this network should basically only be needed once in a while for stack upgrades and nothing interesting (like mac addresses moving) happens there. The data plane traffic, that needs scalability and resiliency, is not going through these VLANs. I think stretched VLANs on leaf spine networks are forbidden in general for these reasons (large L2 networks? STP reducing the bandwidth? broadcast storm? larger failure domain? I don't know specifically, I would need help from a network engineer to explain the reason). https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/spine_leaf_networking/index On Tue, 12 Oct 2021 at 18:04, Tony Liu wrote: > > I wonder, since there is already VXLAN based L2 across multiple racks > (which means you are not looking for pure L3 solution), > while keep tenant network multi-subnet on L3 for EW traffic, > why not have external network also on L2 and stretched on multiple racks > for NS traffic, assuming you are using distributed FIP? > > Thanks! > Tony > ________________________________________ > From: Francois > Sent: October 12, 2021 07:03 AM > To: openstack-discuss > Subject: [neutron] OVN and dynamic routing > > Hello Neutron! > I am looking into running stacks with OVN on a leaf-spine network, and > have some floating IPs routed between racks. > > Basically each rack is assigned its own set of subnets. > Some VLANs are stretched across all racks: the provisioning VLAN used > by tripleo to deploy the stack, and the VLANs for the controllers API > IPs. However, each tenant subnet is local to a rack: for example each > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > own rack. 
Traffic between 2 racks is sent to a spine, and leaves and > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > is encapsulated as VXLAN, and routes between vteps are exchanged with > BGP. > > I am looking into supporting floating IPs in there: I expect floating > IPs to be able to move between racks, as such I am looking into > publishing the route for a FIP towards an hypervisor, through BGP. > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. > > It seems there are several ideas to achieve this (it was discussed > [before][1] in ovs conference) > - using [neutron-dynamic-routing][2] - that seems to have some gaps > for OVN. It uses os-ken to talk to switches and exchange routes > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > [RFE][4] for integration in tripleo > > There is btw also a [BGPVPN][5] project (it does not match my usecase > as far as I tried to understand it) that also has some code that talks > BGP to switches, already integrated in tripleo. > > For my tests, I was able to use the neutron-dynamic-routing project > (almost) as documented, with a few changes: > - for traffic going from VMs to outside the stack, the hypervisor was > trying to resolve the "gateway of fIPs" with ARP request which does > not make any sense. I created a dummy port with the mac address of the > virtual router of the switches: > ``` > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > - Fixed IP Addresses: > - ip_address: 10.64.254.1 > subnet_id: 8f37 > ID: 4028 > MAC Address: 00:1c:73:00:00:11 > Name: lagw > Status: DOWN > ``` > this prevent the hypervisor to send ARP requests to a non existent gateway > - for traffic coming back, we start the neutron-bgp-dragent agent on > the controllers. We create the right bgp speaker, peers, etc. > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > selects fips and join with ports owned by a "floatingip_agent_gateway" > which does not exist on OVN. 
We can define ourselves some ports so > that the dragent is able to find the tenant IP of a host: > ``` > openstack port create --network provider --device-owner > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > ip-address=10.64.245.102 ag2 > ``` > - when creating a floating IP and assigning a port to it, Neutron > reads changes from OVN SB and fills the binding information into the > port: > ``` > $ openstack port show -c binding_host_id `openstack floating ip show > 10.64.254.177 -f value -c port_id` > +-----------------+----------------------------------------+ > | Field | Value | > +-----------------+----------------------------------------+ > | binding_host_id | cpu35d.cloud | > +-----------------+----------------------------------------+ > ``` > this allows the dragent to publish the route for the fip > ``` > $ openstack bgp speaker list advertised routes bgpspeaker > +------------------+---------------+ > | Destination | Nexthop | > +------------------+---------------+ > | 10.64.254.177/32 | 10.64.245.102 | > +------------------+---------------+ > ``` > - traffic reaches the hypervisor but (for reason I don't understand) I > had to add a rule > ``` > $ ip rule > 0: from all lookup local > 32765: from all iif vlan1234 lookup ovn > 32766: from all lookup main > 32767: from all lookup default > $ ip route show table ovn > 10.64.254.177 dev vlan1234 scope link > ``` > so that the traffic coming for the fip is not immediately discarded by > the hypervisor (it's not an ideal solution but it is a workaround that > makes my one fIP work!) > > So all in all it seems it would be possible to use the > neutron-dynamic-routing agent, with some minor modifications (eg: to > also publish the fip of the OVN L3 gateway router). > > I am wondering whether I have overlooked anything, and if such kind of > deployment (OVN + neutron dynamic routing or similar) is already in > use somewhere. Does it make sense to have a RFE for better integration > between OVN and neutron-dynamic-routing? > > Thanks > Francois > > > > > [1]: https://www.openvswitch.org/support/ovscon2020/slides/OVS-CONF-2020-OVN-WITH-DYNAMIC-ROUTING.pdf > [2]: https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html > [3]: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ > [4]: https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html > [5]: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-bgpvpn.yaml > From gmann at ghanshyammann.com Tue Oct 12 23:07:33 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Oct 2021 18:07:33 -0500 Subject: [all] RBAC related discussion in Yoga PTG Message-ID: <17c76c2d8d8.b4647d6b920595.8260462059922238034@ghanshyammann.com> Hello Everyone, As you might know, we are not so far from the Yoga PTG. I have created the below etherpad to collect the RBAC related discussion happening in various project sessions. - https://etherpad.opendev.org/p/policy-popup-yoga-ptg We have not schedule any separate sessions for this instead thought of attending the related discussion in project PTG itself. Please do the below two steps before PTG: 1. Add the common topics (for QA, Horizon etc) you would like to discuss/know. 2. Add any related rbac sessions you have planned in your project PTG. - I have added a few of them but few need the exact schedule/time so that we can plan to attend it. Please check and add the time for your project sessions. 
-gmann From franck.vedel at univ-grenoble-alpes.fr Wed Oct 13 07:57:52 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Wed, 13 Oct 2021 09:57:52 +0200 Subject: =?utf-8?Q?Probl=C3=A8me_with_image_from_snapshot?= Message-ID: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> Hello and first sorry for my english? thanks google. Something is wrong with what I want to do: I use Wallaby, it works very well (apart from VpnaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). Here is what I want to do and which does not work as I want: - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. I wanted to take a snapshot of this instance and then create a new image from this snapshot. And that users use this new image. I create the snapshot, I place the "--public" parameter on the new image. I'm trying to create a new instance from this snapshot with the admin account: it works. I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? Thanks if you have ideas for helping me Franck VEDEL -------------- next part -------------- An HTML attachment was scrubbed... URL: From rigault.francois at gmail.com Wed Oct 13 08:03:51 2021 From: rigault.francois at gmail.com (Francois) Date: Wed, 13 Oct 2021 10:03:51 +0200 Subject: [neutron] OVN and dynamic routing In-Reply-To: References: <16ee3932-e4d5-076b-4f56-89d35bf4bd8a@redhat.com> Message-ID: ...forgot to add the mailing list in the reply On Wed, 13 Oct 2021 at 10:01, Francois wrote: > > On Tue, 12 Oct 2021 at 21:46, Dan Sneddon wrote: > > > > On 10/12/21 07:03, Francois wrote: > > > Hello Neutron! > > > I am looking into running stacks with OVN on a leaf-spine network, and > > > have some floating IPs routed between racks. > > > > > > Basically each rack is assigned its own set of subnets. > > > Some VLANs are stretched across all racks: the provisioning VLAN used > > > by tripleo to deploy the stack, and the VLANs for the controllers API > > > IPs. However, each tenant subnet is local to a rack: for example each > > > OVN chassis has a ovn-encap-ip with an IP of the tenant subnet of its > > > own rack. Traffic between 2 racks is sent to a spine, and leaves and > > > spines run some eVPN-like thing: each pair of ToR is a vtep, traffic > > > is encapsulated as VXLAN, and routes between vteps are exchanged with > > > BGP. > > > > > > > There has been a lot of work put into TripleO to allow you to provision > > hosts across L3 boundaries using DHCP relay. You can create a routed > > provisioning network using "helper-address" or vendor-specific commands > > on your top-of-rack switches, and a different subnet and DHCP address > > pool per rack. > Yes I saw that in the doc. I was not planning on using this for reasons I mentioned in another reply (this provisioning network is ""useless most of the time"" since there is almost no provisioning happening :D ) If any, I would love to work on Ironic DHCP-less deployments which was almost working last time I tried and I saw Ironic team contributing fixes since then. 
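(For anyone following along, the per-rack provisioning subnets Dan mentions
end up declared in undercloud.conf, roughly as in the sketch below. The leaf
names and addressing are made-up placeholders just to show the shape of it --
the spine/leaf guide linked earlier in the thread has the real parameters:)

```
[DEFAULT]
subnets = leaf0,leaf1,leaf2
local_subnet = leaf0

[leaf1]
cidr = 192.168.14.0/24
dhcp_start = 192.168.14.10
dhcp_end = 192.168.14.90
inspection_iprange = 192.168.14.100,192.168.14.120
gateway = 192.168.14.1
masquerade = False
```

The ToR in each rack then only needs a DHCP relay (ip helper-address or the
vendor equivalent) pointing back at the undercloud, so nothing has to be
stretched for provisioning either.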
> > > > > > I am looking into supporting floating IPs in there: I expect floating > > > IPs to be able to move between racks, as such I am looking into > > > publishing the route for a FIP towards an hypervisor, through BGP. > > > Each fip is a /32 subnet with an hypervisor tenant's IP as next hop. > > > > This is becoming a very common architecture, and that is why there are > > several projects working to achieve the same goal with slightly > > different implementations. > > > > > > > > It seems there are several ideas to achieve this (it was discussed > > > [before][1] in ovs conference) > > > - using [neutron-dynamic-routing][2] - that seems to have some gaps > > > for OVN. It uses os-ken to talk to switches and exchange routes > > > - using [OVN BGP agent][3] that uses FRR, it seems there is a related > > > [RFE][4] for integration in tripleo > > > > > > There is btw also a [BGPVPN][5] project (it does not match my usecase > > > as far as I tried to understand it) that also has some code that talks > > > BGP to switches, already integrated in tripleo. > > > > > > For my tests, I was able to use the neutron-dynamic-routing project > > > (almost) as documented, with a few changes: > > > - for traffic going from VMs to outside the stack, the hypervisor was > > > trying to resolve the "gateway of fIPs" with ARP request which does > > > not make any sense. I created a dummy port with the mac address of the > > > virtual router of the switches: > > > ``` > > > $ openstack port list --mac-address 00:1c:73:00:00:11 -f yaml > > > - Fixed IP Addresses: > > > - ip_address: 10.64.254.1 > > > subnet_id: 8f37 > > > ID: 4028 > > > MAC Address: 00:1c:73:00:00:11 > > > Name: lagw > > > Status: DOWN > > > ``` > > > this prevent the hypervisor to send ARP requests to a non existent gateway > > > - for traffic coming back, we start the neutron-bgp-dragent agent on > > > the controllers. We create the right bgp speaker, peers, etc. > > > - neutron-bgp-dragent seems to work primarily with ovs ml2 plugin, it > > > selects fips and join with ports owned by a "floatingip_agent_gateway" > > > which does not exist on OVN. 
We can define ourselves some ports so > > > that the dragent is able to find the tenant IP of a host: > > > ``` > > > openstack port create --network provider --device-owner > > > network:floatingip_agent_gateway --host cpu35d.cloud --fixed-ip > > > ip-address=10.64.245.102 ag2 > > > ``` > > > - when creating a floating IP and assigning a port to it, Neutron > > > reads changes from OVN SB and fills the binding information into the > > > port: > > > ``` > > > $ openstack port show -c binding_host_id `openstack floating ip show > > > 10.64.254.177 -f value -c port_id` > > > +-----------------+----------------------------------------+ > > > | Field | Value | > > > +-----------------+----------------------------------------+ > > > | binding_host_id | cpu35d.cloud | > > > +-----------------+----------------------------------------+ > > > ``` > > > this allows the dragent to publish the route for the fip > > > ``` > > > $ openstack bgp speaker list advertised routes bgpspeaker > > > +------------------+---------------+ > > > | Destination | Nexthop | > > > +------------------+---------------+ > > > | 10.64.254.177/32 | 10.64.245.102 | > > > +------------------+---------------+ > > > ``` > > > - traffic reaches the hypervisor but (for reason I don't understand) I > > > had to add a rule > > > ``` > > > $ ip rule > > > 0: from all lookup local > > > 32765: from all iif vlan1234 lookup ovn > > > 32766: from all lookup main > > > 32767: from all lookup default > > > $ ip route show table ovn > > > 10.64.254.177 dev vlan1234 scope link > > > ``` > > > so that the traffic coming for the fip is not immediately discarded by > > > the hypervisor (it's not an ideal solution but it is a workaround that > > > makes my one fIP work!) > > > > > > So all in all it seems it would be possible to use the > > > neutron-dynamic-routing agent, with some minor modifications (eg: to > > > also publish the fip of the OVN L3 gateway router). > > > > > > I am wondering whether I have overlooked anything, and if such kind of > > > deployment (OVN + neutron dynamic routing or similar) is already in > > > use somewhere. Does it make sense to have a RFE for better integration > > > between OVN and neutron-dynamic-routing? > > > > I have been helping to contribute to integrating FRR with OVN in order > > to advertise FIPs and provider network IPs into BGP. The OVN BGP Agent > > is very new, and I'm pretty sure that nobody is using it in production > > yet. However the initial implementation is fairly simple and hopefully > > it will mature quickly. > > > > As you discovered, the solution that uses neutron-bgp-dragent and os-ken > > is not compatible with OVN > Pretty much the contrary, it basically worked. There are a few differences but the gap seems very tiny (unless I overlooked something and I'm fundamentally wrong) I don't understand why a new project would be needed to make it work for OVN. > > >, that is why ovs-bgp-agent is being > > developed. You should be able to try the ovs-bgp-agent with FRR and > > properly configured routing switches, it functions for the basic use case. > > > > The OVN BGP Agent will ensure that FIP and provider network IPs are > > present in the kernel as a /32 or /128 host route, which is then > > advertised into the BGP fabric using the FRR BGP daemon. If the default > > route is received from BGP it will be installed into the kernel by the > > FRR zebra daemon which syncs kernel routes with the FRR BGP routing > > table. 
The OVN BGP agent installs flows for the Neutron network gateways > > that hand off traffic to the kernel for routing. Since the kernel > > routing table is used, the agent isn't compatible with DPDK fast > > datapath yet. > > > > We don't have good documentation for the OVN BGP integration yet. I've > > only recently been able to make it my primary priority, and some of the > > other engineers which have done the initial proof of concept are moving > > on to other projects. There will be some discussions at the upcoming > > OpenStack PTG about this work, but I am hopeful that the missing pieces > > for your use case will come about in the Yoga cycle. > I did not try to run the OVN BGP agent but I saw your blog posts and I think it's enough to get started with. I still don't get why an extra OVN BGP agent would be needed. One thing I was wondering from the blog posts (and your reply here) is whether every single compute would need connectivity to the physical switches to publish the routes - as the dragent runs on the controller node you only need to configure connectivity between the controllers and the physical switches while in the FRR case you need to open much more. > Would the developments in the Yoga cycle be focused on the OVN BGP agent only, and so there is no interest in improving the neutron-dynamic-routing project ? > Thanks for your insightful comments :) > > > > > > > > > Thanks > > > Francois From mark at stackhpc.com Wed Oct 13 08:09:05 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 13 Oct 2021 09:09:05 +0100 Subject: [kolla-ansible][neutron] configuration question In-Reply-To: References: Message-ID: On Tue, 12 Oct 2021 at 18:47, Ignazio Cassano wrote: > > Reading at this bug: > https://bugs.launchpad.net/kolla-ansible/+bug/1626259 > It seems only for documentation, so it must work. > Right? > Ignazio As mentioned in the above bug: neutron_bridge_name: "br-ex,br-ex2" neutron_external_interface: "eth1,eth2" Mark > > Il giorno mar 12 ott 2021 alle ore 18:38 Ignazio Cassano ha scritto: >> >> Hello Everyone, >> I need to know if it possible configure kolla neutron ovs with more than one bridge mappings, for example: >> >> bridge_mappings = physnet1:br-ex,physnet2:br-ex2 >> >> I figure out that in standard configuration ansible playbook create br-ex and add >> >> the interface with variable "neutron_external_interface" under br-ex. >> >> What can I do if I need to do if I wand more than one bridge ? >> >> How kolla ansible playbook can help in this case ? >> >> I could use multiple bridges in /etc/kolla/config neutron configuration files, but I do not know how ansible playbook can do the job. >> >> because I do not see any variable can help me in /etc/kolla/globals.yml >> Thanks >> >> Ignazio From ignaziocassano at gmail.com Wed Oct 13 08:09:56 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 13 Oct 2021 10:09:56 +0200 Subject: [kolla-ansible][neutron] configuration question In-Reply-To: References: Message-ID: Many thanks Ignazio Il giorno mer 13 ott 2021 alle ore 10:09 Mark Goddard ha scritto: > On Tue, 12 Oct 2021 at 18:47, Ignazio Cassano > wrote: > > > > Reading at this bug: > > https://bugs.launchpad.net/kolla-ansible/+bug/1626259 > > It seems only for documentation, so it must work. > > Right? 
> > Ignazio > > As mentioned in the above bug: > > neutron_bridge_name: "br-ex,br-ex2" > neutron_external_interface: "eth1,eth2" > > Mark > > > > > Il giorno mar 12 ott 2021 alle ore 18:38 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > >> > >> Hello Everyone, > >> I need to know if it possible configure kolla neutron ovs with more > than one bridge mappings, for example: > >> > >> bridge_mappings = physnet1:br-ex,physnet2:br-ex2 > >> > >> I figure out that in standard configuration ansible playbook create > br-ex and add > >> > >> the interface with variable "neutron_external_interface" under br-ex. > >> > >> What can I do if I need to do if I wand more than one bridge ? > >> > >> How kolla ansible playbook can help in this case ? > >> > >> I could use multiple bridges in /etc/kolla/config neutron configuration > files, but I do not know how ansible playbook can do the job. > >> > >> because I do not see any variable can help me in /etc/kolla/globals.yml > >> Thanks > >> > >> Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at bitswalk.com Wed Oct 13 12:19:32 2021 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Wed, 13 Oct 2021 14:19:32 +0200 Subject: [KEYSTONE][POLICIES] - Overrides that don't work? In-Reply-To: References: Message-ID: All right, I'll test that out a bit more using a native Keystone user type as for now I'm dealing with ADFS/SSO based users that can't use CLI because ECP isn't available and so rely on Application Credentials that are project scoped ^^ Le mar. 12 oct. 2021 ? 18:18, Ben Nemec a ?crit : > Probably. I'm not an expert on writing Keystone policies so I can't > promise anything. :-) > > However, I'm fairly confident that if you get a properly scoped token it > will get you past your current error. Anything beyond that would be a > barely educated guess on my part. > > On 10/11/21 12:18 PM, Ga?l THEROND wrote: > > Hi ben! Thanks a lot for the answer! > > > > Ok I?ll get a look at that, but if I correctly understand a user with a > > role of project-admin attached to him as a scoped to domain he should be > > able to add users to a group once the policy update right? > > > > Once again thanks a lot for your answer! > > > > Le lun. 11 oct. 2021 ? 17:25, Ben Nemec > > a ?crit : > > > > I don't believe it's possible to override the scope of a policy > > rule. In > > this case it sounds like the user should request a domain-scoped > token > > to perform this operation. For details on who to do that, see > > > https://docs.openstack.org/keystone/wallaby/admin/tokens-overview.html#authorization-scopes > > < > https://docs.openstack.org/keystone/wallaby/admin/tokens-overview.html#authorization-scopes > > > > > > On 10/6/21 7:52 AM, Ga?l THEROND wrote: > > > Hi team, > > > > > > I'm having a weird behavior with my Openstack platform that makes > me > > > think I may have misunderstood some mechanisms on the way > > policies are > > > working and especially the overriding. > > > > > > So, long story short, I've few services that get custom policies > > such as > > > glance that behave as expected, Keystone's one aren't. 
> > > > > > All in all, here is what I'm understanding of the mechanism: > > > > > > This is the keystone policy that I'm looking to override: > > > https://paste.openstack.org/show/bwuF6jFISscRllWdUURL/ > > > > > > > > > > > > > This policy default can be found in here: > > > > > > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > > > > > > > > < > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/master/keystone/common/policies/group.py#L197 > >> > > > > > > Here is the policy that I'm testing: > > > https://paste.openstack.org/show/bHQ0PXvOro4lXNTlxlie/ > > > > > > > > > > > > > I know, this policy isn't taking care of the admin role but it's > > not the > > > point. > > > > > > From my understanding, any user with the project-manager role > > should be > > > able to add any available user on any available group as long as > the > > > project-manager domain is the same as the target. > > > > > > However, when I'm doing that, keystone complains that I'm not > > authorized > > > to do so because the user token scope is 'PROJECT' where it > > should be > > > 'SYSTEM' or 'DOMAIN'. > > > > > > Now, I wouldn't be surprised of that message being thrown out > > with the > > > default policy as it's stated on the code with the following: > > > > > > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > > > > > > > > < > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > > < > https://opendev.org/openstack/keystone/src/branch/stable/ussuri/keystone/common/policies/group.py#L197 > >> > > > > > > So the question is, if the custom policy doesn't override the > > default > > > scope_types how am I supposed to make it work? > > > > > > I hope it was clear enough, but if not, feel free to ask me for > more > > > information. > > > > > > PS: I've tried to assign this role with a domain scope to my user > > and > > > I've still the same issue. > > > > > > Thanks a lot everyone! > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Oct 13 12:47:22 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 13 Oct 2021 09:47:22 -0300 Subject: [cinder] Bug deputy report for week of 10-13-2021 Message-ID: This is a bug report from 10-06-2021-15-09 to 10-13-2021. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.launchpad.net/cinder/+bug/1946736 'Powerflex driver: update supported storage versions'. - https://bugs.launchpad.net/cinder/+bug/1946350 '[gate-failure] nova-live-migration evacuation failure due to slow lvchange -a command in c-vol during volume attachment update'. - https://bugs.launchpad.net/cinder/+bug/1946340 '[gate-failure] Unable to stack on fedora34 due to cinder pulling in oslo.vmware and suds-jurko that uses use_2to3 that is invalid with setuptools >=58.0.0'. - https://bugs.launchpad.net/cinder/+bug/1946263 'NetApp ONTAP Failing migrating volume from/to FlexGroup pool'. 
Low - https://bugs.launchpad.net/cinder/+bug/1946618 'Add same volume to the group-update does not show proper error'. Wishlist - https://bugs.launchpad.net/cinder/+bug/1946645 '[doc] Install and configure a storage node in cinder'. Cheers -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpeacock at redhat.com Wed Oct 13 13:10:17 2021 From: dpeacock at redhat.com (David Peacock) Date: Wed, 13 Oct 2021 09:10:17 -0400 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command In-Reply-To: References: Message-ID: Sounds like progress, thanks for the update. For clarification, which version are you attempting to deploy? Upstream master? Thanks, David On Wed, Oct 13, 2021 at 3:57 AM Anirudh Gupta wrote: > Hi David, > > Thanks for your response. > In order to run pre-introspection, I debugged and created an inventory > file of my own having the following content > > [Undercloud] > undercloud > > With this and also with the file you mentioned, I was able to run > pre-introspection successfully. > > (undercloud) [stack at undercloud ~]$ openstack tripleo validator run > --group pre-introspection -i > tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml > > +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ > | UUID | Validations | > Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | > > +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ > | 6cdc7c84-d278-430a-b6fc-3893e42310d8 | check-cpu | > PASSED | localhost | localhost | | 0:00:01.116 | > | ac0d54a5-51c3-4f52-9dba-2a9b26583591 | check-disk-space | > PASSED | localhost | localhost | | 0:00:03.546 | > | 3af6fefc-47d0-40b1-bd5b-88e03e0f61ef | check-ram | > PASSED | localhost | localhost | | 0:00:01.069 | > | e8d17007-6c46-4959-8bfc-dc59dd77ba65 | check-selinux-mode | > PASSED | localhost | localhost | | 0:00:01.395 | > | 28df7ed3-8cea-4a4d-af34-14c8eec406ea | check-network-gateway | > PASSED | undercloud | undercloud | | 0:00:02.347 | > | efa6b4ab-de40-42a0-815e-238e5b81995c | undercloud-disk-space | > PASSED | undercloud | undercloud | | 0:00:03.657 | > | 89293cce-5f30-4626-b326-5cfeff48ab0c | undercloud-neutron-sanity-check | > PASSED | undercloud | undercloud | | 0:00:07.715 | > | 0da9986f-8fc6-46f7-8936-c8b838c12c7b | ctlplane-ip-range | > PASSED | undercloud | undercloud | | 0:00:01.973 | > | 89f286ee-cd83-4d05-8d99-bffd03df142b | dhcp-introspection | > PASSED | undercloud | undercloud | | 0:00:06.364 | > | c5256e61-f787-4a1b-9e1a-1eff0c0b2bb6 | undercloud-tokenflush | > PASSED | undercloud | undercloud | | 0:00:01.209 | > > +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ > > > But passing this file while pre-deployment, it is still failing. 
> (undercloud) [stack at undercloud undercloud]$ openstack tripleo validator > run --group pre-deployment -i tripleo-ansible-inventory.yaml > > +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ > | UUID | Validations > | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration > | > > +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ > | 6deebd06-cf12-4083-a4f2-a31306a719b3 | 512e > | PASSED | localhost | localhost | | > 0:00:00.511 | > | a2b80c05-40c0-4dd6-9d8d-03be0f5278ba | dns > | PASSED | localhost | localhost | | > 0:00:00.428 | > | bd3c32b3-6a0e-424c-9d2e-2898c5bb50ef | service-status > | PASSED | all | undercloud | | > 0:00:05.923 | > | 7342190b-2ad9-4639-91c7-582ae4b141c6 | validate-selinux > | PASSED | all | undercloud | | > 0:00:02.299 | > | 665c4d42-e058-4e9d-9ee1-30e29b3a75c8 | package-version > | FAILED | all | undercloud | | > 0:03:34.295 | > | e0001906-5a8c-4f9b-9ad7-7b5b4d4b8d22 | ceph-ansible-installed > | PASSED | undercloud | undercloud | | > 0:00:02.723 | > | beb5bf3d-3ee8-4fd6-8daa-0cf13023c1f3 | ceph-dependencies-installed > | PASSED | allovercloud | undercloud | | > 0:00:02.610 | > | d872e781-4cd2-4509-ad51-74d7f3b3ebbf | tls-everywhere-pre-deployment > | FAILED | undercloud | undercloud | | > 0:00:36.546 | > | bc7e8940-d61a-4349-a5be-a41312b8bd2f | undercloud-debug > | FAILED | undercloud | undercloud | | > 0:00:01.702 | > | 8de4f037-ac24-4700-b449-405e723a7e50 | > collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud > | | 0:00:00.936 | > | 1aadf9f7-a200-499a-826f-06c2ad3f1ab7 | undercloud-heat-purge-deleted > | PASSED | undercloud | undercloud | | > 0:00:02.232 | > | db5204af-a054-4eae-9325-c2f592997b59 | undercloud-process-count > | PASSED | undercloud | undercloud | | > 0:00:07.770 | > | 7fdb9935-a30d-4356-8524-23065da894e4 | default-node-count > | FAILED | undercloud | undercloud | | > 0:00:00.942 | > | 0868a984-7de0-42f0-8d6b-abb19c72c98b | dhcp-provisioning > | FAILED | undercloud | undercloud | | > 0:00:01.668 | > | 7796624f-5b13-4d66-8dce-8998f2370625 | ironic-boot-configuration > | FAILED | undercloud | undercloud | | > 0:00:00.935 | > | e087bbae-6371-4e2e-9445-0fcc1f936b96 | network-environment > | FAILED | undercloud | undercloud | | > 0:00:00.936 | > | db93613d-9cab-4954-949f-d7b2578c20c5 | node-disks > | FAILED | undercloud | undercloud | | > 0:00:01.741 | > | 66bed170-ffb1-4466-b065-9f6012abdd6e | switch-vlans > | FAILED | undercloud | undercloud | | > 0:00:01.795 | > | 4911cd84-26cf-4c43-ba5a-645c5c5f20b4 | system-encoding > | PASSED | all | undercloud | | > 0:00:00.393 | > > +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ > > > As per the response from Alex, This could probably because these > validations calls might be broken and and are not tested in CI > > I am moving forward with the deployment ignoring these errors as suggested > > Regards > Anirudh Gupta > > > On Tue, Oct 12, 2021 at 8:02 PM David Peacock wrote: > >> Hi Anirudh, >> >> You're hitting a known bug that we're in the process of propagating a fix >> for; sorry for this. :-) >> >> As per a patch we have under review, use the inventory file located under >> ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml. 
>> To generate an inventory file, use the playbook in "tripleo-ansible: >> cli-config-download.yaml". >> >> https://review.opendev.org/c/openstack/tripleo-validations/+/813535 >> >> Let us know if this doesn't put you on the right track. >> >> Thanks, >> David >> >> On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta wrote: >> >>> Hi Team, >>> >>> I am installing Tripleo using the below link >>> >>> >>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html >>> >>> In the Introspect section, When I executed the command >>> openstack tripleo validator run --group pre-introspection >>> >>> I got the following error: >>> >>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>> | UUID | Validations >>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | >>> >>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>> | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu >>> | PASSED | localhost | localhost | | 0:00:01.261 | >>> | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space >>> | PASSED | localhost | localhost | | 0:00:04.480 | >>> | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram >>> | PASSED | localhost | localhost | | 0:00:02.173 | >>> | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode >>> | PASSED | localhost | localhost | | 0:00:01.546 | >>> | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway >>> | FAILED | undercloud | No host matched | | | >>> | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space >>> | FAILED | undercloud | No host matched | | | >>> | 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check >>> | FAILED | undercloud | No host matched | | | >>> | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range >>> | FAILED | undercloud | No host matched | | | >>> | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection >>> | FAILED | undercloud | No host matched | | | >>> | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush >>> | FAILED | undercloud | No host matched | | | >>> >>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>> >>> >>> Then I created the following inventory file: >>> [Undercloud] >>> undercloud >>> >>> Passed this command while running the pre-introspection command. >>> It then executed successfully. 
>>> >>> >>> But with Pre-deployment, it is still failing even after passing the >>> inventory >>> >>> >>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>> | UUID | Validations >>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | >>> Duration | >>> >>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>> | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e >>> | PASSED | localhost | localhost | | >>> 0:00:00.504 | >>> | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns >>> | PASSED | localhost | localhost | | >>> 0:00:00.481 | >>> | 93611c13-49a2-4cae-ad87-099546459481 | service-status >>> | PASSED | all | undercloud | | >>> 0:00:06.942 | >>> | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux >>> | PASSED | all | undercloud | | >>> 0:00:02.433 | >>> | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version >>> | FAILED | all | undercloud | | >>> 0:00:03.576 | >>> | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed >>> | PASSED | undercloud | undercloud | | >>> 0:00:02.850 | >>> | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed >>> | FAILED | allovercloud | No host matched | | >>> | >>> | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment >>> | FAILED | undercloud | undercloud | | >>> 0:00:31.559 | >>> | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug >>> | FAILED | undercloud | undercloud | | >>> 0:00:02.057 | >>> | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | >>> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >>> | | 0:00:00.884 | >>> | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted >>> | FAILED | undercloud | undercloud | | >>> 0:00:02.138 | >>> | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count >>> | PASSED | undercloud | undercloud | | >>> 0:00:06.164 | >>> | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count >>> | FAILED | undercloud | undercloud | | >>> 0:00:00.934 | >>> | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning >>> | FAILED | undercloud | undercloud | | >>> 0:00:02.456 | >>> | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration >>> | FAILED | undercloud | undercloud | | >>> 0:00:00.882 | >>> | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment >>> | FAILED | undercloud | undercloud | | >>> 0:00:00.880 | >>> | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks >>> | FAILED | undercloud | undercloud | | >>> 0:00:01.934 | >>> | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans >>> | FAILED | undercloud | undercloud | | >>> 0:00:01.931 | >>> | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding >>> | PASSED | all | undercloud | | >>> 0:00:00.366 | >>> >>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>> >>> Also this step of passing the inventory file is not mentioned anywhere >>> in the document. Is there anything I am missing? >>> >>> Regards >>> Anirudh Gupta >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peter.matulis at canonical.com Wed Oct 13 14:46:08 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Wed, 13 Oct 2021 10:46:08 -0400 Subject: [docs] has_project_guide key Message-ID: Does the has_project_guide key in the www/project-data/.yaml file have any meaning? One of my projects has been dragging this key along from release to release and I do not see it documented [1]. I want to avoid unintended results if that key is removed. Thanks. [1]: https://docs.openstack.org/doc-contrib-guide/doc-tools/template-generator.html Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Wed Oct 13 15:16:28 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Wed, 13 Oct 2021 15:16:28 +0000 Subject: =?utf-8?B?UkU6IFByb2Jsw6htZSB3aXRoIGltYWdlIGZyb20gc25hcHNob3Q=?= In-Reply-To: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> Message-ID: <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> Franck; I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Regarding OpenStack, could you tell us what glance and cinder drivers you use? Have you done other volume to image before? Have you verified that the image finishes creating before trying to create a VM from it? I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thank you, Dominic L. Hilsbos, MBA Vice President ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] Sent: Wednesday, October 13, 2021 12:58 AM To: openstack-discuss Subject: Probl?me with image from snapshot Hello and first sorry for my english? thanks google. Something is wrong with what I want to do: I use Wallaby, it works very well (apart from VpnaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). Here is what I want to do and which does not work as I want: - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. I wanted to take a snapshot of this instance and then create a new image from this snapshot. And that users use this new image. I create the snapshot, I place the "--public" parameter on the new image. I'm trying to create a new instance from this snapshot with the admin account: it works. I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? 
Thanks if you have ideas for helping me Franck VEDEL From dpeacock at redhat.com Wed Oct 13 15:44:19 2021 From: dpeacock at redhat.com (David Peacock) Date: Wed, 13 Oct 2021 11:44:19 -0400 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command In-Reply-To: References: Message-ID: Thank you. Good to know. :-) On Wed, Oct 13, 2021 at 11:26 AM Anirudh Gupta wrote: > Hi David > > I am trying this on Openstack Wallaby Release. > > Regards > Anirudh Gupta > > On Wed, 13 Oct, 2021, 6:40 pm David Peacock, wrote: > >> Sounds like progress, thanks for the update. >> >> For clarification, which version are you attempting to deploy? Upstream >> master? >> >> Thanks, >> David >> >> On Wed, Oct 13, 2021 at 3:57 AM Anirudh Gupta >> wrote: >> >>> Hi David, >>> >>> Thanks for your response. >>> In order to run pre-introspection, I debugged and created an inventory >>> file of my own having the following content >>> >>> [Undercloud] >>> undercloud >>> >>> With this and also with the file you mentioned, I was able to run >>> pre-introspection successfully. >>> >>> (undercloud) [stack at undercloud ~]$ openstack tripleo validator run >>> --group pre-introspection -i >>> tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml >>> >>> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ >>> | UUID | Validations >>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | >>> >>> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ >>> | 6cdc7c84-d278-430a-b6fc-3893e42310d8 | check-cpu >>> | PASSED | localhost | localhost | | 0:00:01.116 | >>> | ac0d54a5-51c3-4f52-9dba-2a9b26583591 | check-disk-space >>> | PASSED | localhost | localhost | | 0:00:03.546 | >>> | 3af6fefc-47d0-40b1-bd5b-88e03e0f61ef | check-ram >>> | PASSED | localhost | localhost | | 0:00:01.069 | >>> | e8d17007-6c46-4959-8bfc-dc59dd77ba65 | check-selinux-mode >>> | PASSED | localhost | localhost | | 0:00:01.395 | >>> | 28df7ed3-8cea-4a4d-af34-14c8eec406ea | check-network-gateway >>> | PASSED | undercloud | undercloud | | 0:00:02.347 | >>> | efa6b4ab-de40-42a0-815e-238e5b81995c | undercloud-disk-space >>> | PASSED | undercloud | undercloud | | 0:00:03.657 | >>> | 89293cce-5f30-4626-b326-5cfeff48ab0c | undercloud-neutron-sanity-check >>> | PASSED | undercloud | undercloud | | 0:00:07.715 | >>> | 0da9986f-8fc6-46f7-8936-c8b838c12c7b | ctlplane-ip-range >>> | PASSED | undercloud | undercloud | | 0:00:01.973 | >>> | 89f286ee-cd83-4d05-8d99-bffd03df142b | dhcp-introspection >>> | PASSED | undercloud | undercloud | | 0:00:06.364 | >>> | c5256e61-f787-4a1b-9e1a-1eff0c0b2bb6 | undercloud-tokenflush >>> | PASSED | undercloud | undercloud | | 0:00:01.209 | >>> >>> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ >>> >>> >>> But passing this file while pre-deployment, it is still failing. 
>>> (undercloud) [stack at undercloud undercloud]$ openstack tripleo validator >>> run --group pre-deployment -i tripleo-ansible-inventory.yaml >>> >>> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ >>> | UUID | Validations >>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration >>> | >>> >>> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ >>> | 6deebd06-cf12-4083-a4f2-a31306a719b3 | 512e >>> | PASSED | localhost | localhost | | >>> 0:00:00.511 | >>> | a2b80c05-40c0-4dd6-9d8d-03be0f5278ba | dns >>> | PASSED | localhost | localhost | | >>> 0:00:00.428 | >>> | bd3c32b3-6a0e-424c-9d2e-2898c5bb50ef | service-status >>> | PASSED | all | undercloud | | >>> 0:00:05.923 | >>> | 7342190b-2ad9-4639-91c7-582ae4b141c6 | validate-selinux >>> | PASSED | all | undercloud | | >>> 0:00:02.299 | >>> | 665c4d42-e058-4e9d-9ee1-30e29b3a75c8 | package-version >>> | FAILED | all | undercloud | | >>> 0:03:34.295 | >>> | e0001906-5a8c-4f9b-9ad7-7b5b4d4b8d22 | ceph-ansible-installed >>> | PASSED | undercloud | undercloud | | >>> 0:00:02.723 | >>> | beb5bf3d-3ee8-4fd6-8daa-0cf13023c1f3 | ceph-dependencies-installed >>> | PASSED | allovercloud | undercloud | | >>> 0:00:02.610 | >>> | d872e781-4cd2-4509-ad51-74d7f3b3ebbf | tls-everywhere-pre-deployment >>> | FAILED | undercloud | undercloud | | >>> 0:00:36.546 | >>> | bc7e8940-d61a-4349-a5be-a41312b8bd2f | undercloud-debug >>> | FAILED | undercloud | undercloud | | >>> 0:00:01.702 | >>> | 8de4f037-ac24-4700-b449-405e723a7e50 | >>> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >>> | | 0:00:00.936 | >>> | 1aadf9f7-a200-499a-826f-06c2ad3f1ab7 | undercloud-heat-purge-deleted >>> | PASSED | undercloud | undercloud | | >>> 0:00:02.232 | >>> | db5204af-a054-4eae-9325-c2f592997b59 | undercloud-process-count >>> | PASSED | undercloud | undercloud | | >>> 0:00:07.770 | >>> | 7fdb9935-a30d-4356-8524-23065da894e4 | default-node-count >>> | FAILED | undercloud | undercloud | | >>> 0:00:00.942 | >>> | 0868a984-7de0-42f0-8d6b-abb19c72c98b | dhcp-provisioning >>> | FAILED | undercloud | undercloud | | >>> 0:00:01.668 | >>> | 7796624f-5b13-4d66-8dce-8998f2370625 | ironic-boot-configuration >>> | FAILED | undercloud | undercloud | | >>> 0:00:00.935 | >>> | e087bbae-6371-4e2e-9445-0fcc1f936b96 | network-environment >>> | FAILED | undercloud | undercloud | | >>> 0:00:00.936 | >>> | db93613d-9cab-4954-949f-d7b2578c20c5 | node-disks >>> | FAILED | undercloud | undercloud | | >>> 0:00:01.741 | >>> | 66bed170-ffb1-4466-b065-9f6012abdd6e | switch-vlans >>> | FAILED | undercloud | undercloud | | >>> 0:00:01.795 | >>> | 4911cd84-26cf-4c43-ba5a-645c5c5f20b4 | system-encoding >>> | PASSED | all | undercloud | | >>> 0:00:00.393 | >>> >>> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ >>> >>> >>> As per the response from Alex, This could probably because these >>> validations calls might be broken and and are not tested in CI >>> >>> I am moving forward with the deployment ignoring these errors as >>> suggested >>> >>> Regards >>> Anirudh Gupta >>> >>> >>> On Tue, Oct 12, 2021 at 8:02 PM David Peacock >>> wrote: >>> >>>> Hi Anirudh, >>>> >>>> You're hitting a known bug that we're in the process of propagating a >>>> fix for; sorry for this. 
:-) >>>> >>>> As per a patch we have under review, use the inventory file located >>>> under ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml. >>>> To generate an inventory file, use the playbook in "tripleo-ansible: >>>> cli-config-download.yaml". >>>> >>>> https://review.opendev.org/c/openstack/tripleo-validations/+/813535 >>>> >>>> Let us know if this doesn't put you on the right track. >>>> >>>> Thanks, >>>> David >>>> >>>> On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta >>>> wrote: >>>> >>>>> Hi Team, >>>>> >>>>> I am installing Tripleo using the below link >>>>> >>>>> >>>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html >>>>> >>>>> In the Introspect section, When I executed the command >>>>> openstack tripleo validator run --group pre-introspection >>>>> >>>>> I got the following error: >>>>> >>>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>>> | UUID | Validations >>>>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration >>>>> | >>>>> >>>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>>> | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu >>>>> | PASSED | localhost | localhost | | 0:00:01.261 >>>>> | >>>>> | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space >>>>> | PASSED | localhost | localhost | | >>>>> 0:00:04.480 | >>>>> | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram >>>>> | PASSED | localhost | localhost | | 0:00:02.173 >>>>> | >>>>> | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode >>>>> | PASSED | localhost | localhost | | >>>>> 0:00:01.546 | >>>>> | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> | 2f0239db-d530-48eb-b606-f82179e72e50 | >>>>> undercloud-neutron-sanity-check | FAILED | undercloud | No host matched | >>>>> | | >>>>> | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush >>>>> | FAILED | undercloud | No host matched | | >>>>> | >>>>> >>>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>>> >>>>> >>>>> Then I created the following inventory file: >>>>> [Undercloud] >>>>> undercloud >>>>> >>>>> Passed this command while running the pre-introspection command. >>>>> It then executed successfully. 
>>>>> >>>>> >>>>> But with Pre-deployment, it is still failing even after passing the >>>>> inventory >>>>> >>>>> >>>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>>> | UUID | Validations >>>>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | >>>>> Duration | >>>>> >>>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>>> | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e >>>>> | PASSED | localhost | localhost | | >>>>> 0:00:00.504 | >>>>> | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns >>>>> | PASSED | localhost | localhost | | >>>>> 0:00:00.481 | >>>>> | 93611c13-49a2-4cae-ad87-099546459481 | service-status >>>>> | PASSED | all | undercloud | | >>>>> 0:00:06.942 | >>>>> | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux >>>>> | PASSED | all | undercloud | | >>>>> 0:00:02.433 | >>>>> | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version >>>>> | FAILED | all | undercloud | | >>>>> 0:00:03.576 | >>>>> | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed >>>>> | PASSED | undercloud | undercloud | | >>>>> 0:00:02.850 | >>>>> | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed >>>>> | FAILED | allovercloud | No host matched | | >>>>> | >>>>> | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:31.559 | >>>>> | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:02.057 | >>>>> | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | >>>>> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >>>>> | | 0:00:00.884 | >>>>> | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:02.138 | >>>>> | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count >>>>> | PASSED | undercloud | undercloud | | >>>>> 0:00:06.164 | >>>>> | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:00.934 | >>>>> | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:02.456 | >>>>> | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:00.882 | >>>>> | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:00.880 | >>>>> | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:01.934 | >>>>> | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans >>>>> | FAILED | undercloud | undercloud | | >>>>> 0:00:01.931 | >>>>> | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding >>>>> | PASSED | all | undercloud | | >>>>> 0:00:00.366 | >>>>> >>>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>>> >>>>> Also this step of passing the inventory file is not mentioned anywhere >>>>> in the document. Is there anything I am missing? >>>>> >>>>> Regards >>>>> Anirudh Gupta >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
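Pulling the workaround from the thread above into one place: the validations are ordinary Ansible playbooks, so a "No host matched" result simply means the inventory handed to the validator contains no host or group matching the pattern (undercloud) that the playbooks target. A minimal sketch, with the file path used here as a placeholder:

    # /home/stack/inventory -- minimal static inventory, as used in the thread
    [Undercloud]
    undercloud

    # pass it explicitly to the validator
    openstack tripleo validator run --group pre-introspection -i /home/stack/inventory
    # or use the generated inventory quoted earlier in the thread
    openstack tripleo validator run --group pre-introspection -i ~/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml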
URL: From sbauza at redhat.com Wed Oct 13 15:49:47 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 13 Oct 2021 17:49:47 +0200 Subject: [nova][placement] No meeting next week Message-ID: As agreed during yesterday's meeting [1], Tuesday Oct 19th's meeting is *CANCELLED* as all of us will be attending the virtual PTG. I'm more than happy tho to see all of you next week thru video ! I'll add the PTG connection details in https://etherpad.opendev.org/p/nova-yoga-ptg -Sylvain [1] https://meetings.opendev.org/meetings/nova/2021/nova.2021-10-12-16.00.log.html#l-189 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Wed Oct 13 04:55:32 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 13 Oct 2021 10:25:32 +0530 Subject: [TripleO] Issue in running Pre-Introspection In-Reply-To: References: Message-ID: Hi Mathieu, Thanks for your reply. I am using Openstack Wallaby Release. The document I was referring to had not specified the usage of any inventory file. https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html Although I figured this out and passed the inventory file as follows [Undercloud] undercloud After passing this all the errors were removed Regards Anirudh Gupta On Tue, Oct 12, 2021 at 8:32 PM Mathieu Bultel wrote: > Hi, > > Which release are you using ? > You have to provide a valid inventory file via the openstack CLI in order > to allow the VF to know which hosts & ips is. > > Mathieu > > On Fri, Oct 1, 2021 at 5:17 PM Anirudh Gupta wrote: > >> Hi Team,, >> >> Upon further debugging, I found that pre-introspection internally calls >> the ansible playbook located at path /usr/share/ansible/validation-playbooks >> File "dhcp-introspection.yaml" has hosts mentioned as undercloud. >> >> - hosts: *undercloud* >> become: true >> vars: >> ... >> ... >> >> >> But the artifacts created for dhcp-introspection at >> location /home/stack/validations/artifacts/_dhcp-introspection.yaml_2021-10-01T11 >> has file *hosts *present which has *localhost* written into it as a >> result of which when command gets executed it gives the error *"Could >> not match supplied host pattern, ignoring: undercloud:"* >> >> Can someone suggest how is this artifacts written in tripleo and the way >> we can change hosts file entry to undercloud so that it can work >> >> Similar is the case with other tasks >> like undercloud-tokenflush, ctlplane-ip-range etc >> >> Regards >> Anirudh Gupta >> >> On Wed, Sep 29, 2021 at 4:47 PM Anirudh Gupta >> wrote: >> >>> Hi Team, >>> >>> I tried installing Undercloud using the below link: >>> >>> >>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud >>> >>> I am getting the following error: >>> >>> (undercloud) [stack at undercloud ~]$ openstack tripleo validator run >>> --group pre-introspection >>> Selected log directory '/home/stack/validations' does not exist. >>> Attempting to create it. 
>>> >>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>> | UUID | Validations >>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | >>> >>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>> | 7029c1f6-5ab4-465d-82d7-3f29058012ce | check-cpu >>> | PASSED | localhost | localhost | | 0:00:02.531 | >>> | db059017-30f1-4b97-925e-3f55b586d492 | check-disk-space >>> | PASSED | localhost | localhost | | 0:00:04.432 | >>> | e23dd9a1-90d3-4797-ae0a-b43e55ab6179 | check-ram >>> | PASSED | localhost | localhost | | 0:00:01.324 | >>> | 598ca02d-258a-44ad-b78d-3877321cdfe6 | check-selinux-mode >>> | PASSED | localhost | localhost | | 0:00:01.591 | >>> | c4435b4c-b432-4a1e-8a99-00638034a884 | *check-network-gateway >>> | FAILED* | undercloud | *No host matched* | | >>> | >>> | cb1eed23-ef2f-4acd-a43a-86fb09bf0372 | *undercloud-disk-space >>> | FAILED* | undercloud | *No host matched* | | >>> | >>> | abde5329-9289-4b24-bf16-c4d82b03e67a | *undercloud-neutron-sanity-check >>> | FAILED* | undercloud | *No host matched* | | >>> | >>> | d0e5fdca-ece6-4a37-b759-ed1fac31a10f | *ctlplane-ip-range >>> | FAILED* | undercloud | No host matched | | >>> | >>> | 91511807-225c-4852-bb52-6d0003c51d49 | *dhcp-introspection >>> | FAILED* | undercloud | No host matched | | >>> | >>> | e96f7704-d2fb-465d-972b-47e2f057449c |* undercloud-tokenflush >>> | FAILED *| undercloud | No host matched | | >>> | >>> >>> >>> As per the validation link, >>> >>> https://docs.openstack.org/tripleo-validations/wallaby/validations-pre-introspection-details.html >>> >>> check-network-gateway >>> >>> If gateway in undercloud.conf is different from local_ip, verify that >>> the gateway exists and is reachable >>> >>> Observation - In my case IP specified in local_ip and gateway, both are >>> pingable, but still this error is being observed >>> >>> >>> ctlplane-ip-range? >>> >>> >>> Check the number of IP addresses available for the overcloud nodes. >>> >>> Verify that the number of IP addresses defined in dhcp_start and >>> dhcp_end fields in undercloud.conf is not too low. >>> >>> - >>> >>> ctlplane_iprange_min_size: 20 >>> >>> Observation - In my case I have defined more than 20 IPs >>> >>> >>> Similarly for disk related issue, I have dedicated 100 GB space in /var >>> and / >>> >>> Filesystem Size Used Avail Use% Mounted on >>> devtmpfs 12G 0 12G 0% /dev >>> tmpfs 12G 84K 12G 1% /dev/shm >>> tmpfs 12G 8.7M 12G 1% /run >>> tmpfs 12G 0 12G 0% /sys/fs/cgroup >>> /dev/mapper/cl-root 100G 2.5G 98G 3% / >>> /dev/mapper/cl-home 47G 365M 47G 1% /home >>> /dev/mapper/cl-var 103G 1.1G 102G 2% /var >>> /dev/vda1 947M 200M 747M 22% /boot >>> tmpfs 2.4G 0 2.4G 0% /run/user/0 >>> tmpfs 2.4G 0 2.4G 0% /run/user/1000 >>> >>> Despite setting al the parameters, still I am not able to pass >>> pre-introspection checks. *"NO Host Matched" *is found in the table. >>> >>> >>> Regards >>> >>> Anirudh Gupta >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Wed Oct 13 05:41:00 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 13 Oct 2021 11:11:00 +0530 Subject: [TripleO] Timeout while introspecting Overcloud Node In-Reply-To: References: Message-ID: Hi Team,, To further update, this issue is regularly seen at my setup. 
I have created 2 different undercloud machines in order to confirm this. This issue got resolved when I rebooted the undercloud node once after its installation. Regards Anirudh Gupta On Tue, Oct 5, 2021 at 6:54 PM Anirudh Gupta wrote: > Hi Team, > > We were trying to provision Overcloud Nodes using the Tripleo wallaby > release. > For this, on Undercloud machine (Centos 8.4), we downloaded the > ironic-python and overcloud images from the following link: > > https://images.rdoproject.org/centos8/wallaby/rdo_trunk/current-tripleo/ > > After untarring, we executed the command > > *openstack overcloud image upload* > > This command setted the images at path /var/lib/ironic/images folder > successfully. > > Then we uploaded our instackenv.json file and executed the command > > *openstack overcloud node introspect --all-manageable* > > On the overcloud node, we are getting the Timeout error while getting the > agent.kernel and agent.ramdisk image. > > *http://10.0.1.10/8088/agent.kernel......Connection > timed out > (http://ipxe.org/4c0a6092 )* > *http://10.0.1.10/8088/agent.kernel......Connection > timed out > (http://ipxe.org/4c0a6092 )* > > However, from another test machine, when I tried *wget http://10.0.1.10/8088/agent.kernel > * - It successfully worked > > Screenshot is attached for the reference > > Can someone please help in resolving this issue. > > Regards > Anirudh Gupta > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Wed Oct 13 07:57:33 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 13 Oct 2021 13:27:33 +0530 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command In-Reply-To: References: Message-ID: Hi David, Thanks for your response. In order to run pre-introspection, I debugged and created an inventory file of my own having the following content [Undercloud] undercloud With this and also with the file you mentioned, I was able to run pre-introspection successfully. 
(undercloud) [stack at undercloud ~]$ openstack tripleo validator run --group pre-introspection -i tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ | 6cdc7c84-d278-430a-b6fc-3893e42310d8 | check-cpu | PASSED | localhost | localhost | | 0:00:01.116 | | ac0d54a5-51c3-4f52-9dba-2a9b26583591 | check-disk-space | PASSED | localhost | localhost | | 0:00:03.546 | | 3af6fefc-47d0-40b1-bd5b-88e03e0f61ef | check-ram | PASSED | localhost | localhost | | 0:00:01.069 | | e8d17007-6c46-4959-8bfc-dc59dd77ba65 | check-selinux-mode | PASSED | localhost | localhost | | 0:00:01.395 | | 28df7ed3-8cea-4a4d-af34-14c8eec406ea | check-network-gateway | PASSED | undercloud | undercloud | | 0:00:02.347 | | efa6b4ab-de40-42a0-815e-238e5b81995c | undercloud-disk-space | PASSED | undercloud | undercloud | | 0:00:03.657 | | 89293cce-5f30-4626-b326-5cfeff48ab0c | undercloud-neutron-sanity-check | PASSED | undercloud | undercloud | | 0:00:07.715 | | 0da9986f-8fc6-46f7-8936-c8b838c12c7b | ctlplane-ip-range | PASSED | undercloud | undercloud | | 0:00:01.973 | | 89f286ee-cd83-4d05-8d99-bffd03df142b | dhcp-introspection | PASSED | undercloud | undercloud | | 0:00:06.364 | | c5256e61-f787-4a1b-9e1a-1eff0c0b2bb6 | undercloud-tokenflush | PASSED | undercloud | undercloud | | 0:00:01.209 | +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ But passing this file while pre-deployment, it is still failing. 
(undercloud) [stack at undercloud undercloud]$ openstack tripleo validator run --group pre-deployment -i tripleo-ansible-inventory.yaml +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ | UUID | Validations | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ | 6deebd06-cf12-4083-a4f2-a31306a719b3 | 512e | PASSED | localhost | localhost | | 0:00:00.511 | | a2b80c05-40c0-4dd6-9d8d-03be0f5278ba | dns | PASSED | localhost | localhost | | 0:00:00.428 | | bd3c32b3-6a0e-424c-9d2e-2898c5bb50ef | service-status | PASSED | all | undercloud | | 0:00:05.923 | | 7342190b-2ad9-4639-91c7-582ae4b141c6 | validate-selinux | PASSED | all | undercloud | | 0:00:02.299 | | 665c4d42-e058-4e9d-9ee1-30e29b3a75c8 | package-version | FAILED | all | undercloud | | 0:03:34.295 | | e0001906-5a8c-4f9b-9ad7-7b5b4d4b8d22 | ceph-ansible-installed | PASSED | undercloud | undercloud | | 0:00:02.723 | | beb5bf3d-3ee8-4fd6-8daa-0cf13023c1f3 | ceph-dependencies-installed | PASSED | allovercloud | undercloud | | 0:00:02.610 | | d872e781-4cd2-4509-ad51-74d7f3b3ebbf | tls-everywhere-pre-deployment | FAILED | undercloud | undercloud | | 0:00:36.546 | | bc7e8940-d61a-4349-a5be-a41312b8bd2f | undercloud-debug | FAILED | undercloud | undercloud | | 0:00:01.702 | | 8de4f037-ac24-4700-b449-405e723a7e50 | collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud | | 0:00:00.936 | | 1aadf9f7-a200-499a-826f-06c2ad3f1ab7 | undercloud-heat-purge-deleted | PASSED | undercloud | undercloud | | 0:00:02.232 | | db5204af-a054-4eae-9325-c2f592997b59 | undercloud-process-count | PASSED | undercloud | undercloud | | 0:00:07.770 | | 7fdb9935-a30d-4356-8524-23065da894e4 | default-node-count | FAILED | undercloud | undercloud | | 0:00:00.942 | | 0868a984-7de0-42f0-8d6b-abb19c72c98b | dhcp-provisioning | FAILED | undercloud | undercloud | | 0:00:01.668 | | 7796624f-5b13-4d66-8dce-8998f2370625 | ironic-boot-configuration | FAILED | undercloud | undercloud | | 0:00:00.935 | | e087bbae-6371-4e2e-9445-0fcc1f936b96 | network-environment | FAILED | undercloud | undercloud | | 0:00:00.936 | | db93613d-9cab-4954-949f-d7b2578c20c5 | node-disks | FAILED | undercloud | undercloud | | 0:00:01.741 | | 66bed170-ffb1-4466-b065-9f6012abdd6e | switch-vlans | FAILED | undercloud | undercloud | | 0:00:01.795 | | 4911cd84-26cf-4c43-ba5a-645c5c5f20b4 | system-encoding | PASSED | all | undercloud | | 0:00:00.393 | +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ As per the response from Alex, This could probably because these validations calls might be broken and and are not tested in CI I am moving forward with the deployment ignoring these errors as suggested Regards Anirudh Gupta On Tue, Oct 12, 2021 at 8:02 PM David Peacock wrote: > Hi Anirudh, > > You're hitting a known bug that we're in the process of propagating a fix > for; sorry for this. :-) > > As per a patch we have under review, use the inventory file located under > ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml. > To generate an inventory file, use the playbook in "tripleo-ansible: > cli-config-download.yaml". 
> > https://review.opendev.org/c/openstack/tripleo-validations/+/813535 > > Let us know if this doesn't put you on the right track. > > Thanks, > David > > On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta wrote: > >> Hi Team, >> >> I am installing Tripleo using the below link >> >> >> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html >> >> In the Introspect section, When I executed the command >> openstack tripleo validator run --group pre-introspection >> >> I got the following error: >> >> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >> | UUID | Validations >> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | >> >> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >> | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu >> | PASSED | localhost | localhost | | 0:00:01.261 | >> | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space >> | PASSED | localhost | localhost | | 0:00:04.480 | >> | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram >> | PASSED | localhost | localhost | | 0:00:02.173 | >> | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode >> | PASSED | localhost | localhost | | 0:00:01.546 | >> | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway >> | FAILED | undercloud | No host matched | | | >> | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space >> | FAILED | undercloud | No host matched | | | >> | 2f0239db-d530-48eb-b606-f82179e72e50 | undercloud-neutron-sanity-check >> | FAILED | undercloud | No host matched | | | >> | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range >> | FAILED | undercloud | No host matched | | | >> | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection >> | FAILED | undercloud | No host matched | | | >> | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush >> | FAILED | undercloud | No host matched | | | >> >> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >> >> >> Then I created the following inventory file: >> [Undercloud] >> undercloud >> >> Passed this command while running the pre-introspection command. >> It then executed successfully. 
>> >> >> But with Pre-deployment, it is still failing even after passing the >> inventory >> >> >> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >> | UUID | Validations >> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | >> Duration | >> >> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >> | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e >> | PASSED | localhost | localhost | | >> 0:00:00.504 | >> | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns >> | PASSED | localhost | localhost | | >> 0:00:00.481 | >> | 93611c13-49a2-4cae-ad87-099546459481 | service-status >> | PASSED | all | undercloud | | >> 0:00:06.942 | >> | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux >> | PASSED | all | undercloud | | >> 0:00:02.433 | >> | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version >> | FAILED | all | undercloud | | >> 0:00:03.576 | >> | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed >> | PASSED | undercloud | undercloud | | >> 0:00:02.850 | >> | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed >> | FAILED | allovercloud | No host matched | | >> | >> | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment >> | FAILED | undercloud | undercloud | | >> 0:00:31.559 | >> | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug >> | FAILED | undercloud | undercloud | | >> 0:00:02.057 | >> | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | >> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >> | | 0:00:00.884 | >> | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted >> | FAILED | undercloud | undercloud | | >> 0:00:02.138 | >> | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count >> | PASSED | undercloud | undercloud | | >> 0:00:06.164 | >> | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count >> | FAILED | undercloud | undercloud | | >> 0:00:00.934 | >> | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning >> | FAILED | undercloud | undercloud | | >> 0:00:02.456 | >> | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration >> | FAILED | undercloud | undercloud | | >> 0:00:00.882 | >> | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment >> | FAILED | undercloud | undercloud | | >> 0:00:00.880 | >> | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks >> | FAILED | undercloud | undercloud | | >> 0:00:01.934 | >> | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans >> | FAILED | undercloud | undercloud | | >> 0:00:01.931 | >> | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding >> | PASSED | all | undercloud | | >> 0:00:00.366 | >> >> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >> >> Also this step of passing the inventory file is not mentioned anywhere in >> the document. Is there anything I am missing? >> >> Regards >> Anirudh Gupta >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
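Before ignoring the remaining FAILED validations, it can be worth confirming what the inventory actually resolves to, since several of the failures above (the "No host matched" and allovercloud rows) come from host patterns rather than from the checks themselves. A quick sanity check, using the generated inventory path quoted above:

    # show the groups and hosts the validator will see
    ansible-inventory -i ~/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml --graph
    # confirm the host the validation playbooks target is reachable
    ansible -i ~/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml undercloud -m ping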
URL: From anyrude10 at gmail.com Wed Oct 13 08:02:02 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 13 Oct 2021 13:32:02 +0530 Subject: [tripleo] Unable to deploy Overcloud Nodes Message-ID: Hi Team, As per the link below, While executing the command to deploy the overcloud nodes https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud I am executing the command - openstack overcloud deploy --templates On running this command, I am getting the following error *ERROR: (pymysql.err.OperationalError) (1045, "Access denied for user 'heat'@'10.255.255.4' (using password: YES)")(Background on this error at: http://sqlalche.me/e/e3q8 )* 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured while running the command: subprocess.CalledProcessError: Command '['sudo', 'podman', 'run', '--rm', '--user', 'heat', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', 'db_sync']' returned non-zero exit status 1. 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent call last): 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, self).run(parsed_args) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in run 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, self).run(parsed_args) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = self.take_action(parsed_args) or 0 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 1277, in take_action 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud self.setup_ephemeral_heat(parsed_args) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", line 767, in setup_ephemeral_heat 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud utils.launch_heat(self.heat_launcher, restore_db=restore_db) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 2706, in launch_heat 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud launcher.heat_db_sync(restore_db) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib/python3.6/site-packages/tripleoclient/heat_launcher.py", line 530, in heat_db_sync 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 
subprocess.check_call(cmd) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud File "/usr/lib64/python3.6/subprocess.py", line 311, in check_call 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud raise CalledProcessError(retcode, cmd) 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud subprocess.CalledProcessError: Command '['sudo', 'podman', 'run', '--rm', '--user', 'heat', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', 'db_sync']' returned non-zero exit status 1. 2021-10-13 05:46:28.390 183680 ERROR tripleoclient.v1.overcloud_deploy.DeployOvercloud 2021-10-13 05:46:28.400 183680 ERROR openstack [-] Command '['sudo', 'podman', 'run', '--rm', '--user', 'heat', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', 'db_sync']' returned non-zero exit status 1.: subprocess.CalledProcessError: Command '['sudo', 'podman', 'run', '--rm', '--user', 'heat', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', '--volume', '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', 'db_sync']' returned non-zero exit status 1. 2021-10-13 05:46:28.401 183680 INFO osc_lib.shell [-] END return value: 1 Can someone please help in resolving this issue. Are there any parameters, templates that need to be passed in order to make it work. Regards Anirudh Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: From faisalsheikh.cyber at gmail.com Wed Oct 13 11:19:48 2021 From: faisalsheikh.cyber at gmail.com (Faisal Sheikh) Date: Wed, 13 Oct 2021 16:19:48 +0500 Subject: [wallaby][neutron][ovn] SSL connection to OVN-NB/SB OVSDB Message-ID: Hi, I am using Openstack Wallaby release with OVN on Ubuntu 20.04. My environment consists of 2 compute nodes and 1 controller node. ovs-vswitchd (Open vSwitch) 2.15.0 Ubuntu Kernel Version: 5.4.0-88-generic compute node1 172.16.30.1 compute node2 172.16.30.3 controller/Network node IP 172.16.30.46 I want to configure the ovn southbound and northbound database to listen on SSL connection. Set a certificate, private key, and CA certificate on both compute nodes and controller nodes in /etc/neutron/plugins/ml2/ml2_conf.ini and using string ssl:IP:Port to connect the southbound/northbound database but I am unable to establish connection on SSL. It's not connecting to ovsdb-server on 6641/6642. Error in the neutron logs is like below: 2021-10-12 17:15:27.728 50561 WARNING neutron.quota.resource_registry [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] security_group_rule is already registered 2021-10-12 17:15:27.754 50561 WARNING keystonemiddleware.auth_token [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True. 
2021-10-12 17:15:27.761 50561 INFO oslo_service.service [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Starting 1 workers 2021-10-12 17:15:27.768 50561 INFO neutron.service [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Neutron service started, listening on 0.0.0.0:9696 2021-10-12 17:15:27.776 50561 ERROR ovsdbapp.backend.ovs_idl.idlutils [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 2021-10-12 17:15:27.779 50561 CRITICAL neutron [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unhandled error: neutron_lib.callbacks.exceptions.CallbackFailure: Callback neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 failed with "Could not retrieve schema from ssl:172.16.30.46:6641" 2021-10-12 17:15:27.779 50561 ERROR neutron Traceback (most recent call last): 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/bin/neutron-server", line 10, in 2021-10-12 17:15:27.779 50561 ERROR neutron sys.exit(main()) 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", line 19, in main 2021-10-12 17:15:27.779 50561 ERROR neutron server.boot_server(wsgi_eventlet.eventlet_wsgi_server) 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, in boot_server 2021-10-12 17:15:27.779 50561 ERROR neutron server_func() 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line 24, in eventlet_wsgi_server 2021-10-12 17:15:27.779 50561 ERROR neutron neutron_api = service.serve_wsgi(service.NeutronApiService) 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in serve_wsgi 2021-10-12 17:15:27.779 50561 ERROR neutron registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service) 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", line 60, in publish 2021-10-12 17:15:27.779 50561 ERROR neutron _get_callback_manager().publish(resource, event, trigger, payload=payload) 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 149, in publish 2021-10-12 17:15:27.779 50561 ERROR neutron return self.notify(resource, event, trigger, payload=payload) 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in _wrapped 2021-10-12 17:15:27.779 50561 ERROR neutron raise db_exc.RetryRequest(e) 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__ 2021-10-12 17:15:27.779 50561 ERROR neutron self.force_reraise() 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise 2021-10-12 17:15:27.779 50561 ERROR neutron raise self.value 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in _wrapped 2021-10-12 17:15:27.779 50561 ERROR neutron return function(*args, **kwargs) 2021-10-12 17:15:27.779 50561 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 174, in notify 2021-10-12 17:15:27.779 50561 ERROR neutron raise exceptions.CallbackFailure(errors=errors) 2021-10-12 17:15:27.779 50561 ERROR neutron 
neutron_lib.callbacks.exceptions.CallbackFailure: Callback neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 failed with "Could not retrieve schema from ssl:172.16.30.46:6641" 2021-10-12 17:15:27.779 50561 ERROR neutron 2021-10-12 17:15:27.783 50572 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager [-] Error during notification for neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-373774 process, after_init: Exception: Could not retrieve schema from ssl:172.16.30.46:6641 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager Traceback (most recent call last): 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 197, in _notify_loop 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager callback(resource, event, trigger, **kwargs) 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 294, in post_fork_initialize 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager self._wait_for_pg_drop_event() 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 357, in _wait_for_pg_drop_event 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 136, in nb_schema_helper 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line 721, in __get__ 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager return self.func(owner) 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", line 102, in schema_helper 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 215, in get_schema_helper 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager return create_schema_helper(fetch_schema_json(connection, schema_name)) 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 204, in fetch_schema_json 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager raise Exception("Could not retrieve schema from %s" % connection) 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager Exception: Could not retrieve schema from ssl:172.16.30.46:6641 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager 2021-10-12 17:15:27.787 50572 INFO neutron.wsgi [-] (50572) wsgi starting up on http://0.0.0.0:9696 2021-10-12 17:15:27.924 50572 INFO 
oslo_service.service [-] Parent process has died unexpectedly, exiting 2021-10-12 17:15:27.925 50572 INFO neutron.wsgi [-] (50572) wsgi exited, is_accepting=True 2021-10-12 17:15:29.709 50573 INFO neutron.common.config [-] Logging enabled! 2021-10-12 17:15:29.710 50573 INFO neutron.common.config [-] /usr/bin/neutron-server version 18.0.0 2021-10-12 17:15:29.712 50573 INFO neutron.common.config [-] Logging enabled! 2021-10-12 17:15:29.713 50573 INFO neutron.common.config [-] /usr/bin/neutron-server version 18.0.0 2021-10-12 17:15:29.899 50573 INFO keyring.backend [-] Loading KWallet 2021-10-12 17:15:29.904 50573 INFO keyring.backend [-] Loading SecretService 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading Windows 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading chainer 2021-10-12 17:15:29.908 50573 INFO keyring.backend [-] Loading macOS 2021-10-12 17:15:29.927 50573 INFO neutron.manager [-] Loading core plugin: ml2 2021-10-12 17:15:30.355 50573 INFO neutron.plugins.ml2.managers [-] Configured type driver names: ['flat', 'geneve'] 2021-10-12 17:15:30.357 50573 INFO neutron.plugins.ml2.drivers.type_flat [-] Arbitrary flat physical_network names allowed 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] Loaded type driver names: ['flat', 'geneve'] 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] Registered types: dict_keys(['flat', 'geneve']) 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] Tenant network_types: ['geneve'] 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] Configured extension driver names: ['port_security', 'qos'] 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] Loaded extension driver names: ['port_security', 'qos'] 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] Registered extension drivers: ['port_security', 'qos'] 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] Configured mechanism driver names: ['ovn'] 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] Loaded mechanism driver names: ['ovn'] 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] Registered mechanism drivers: ['ovn'] 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] No mechanism drivers provide segment reachability information for agent scheduling. 
2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] Initializing driver for type 'flat' 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.drivers.type_flat [-] ML2 FlatTypeDriver initialization complete 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] Initializing driver for type 'geneve' 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.drivers.type_tunnel [-] geneve ID ranges: [(1, 65536)] 2021-10-12 17:15:32.555 50573 INFO neutron.plugins.ml2.managers [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing extension driver 'port_security' 2021-10-12 17:15:32.555 50573 INFO neutron.plugins.ml2.extensions.port_security [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] PortSecurityExtensionDriver initialization complete 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing extension driver 'qos' 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing mechanism driver 'ovn' 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting OVNMechanismDriver 2021-10-12 17:15:32.562 50573 WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Firewall driver configuration is ignored 2021-10-12 17:15:32.586 50573 INFO neutron.services.logapi.drivers.ovn.driver [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] OVN logging driver registered 2021-10-12 17:15:32.588 50573 INFO neutron.plugins.ml2.plugin [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Modular L2 Plugin initialization complete 2021-10-12 17:15:32.589 50573 INFO neutron.plugins.ml2.managers [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Got port-security extension from driver 'port_security' 2021-10-12 17:15:32.589 50573 INFO neutron.extensions.vlantransparent [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Disabled vlantransparent extension. 
2021-10-12 17:15:32.589 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: ovn-router 2021-10-12 17:15:32.597 50573 INFO neutron.services.ovn_l3.plugin [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting OVNL3RouterPlugin 2021-10-12 17:15:32.597 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: qos 2021-10-12 17:15:32.600 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: metering 2021-10-12 17:15:32.603 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: port_forwarding 2021-10-12 17:15:32.605 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading service plugin ovn-router, it is required by port_forwarding 2021-10-12 17:15:32.606 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: segments 2021-10-12 17:15:32.684 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: auto_allocate 2021-10-12 17:15:32.685 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: tag 2021-10-12 17:15:32.687 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: timestamp 2021-10-12 17:15:32.689 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: network_ip_availability 2021-10-12 17:15:32.691 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: flavors 2021-10-12 17:15:32.693 50573 INFO neutron.manager [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: revisions 2021-10-12 17:15:32.695 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing extension manager. 
2021-10-12 17:15:32.696 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension address-group not supported by any of loaded plugins 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: address-scope 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension router-admin-state-down-before-update not supported by any of loaded plugins 2021-10-12 17:15:32.698 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: agent 2021-10-12 17:15:32.699 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension agent-resources-synced not supported by any of loaded plugins 2021-10-12 17:15:32.700 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: allowed-address-pairs 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: auto-allocated-topology 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: availability_zone 2021-10-12 17:15:32.702 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension availability_zone_filter not supported by any of loaded plugins 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension data-plane-status not supported by any of loaded plugins 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: default-subnetpools 2021-10-12 17:15:32.704 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dhcp_agent_scheduler not supported by any of loaded plugins 2021-10-12 17:15:32.705 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dns-integration not supported by any of loaded plugins 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dns-domain-ports not supported by any of loaded plugins 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dvr not supported by any of loaded plugins 2021-10-12 17:15:32.707 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension empty-string-filtering not supported by any of loaded plugins 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension expose-l3-conntrack-helper not supported by any of loaded plugins 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: expose-port-forwarding-in-fip 2021-10-12 17:15:32.709 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: external-net 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: extra_dhcp_opt 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: extraroute 2021-10-12 17:15:32.711 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - 
- -] Extension extraroute-atomic not supported by any of loaded plugins 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension filter-validation not supported by any of loaded plugins 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: floating-ip-port-forwarding-description 2021-10-12 17:15:32.713 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: fip-port-details 2021-10-12 17:15:32.714 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: flavors 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: floating-ip-port-forwarding 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension floatingip-pools not supported by any of loaded plugins 2021-10-12 17:15:32.716 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: ip_allocation 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension ip-substring-filtering not supported by any of loaded plugins 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: l2_adjacency 2021-10-12 17:15:32.718 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: router 2021-10-12 17:15:32.719 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-conntrack-helper not supported by any of loaded plugins 2021-10-12 17:15:32.720 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: ext-gw-mode 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-ha not supported by any of loaded plugins 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-flavors not supported by any of loaded plugins 2021-10-12 17:15:32.722 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-port-ip-change-not-allowed not supported by any of loaded plugins 2021-10-12 17:15:32.723 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3_agent_scheduler not supported by any of loaded plugins 2021-10-12 17:15:32.724 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension logging not supported by any of loaded plugins 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: metering 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: metering_source_and_destination_fields 2021-10-12 17:15:32.726 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: multi-provider 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: net-mtu 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: 
net-mtu-writable 2021-10-12 17:15:32.728 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: network_availability_zone 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: network-ip-availability 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension network-segment-range not supported by any of loaded plugins 2021-10-12 17:15:32.730 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: pagination 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: port-device-profile 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension port-mac-address-regenerate not supported by any of loaded plugins 2021-10-12 17:15:32.732 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: port-numa-affinity-policy 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: port-resource-request 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: binding 2021-10-12 17:15:32.734 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension binding-extended not supported by any of loaded plugins 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: port-security 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: project-id 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: provider 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos 2021-10-12 17:15:32.737 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-bw-limit-direction 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-bw-minimum-ingress 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-default 2021-10-12 17:15:32.739 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-fip 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension qos-gateway-ip not supported by any of loaded plugins 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-port-network-policy 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-rule-type-details 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: qos-rules-alias 2021-10-12 17:15:32.742 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: quotas 2021-10-12 17:15:32.743 50573 INFO 
neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: quota_details 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: rbac-policies 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension rbac-address-group not supported by any of loaded plugins 2021-10-12 17:15:32.745 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: rbac-address-scope 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension rbac-security-groups not supported by any of loaded plugins 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension rbac-subnetpool not supported by any of loaded plugins 2021-10-12 17:15:32.747 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: revision-if-match 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-revisions 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: router_availability_zone 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension router-service-type not supported by any of loaded plugins 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: security-groups-normalized-cidr 2021-10-12 17:15:32.750 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension port-security-groups-filtering not supported by any of loaded plugins 2021-10-12 17:15:32.751 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: security-groups-remote-address-group 2021-10-12 17:15:32.756 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: security-group 2021-10-12 17:15:32.757 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: segment 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: segments-peer-subnet-host-routes 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: service-type 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: sorting 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-segment 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-description 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension stateful-security-group not supported by any of loaded plugins 2021-10-12 17:15:32.761 50573 WARNING neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Did not find expected name "Stdattrs_common" in /usr/lib/python3/dist-packages/neutron/extensions/stdattrs_common.py 2021-10-12 17:15:32.762 
50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension subnet-dns-publish-fixed-ip not supported by any of loaded plugins 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension subnet_onboard not supported by any of loaded plugins 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: subnet-segmentid-writable 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension subnet-service-types not supported by any of loaded plugins 2021-10-12 17:15:32.764 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: subnet_allocation 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension subnetpool-prefix-ops not supported by any of loaded plugins 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension tag-ports-during-bulk-creation not supported by any of loaded plugins 2021-10-12 17:15:32.766 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-tag 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: standard-attr-timestamp 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension trunk not supported by any of loaded plugins 2021-10-12 17:15:32.768 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension trunk-details not supported by any of loaded plugins 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension uplink-status-propagation not supported by any of loaded plugins 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension vlan-transparent not supported by any of loaded plugins 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:network 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:subnet 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:subnetpool 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:port 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:router 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:floatingip 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of CountableResource for resource:rbac_policy 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry 
[req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:security_group 2021-10-12 17:15:32.779 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:security_group_rule 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:router 2021-10-12 17:15:32.781 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] router is already registered 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:floatingip 2021-10-12 17:15:32.782 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] floatingip is already registered 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of CountableResource for resource:rbac_policy 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] rbac_policy is already registered 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:security_group 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] security_group is already registered 2021-10-12 17:15:32.784 50573 INFO neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance of TrackedResource for resource:security_group_rule 2021-10-12 17:15:32.784 50573 WARNING neutron.quota.resource_registry [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] security_group_rule is already registered 2021-10-12 17:15:32.810 50573 WARNING keystonemiddleware.auth_token [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True. 
2021-10-12 17:15:32.816 50573 INFO oslo_service.service [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting 1 workers 2021-10-12 17:15:32.824 50573 INFO neutron.service [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Neutron service started, listening on 0.0.0.0:9696 2021-10-12 17:15:32.831 50573 ERROR ovsdbapp.backend.ovs_idl.idlutils [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 2021-10-12 17:15:32.834 50573 CRITICAL neutron [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unhandled error: neutron_lib.callbacks.exceptions.CallbackFailure: Callback neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 failed with "Could not retrieve schema from ssl:172.16.30.46:6641" 2021-10-12 17:15:32.834 50573 ERROR neutron Traceback (most recent call last): 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/bin/neutron-server", line 10, in 2021-10-12 17:15:32.834 50573 ERROR neutron sys.exit(main()) 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", line 19, in main 2021-10-12 17:15:32.834 50573 ERROR neutron server.boot_server(wsgi_eventlet.eventlet_wsgi_server) 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, in boot_server 2021-10-12 17:15:32.834 50573 ERROR neutron server_func() 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line 24, in eventlet_wsgi_server 2021-10-12 17:15:32.834 50573 ERROR neutron neutron_api = service.serve_wsgi(service.NeutronApiService) 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in serve_wsgi 2021-10-12 17:15:32.834 50573 ERROR neutron registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service) 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", line 60, in publish 2021-10-12 17:15:32.834 50573 ERROR neutron _get_callback_manager().publish(resource, event, trigger, payload=payload) 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 149, in publish 2021-10-12 17:15:32.834 50573 ERROR neutron return self.notify(resource, event, trigger, payload=payload) 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in _wrapped 2021-10-12 17:15:32.834 50573 ERROR neutron raise db_exc.RetryRequest(e) 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__ 2021-10-12 17:15:32.834 50573 ERROR neutron self.force_reraise() 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise 2021-10-12 17:15:32.834 50573 ERROR neutron raise self.value 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in _wrapped 2021-10-12 17:15:32.834 50573 ERROR neutron return function(*args, **kwargs) 2021-10-12 17:15:32.834 50573 ERROR neutron File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 174, in notify 2021-10-12 17:15:32.834 50573 ERROR neutron raise exceptions.CallbackFailure(errors=errors) 2021-10-12 17:15:32.834 50573 ERROR neutron 
neutron_lib.callbacks.exceptions.CallbackFailure: Callback neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 failed with "Could not retrieve schema from ssl:172.16.30.46:6641" 2021-10-12 17:15:32.834 50573 ERROR neutron 2021-10-12 17:15:32.838 50582 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager [-] Error during notification for neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-904522 process, after_init: Exception: Could not retrieve schema from ssl:172.16.30.46:6641 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager Traceback (most recent call last): 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 197, in _notify_loop 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager callback(resource, event, trigger, **kwargs) 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 294, in post_fork_initialize 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager self._wait_for_pg_drop_event() 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 357, in _wait_for_pg_drop_event 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", line 136, in nb_schema_helper 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line 721, in __get__ 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager return self.func(owner) 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", line 102, in schema_helper 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 215, in get_schema_helper 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager return create_schema_helper(fetch_schema_json(connection, schema_name)) 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 204, in fetch_schema_json 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager raise Exception("Could not retrieve schema from %s" % connection) 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager Exception: Could not retrieve schema from ssl:172.16.30.46:6641 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager 2021-10-12 17:15:32.842 50582 INFO neutron.wsgi [-] (50582) wsgi starting up on http://0.0.0.0:9696 2021-10-12 17:15:32.961 50582 INFO 
oslo_service.service [-] Parent process has died unexpectedly, exiting 2021-10-12 17:15:32.963 50582 INFO neutron.wsgi [-] (50582) wsgi exited, is_accepting=True 2021-10-12 17:15:34.722 50583 INFO neutron.common.config [-] Logging enabled! I would really appreciate any input in this regard. Best regards, Faisal Sheikh From anyrude10 at gmail.com Wed Oct 13 15:26:18 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 13 Oct 2021 20:56:18 +0530 Subject: [tripleo] Unable to execute pre-introspection and pre-deployment command In-Reply-To: References: Message-ID: Hi David I am trying this on Openstack Wallaby Release. Regards Anirudh Gupta On Wed, 13 Oct, 2021, 6:40 pm David Peacock, wrote: > Sounds like progress, thanks for the update. > > For clarification, which version are you attempting to deploy? Upstream > master? > > Thanks, > David > > On Wed, Oct 13, 2021 at 3:57 AM Anirudh Gupta wrote: > >> Hi David, >> >> Thanks for your response. >> In order to run pre-introspection, I debugged and created an inventory >> file of my own having the following content >> >> [Undercloud] >> undercloud >> >> With this and also with the file you mentioned, I was able to run >> pre-introspection successfully. >> >> (undercloud) [stack at undercloud ~]$ openstack tripleo validator run >> --group pre-introspection -i >> tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml >> >> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ >> | UUID | Validations >> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration | >> >> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ >> | 6cdc7c84-d278-430a-b6fc-3893e42310d8 | check-cpu >> | PASSED | localhost | localhost | | 0:00:01.116 | >> | ac0d54a5-51c3-4f52-9dba-2a9b26583591 | check-disk-space >> | PASSED | localhost | localhost | | 0:00:03.546 | >> | 3af6fefc-47d0-40b1-bd5b-88e03e0f61ef | check-ram >> | PASSED | localhost | localhost | | 0:00:01.069 | >> | e8d17007-6c46-4959-8bfc-dc59dd77ba65 | check-selinux-mode >> | PASSED | localhost | localhost | | 0:00:01.395 | >> | 28df7ed3-8cea-4a4d-af34-14c8eec406ea | check-network-gateway >> | PASSED | undercloud | undercloud | | 0:00:02.347 | >> | efa6b4ab-de40-42a0-815e-238e5b81995c | undercloud-disk-space >> | PASSED | undercloud | undercloud | | 0:00:03.657 | >> | 89293cce-5f30-4626-b326-5cfeff48ab0c | undercloud-neutron-sanity-check >> | PASSED | undercloud | undercloud | | 0:00:07.715 | >> | 0da9986f-8fc6-46f7-8936-c8b838c12c7b | ctlplane-ip-range >> | PASSED | undercloud | undercloud | | 0:00:01.973 | >> | 89f286ee-cd83-4d05-8d99-bffd03df142b | dhcp-introspection >> | PASSED | undercloud | undercloud | | 0:00:06.364 | >> | c5256e61-f787-4a1b-9e1a-1eff0c0b2bb6 | undercloud-tokenflush >> | PASSED | undercloud | undercloud | | 0:00:01.209 | >> >> +--------------------------------------+---------------------------------+--------+------------+----------------+-------------------+-------------+ >> >> >> But passing this file while pre-deployment, it is still failing. 
>> (undercloud) [stack at undercloud undercloud]$ openstack tripleo validator >> run --group pre-deployment -i tripleo-ansible-inventory.yaml >> >> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ >> | UUID | Validations >> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration >> | >> >> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ >> | 6deebd06-cf12-4083-a4f2-a31306a719b3 | 512e >> | PASSED | localhost | localhost | | >> 0:00:00.511 | >> | a2b80c05-40c0-4dd6-9d8d-03be0f5278ba | dns >> | PASSED | localhost | localhost | | >> 0:00:00.428 | >> | bd3c32b3-6a0e-424c-9d2e-2898c5bb50ef | service-status >> | PASSED | all | undercloud | | >> 0:00:05.923 | >> | 7342190b-2ad9-4639-91c7-582ae4b141c6 | validate-selinux >> | PASSED | all | undercloud | | >> 0:00:02.299 | >> | 665c4d42-e058-4e9d-9ee1-30e29b3a75c8 | package-version >> | FAILED | all | undercloud | | >> 0:03:34.295 | >> | e0001906-5a8c-4f9b-9ad7-7b5b4d4b8d22 | ceph-ansible-installed >> | PASSED | undercloud | undercloud | | >> 0:00:02.723 | >> | beb5bf3d-3ee8-4fd6-8daa-0cf13023c1f3 | ceph-dependencies-installed >> | PASSED | allovercloud | undercloud | | >> 0:00:02.610 | >> | d872e781-4cd2-4509-ad51-74d7f3b3ebbf | tls-everywhere-pre-deployment >> | FAILED | undercloud | undercloud | | >> 0:00:36.546 | >> | bc7e8940-d61a-4349-a5be-a41312b8bd2f | undercloud-debug >> | FAILED | undercloud | undercloud | | >> 0:00:01.702 | >> | 8de4f037-ac24-4700-b449-405e723a7e50 | >> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >> | | 0:00:00.936 | >> | 1aadf9f7-a200-499a-826f-06c2ad3f1ab7 | undercloud-heat-purge-deleted >> | PASSED | undercloud | undercloud | | >> 0:00:02.232 | >> | db5204af-a054-4eae-9325-c2f592997b59 | undercloud-process-count >> | PASSED | undercloud | undercloud | | >> 0:00:07.770 | >> | 7fdb9935-a30d-4356-8524-23065da894e4 | default-node-count >> | FAILED | undercloud | undercloud | | >> 0:00:00.942 | >> | 0868a984-7de0-42f0-8d6b-abb19c72c98b | dhcp-provisioning >> | FAILED | undercloud | undercloud | | >> 0:00:01.668 | >> | 7796624f-5b13-4d66-8dce-8998f2370625 | ironic-boot-configuration >> | FAILED | undercloud | undercloud | | >> 0:00:00.935 | >> | e087bbae-6371-4e2e-9445-0fcc1f936b96 | network-environment >> | FAILED | undercloud | undercloud | | >> 0:00:00.936 | >> | db93613d-9cab-4954-949f-d7b2578c20c5 | node-disks >> | FAILED | undercloud | undercloud | | >> 0:00:01.741 | >> | 66bed170-ffb1-4466-b065-9f6012abdd6e | switch-vlans >> | FAILED | undercloud | undercloud | | >> 0:00:01.795 | >> | 4911cd84-26cf-4c43-ba5a-645c5c5f20b4 | system-encoding >> | PASSED | all | undercloud | | >> 0:00:00.393 | >> >> +--------------------------------------+-------------------------------------+--------+--------------+----------------+-------------------+-------------+ >> >> >> As per the response from Alex, This could probably because these >> validations calls might be broken and and are not tested in CI >> >> I am moving forward with the deployment ignoring these errors as suggested >> >> Regards >> Anirudh Gupta >> >> >> On Tue, Oct 12, 2021 at 8:02 PM David Peacock >> wrote: >> >>> Hi Anirudh, >>> >>> You're hitting a known bug that we're in the process of propagating a >>> fix for; sorry for this. 
:-) >>> >>> As per a patch we have under review, use the inventory file located >>> under ~/tripleo-deploy/ directory: tripleo-ansible-inventory.yaml. >>> To generate an inventory file, use the playbook in "tripleo-ansible: >>> cli-config-download.yaml". >>> >>> https://review.opendev.org/c/openstack/tripleo-validations/+/813535 >>> >>> Let us know if this doesn't put you on the right track. >>> >>> Thanks, >>> David >>> >>> On Sat, Oct 9, 2021 at 5:12 PM Anirudh Gupta >>> wrote: >>> >>>> Hi Team, >>>> >>>> I am installing Tripleo using the below link >>>> >>>> >>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html >>>> >>>> In the Introspect section, When I executed the command >>>> openstack tripleo validator run --group pre-introspection >>>> >>>> I got the following error: >>>> >>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>> | UUID | Validations >>>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | Duration >>>> | >>>> >>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>> | 6e74e655-8f1b-439d-8d0b-205290669f80 | check-cpu >>>> | PASSED | localhost | localhost | | 0:00:01.261 >>>> | >>>> | edb371b8-bc13-4840-92b2-61c4e45978cf | check-disk-space >>>> | PASSED | localhost | localhost | | 0:00:04.480 | >>>> | 35c871b9-37d1-44d8-a475-508e642dfd8e | check-ram >>>> | PASSED | localhost | localhost | | 0:00:02.173 >>>> | >>>> | c12882a3-8730-4abf-bdcb-56b3a8545cee | check-selinux-mode >>>> | PASSED | localhost | localhost | | 0:00:01.546 | >>>> | 659017ae-b937-4ec7-9231-32f14be8c4e5 | check-network-gateway >>>> | FAILED | undercloud | No host matched | | >>>> | >>>> | 3c7c4299-2ce1-4717-8953-c616ffeee66a | undercloud-disk-space >>>> | FAILED | undercloud | No host matched | | >>>> | >>>> | 2f0239db-d530-48eb-b606-f82179e72e50 | >>>> undercloud-neutron-sanity-check | FAILED | undercloud | No host matched | >>>> | | >>>> | e9c5b3d3-6fb1-4e93-b7b8-d67bdd6273e9 | ctlplane-ip-range >>>> | FAILED | undercloud | No host matched | | >>>> | >>>> | a69badb6-9a08-41a1-b5d6-fc10b8046687 | dhcp-introspection >>>> | FAILED | undercloud | No host matched | | | >>>> | 9045a1f0-5aea-43d3-9157-56260d65e4dc | undercloud-tokenflush >>>> | FAILED | undercloud | No host matched | | >>>> | >>>> >>>> +--------------------------------------+---------------------------------+--------+------------+-----------------+-------------------+-------------+ >>>> >>>> >>>> Then I created the following inventory file: >>>> [Undercloud] >>>> undercloud >>>> >>>> Passed this command while running the pre-introspection command. >>>> It then executed successfully. 
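(A quick recap for anyone hitting the same "No host matched" failures: the ad-hoc workaround described above amounts to something like the following sketch. The file name and path are illustrative only, not necessarily the exact ones used in this thread.

    $ cat /home/stack/inventory.yaml
    [Undercloud]
    undercloud

    $ openstack tripleo validator run --group pre-introspection -i /home/stack/inventory.yaml

The same -i flag can also point at the generated tripleo-ansible-inventory.yaml under ~/tripleo-deploy/ once it exists, as suggested above.)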
>>>> >>>> >>>> But with Pre-deployment, it is still failing even after passing the >>>> inventory >>>> >>>> >>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>> | UUID | Validations >>>> | Status | Host_Group | Status_by_Host | Unreachable_Hosts | >>>> Duration | >>>> >>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>> | 917c669c-fd74-4d41-98d8-5442dbbd8ee1 | 512e >>>> | PASSED | localhost | localhost | | >>>> 0:00:00.504 | >>>> | c4ece97b-936d-4034-8e9c-6239bd0fef7a | dns >>>> | PASSED | localhost | localhost | | >>>> 0:00:00.481 | >>>> | 93611c13-49a2-4cae-ad87-099546459481 | service-status >>>> | PASSED | all | undercloud | | >>>> 0:00:06.942 | >>>> | 175ba815-e9cd-4b76-b637-c489f1df3bcd | validate-selinux >>>> | PASSED | all | undercloud | | >>>> 0:00:02.433 | >>>> | 917618cb-af29-4517-85e7-0d3a3627c105 | package-version >>>> | FAILED | all | undercloud | | >>>> 0:00:03.576 | >>>> | 70099d55-6a29-4b77-8b00-b54c677520cb | ceph-ansible-installed >>>> | PASSED | undercloud | undercloud | | >>>> 0:00:02.850 | >>>> | 1889dd61-387a-4efe-9559-effff6f2d22e | ceph-dependencies-installed >>>> | FAILED | allovercloud | No host matched | | >>>> | >>>> | 22f1764d-bb10-4bde-b72f-b714e6263f4b | tls-everywhere-pre-deployment >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:31.559 | >>>> | 26f0cbf1-3902-40c0-ac9d-a01884d653eb | undercloud-debug >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:02.057 | >>>> | dc7ecc45-02ce-48b7-8f1b-7f21ae4fabb8 | >>>> collect-flavors-and-verify-profiles | FAILED | undercloud | undercloud >>>> | | 0:00:00.884 | >>>> | 676bc7f4-f3a0-47c6-a106-2219ddf698b9 | undercloud-heat-purge-deleted >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:02.138 | >>>> | 3983efc6-ed81-4886-8170-09cfe41f1255 | undercloud-process-count >>>> | PASSED | undercloud | undercloud | | >>>> 0:00:06.164 | >>>> | 7b1a544b-ce56-4747-be20-d0681d16085a | default-node-count >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:00.934 | >>>> | 9167af1b-038c-4c68-afd1-f875218aceb4 | dhcp-provisioning >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:02.456 | >>>> | 38c99024-5932-4087-baf1-a8aae9a58d5c | ironic-boot-configuration >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:00.882 | >>>> | da1be072-df2c-483d-99f1-a4c1177c380e | network-environment >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:00.880 | >>>> | ed416ce8-8953-487f-bb35-6212a1b213d0 | node-disks >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:01.934 | >>>> | 80118738-dc3c-4751-82c1-403f0187f980 | switch-vlans >>>> | FAILED | undercloud | undercloud | | >>>> 0:00:01.931 | >>>> | f7dcf2fd-c090-4149-aae8-98fb8bbac8c7 | system-encoding >>>> | PASSED | all | undercloud | | >>>> 0:00:00.366 | >>>> >>>> +--------------------------------------+-------------------------------------+--------+--------------+-----------------+-------------------+-------------+ >>>> >>>> Also this step of passing the inventory file is not mentioned anywhere >>>> in the document. Is there anything I am missing? >>>> >>>> Regards >>>> Anirudh Gupta >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From helena at openstack.org Wed Oct 13 15:41:52 2021 From: helena at openstack.org (helena at openstack.org) Date: Wed, 13 Oct 2021 10:41:52 -0500 (CDT) Subject: [tc] [ptl] 2021 User Survey Project Specific Feedback Responses Message-ID: <1634139712.69166937@apps.rackspace.com> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2021 User Survey Project Specific Feedback Responses.csv Type: text/csv Size: 259676 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Wed Oct 13 16:02:43 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 13 Oct 2021 12:02:43 -0400 Subject: [cinder] festival of XS reviews 15 October 2021 Message-ID: Hello Cinder community members, This is a reminder that the most recent edition of the Cinder Festival of XS Reviews will be held at the end of this week on Friday 15 October. who: Everyone! what: The Cinder Festival of XS Reviews when: Friday 15 October 2021 from 1400-1600 UTC where: https://meetpad.opendev.org/cinder-festival-of-reviews This recurring meeting can be placed on your calendar by using this handy ICS file: http://eavesdrop.openstack.org/calendars/cinder-festival-of-reviews.ics See you there! brian From kennelson11 at gmail.com Wed Oct 13 16:05:56 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 13 Oct 2021 09:05:56 -0700 Subject: Learn how users are using OpenStack at OpenInfra Live Keynotes Message-ID: Hello Everyone, You might have heard that the OpenInfra Foundation is hosting its largest free virtual event of this year, OpenInfra Live: Keynotes. OpenInfra Live: Keynotes: https://openinfra.dev/live/keynotes Date and time: November 17-18 (1500-1700 UTC on each day) Register for free: https://www.eventbrite.com/e/openinfra-live-keynotes-tickets-169507530587 This two-day special episode is your best opportunity to meet the newest players in the OpenInfra space and hear about how open source projects, such as OpenStack and Kubernetes, are supporting OpenInfra use cases like hybrid cloud. You will also have the chance to deep dive into the OpenStack user survey results since the launch of the OpenStack User Survey in 2013. Here is a preview of the OpenStack user survey findings: - Over 300 OpenStack deployments were logged this year, including a significant number of new clouds: in the last 18 months, over 100 new OpenStack clouds have been built, growing the total number of cores under OpenStack management to more than 25,000,000. - Hybrid cloud scenarios continue to be popular, but over half of User Survey respondents indicated that the majority of their cloud infrastructure runs on OpenStack. Upgrades continue to be a challenge that the upstream community tackles with each additional release, but the User Survey shows the majority of organizations are running within the last seven releases. The full report of the OpenStack user survey will be distributed during the OpenInfra Live: Keynotes, so make sure you are registered for the event [2]. Can't make it to the event? Register anyway, and we will email you a link to the recording after the event! At OpenInfra Live Keynotes, you will also have the opportunity to - interact with leaders of open source projects like OpenStack and Kubernetes to hear how the projects are supporting OpenInfra use cases like hybrid cloud - gain insight into public cloud economics and the role open source technologies play - celebrate as we announce this year's Superuser Awards winner. 
This will be the one time everyone will be coming together this year. Come interact with the global OpenInfra community - Live! -Kendall Nelson (diablo_rojo) [1]: https://openinfra.dev/live/keynotes [2]: https://www.eventbrite.com/e/openinfra-live-keynotes-tickets-169507530587 -------------- next part -------------- An HTML attachment was scrubbed... URL: From midhunlaln66 at gmail.com Wed Oct 13 16:22:58 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Wed, 13 Oct 2021 21:52:58 +0530 Subject: Open stack Ansible 23.1.0.dev35 VM launching errors In-Reply-To: References: Message-ID: Hi Laurent, I resolved my issues and I can now create new VMs. Thanks for your help. Now I have some doubts about the different network types: VLAN, VXLAN, and flat networks. How do these networks help in OpenStack, and what is the use of each one? Could you please provide a detailed answer or suggest any documentation regarding these networks. On Fri, Oct 8, 2021, 8:26 PM Laurent Dumont wrote: > These are the nova-compute logs but I think it just catches the error from > the neutron component. Any logs from neutron-server, ovs-agent, > libvirt-agent? > > Can you share the "openstack network show NETWORK_ID_HERE" of the network > you are attaching the VM to? > > On Fri, Oct 8, 2021 at 9:53 AM Midhunlal Nb > wrote: > >> Hi, >> This is the log i am getting while launching a new vm >> >> 
>> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Failed to >> allocate network(s): nova.exception.VirtualInterfaceCreateException: >> Virtual Interface creation failed >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Traceback (most recent call last): >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7235, in _create_guest_with_network >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> post_xml_callback=post_xml_callback) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__ >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> next(self.gen) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 479, in wait_for_instance_event >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> actual_event = event.wait() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >> line 125, in wait >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> result = hub.switch() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >> line 313, in switch >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> return self.greenlet.switch() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> eventlet.timeout.Timeout: 300 seconds >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> During handling of the above exception, another exception occurred: >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Traceback (most recent call last): >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >> line 2397, in _build_and_run_instance >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> accel_info=accel_info) >> 2021-10-08 19:11:21.562 7324 >> ERROR 
nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 4200, in spawn >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> cleanup_instance_disks=created_disks) >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> File >> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> line 7258, in _create_guest_with_network >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> raise exception.VirtualInterfaceCreateException() >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> nova.exception.VirtualInterfaceCreateException: Virtual Interface creation >> failed >> 2021-10-08 19:11:21.562 7324 >> ERROR nova.compute.manager [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] >> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.566 7324 >> ERROR nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Build of instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >> network(s), not rescheduling.: nova.exception.BuildAbortException: Build of >> instance 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate >> the network(s), not rescheduling. >> Oct 08 19:11:21 ubuntu nova-compute[7324]: 2021-10-08 19:11:21.569 7324 >> INFO os_vif [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Successfully unplugged vif >> VIFBridge(active=False,address=fa:16:3e:d9:9b:c8,bridge_name='brqc130c00e-0e',has_traffic_filtering=True,id=94600cad-caec-4810-bf6a-b5b9f7a26553,network=Network(c130c00e-0ec1-47a3-9b17-cc3294b286bd),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap94600cad-ca') >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.658 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Took 1.09 seconds >> to deallocate network for instance. >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.789 7324 >> INFO nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Detaching volume >> 07041181-318b-4fae-b71e-02ac7b11bca3 >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.894 7324 >> ERROR nova.virt.block_device [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] [instance: 364564c2-bfa6-4354-a4da-a18a3fef43c3] Unable to call >> for a driver detach of volume 07041181-318b-4fae-b71e-02ac7b11bca3 due to >> the instance being registered to the remote host None.: >> nova.exception.BuildAbortException: Build of instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 aborted: Failed to allocate the >> network(s), not rescheduling. 
>> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.927 7324 >> ERROR nova.volume.cinder [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Delete attachment failed for attachment >> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. Error: Volume attachment could not be >> found with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. >> (HTTP 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) Code: >> 404: cinderclient.exceptions.NotFound: Volume attachment could not be found >> with filter: attachment_id = 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb. (HTTP >> 404) (Request-ID: req-a40b9e2c-74a1-4f14-9eda-70a4302ec9bd) >> Oct 08 19:11:22 ubuntu nova-compute[7324]: 2021-10-08 19:11:22.929 7324 >> WARNING nova.compute.manager [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Failed to detach volume: 07041181-318b-4fae-b71e-02ac7b11bca3 due >> to Volume attachment 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be >> found.: nova.exception.VolumeAttachmentNotFound: Volume attachment >> 95b1c9a4-ed58-4d22-bf99-2dd14808d9fb could not be found. >> Oct 08 19:11:23 ubuntu nova-compute[7324]: 2021-10-08 19:11:23.467 7324 >> INFO nova.scheduler.client.report [req-6e8938e0-3205-41bd-9a1d-b23c7347f6e0 >> 791cd0d24be84ce1ae6a4e3ce805e2ec 1c2192a5c417457ea4c76ed7a55bcb2a - default >> default] Deleted allocation for instance >> 364564c2-bfa6-4354-a4da-a18a3fef43c3 >> Oct 08 19:11:34 ubuntu nova-compute[7324]: 2021-10-08 19:11:34.955 7324 >> INFO nova.compute.manager [-] [instance: >> 364564c2-bfa6-4354-a4da-a18a3fef43c3] VM Stopped (Lifecycle Event) >> Oct 08 19:11:46 ubuntu nova-compute[7324]: 2021-10-08 19:11:46.028 7324 >> WARNING nova.virt.libvirt.imagecache [req-327f8ca8-a486-4240-b3f6-0b81 >> >> >> Thanks & Regards >> Midhunlal N B >> >> >> >> On Fri, Oct 8, 2021 at 6:44 PM Laurent Dumont >> wrote: >> >>> There are essentially two types of networks, vlan and vxlan, that can be >>> attached to a VM. Ideally, you want to look at the logs on the controllers >>> and the compute node. >>> >>> Openstack-ansible seems to send stuff here >>> https://docs.openstack.org/openstack-ansible/mitaka/install-guide/ops-logging.html#:~:text=Finding%20logs,at%20%2Fopenstack%2Flog%2F >>> . >>> >>> On Fri, Oct 8, 2021 at 9:05 AM Midhunlal Nb >>> wrote: >>> >>>> Hi Laurent, >>>> Thank you very much for your reply.we configured our network as per >>>> official document .Please take a look at below details. >>>> --->Controller node configured with below interfaces >>>> bond1,bond0,br-mgmt,br-vxlan,br-storage,br-vlan >>>> >>>> ---> Compute node >>>> bond1,bond0,br-mgmt,br-vxlan,br-storage >>>> >>>> I don't have much more experience in openstack,I think here we used >>>> vlan network. >>>> >>>> Thanks & Regards >>>> Midhunlal N B >>>> +918921245637 >>>> >>>> >>>> On Fri, Oct 8, 2021 at 6:19 PM Laurent Dumont >>>> wrote: >>>> >>>>> You will need to look at the neutron-server logs + the ovs/libviirt >>>>> agent logs on the compute. The error returned from the VM creation is not >>>>> useful most of the time. >>>>> >>>>> Was this a vxlan or vlan network? >>>>> >>>>> On Fri, Oct 8, 2021 at 8:45 AM Midhunlal Nb >>>>> wrote: >>>>> >>>>>> Hi team, >>>>>> -->Successfully I installed Openstack ansible 23.1.0.dev35. >>>>>> --->I logged in to horizon and created a new network and launched a >>>>>> vm but I am getting an error. 
>>>>>> >>>>>> Error: Failed to perform requested operation on instance "hope", the >>>>>> instance has an error status: Please try again later [Error: Build of >>>>>> instance b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate >>>>>> the network(s), not rescheduling.]. >>>>>> >>>>>> -->Then I checked log >>>>>> >>>>>> | fault | {'code': 500, 'created': >>>>>> '2021-10-08T12:26:44Z', 'message': 'Build of instance >>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>> network(s), not rescheduling.', 'details': 'Traceback (most recent call >>>>>> last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 7235, in _create_guest_with_network\n >>>>>> post_xml_callback=post_xml_callback)\n File >>>>>> "/usr/lib/python3.6/contextlib.py", line 88, in __exit__\n >>>>>> next(self.gen)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 479, in wait_for_instance_event\n actual_event = event.wait()\n >>>>>> File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/event.py", >>>>>> line 125, in wait\n result = hub.switch()\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/eventlet/hubs/hub.py", >>>>>> line 313, in switch\n return >>>>>> self.greenlet.switch()\neventlet.timeout.Timeout: 300 seconds\n\nDuring >>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>> (most recent call last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2397, in _build_and_run_instance\n accel_info=accel_info)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 4200, in spawn\n cleanup_instance_disks=created_disks)\n >>>>>> File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >>>>>> line 7258, in _create_guest_with_network\n raise >>>>>> exception.VirtualInterfaceCreateException()\nnova.exception.VirtualInterfaceCreateException: >>>>>> Virtual Interface creation failed\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2219, in _do_build_and_run_instance\n filter_properties, >>>>>> request_spec, accel_uuids)\n File >>>>>> "/openstack/venvs/nova-23.1.0.dev42/lib/python3.6/site-packages/nova/compute/manager.py", >>>>>> line 2458, in _build_and_run_instance\n >>>>>> reason=msg)\nnova.exception.BuildAbortException: Build of instance >>>>>> b6f42229-06d6-4365-8a20-07df869f1610 aborted: Failed to allocate the >>>>>> network(s), not rescheduling.\n'} | >>>>>> >>>>>> Please help me with this error. >>>>>> >>>>>> >>>>>> Thanks & Regards >>>>>> Midhunlal N B >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abraden at verisign.com Wed Oct 13 17:25:25 2021 From: abraden at verisign.com (Braden, Albert) Date: Wed, 13 Oct 2021 17:25:25 +0000 Subject: [Designate] After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Message-ID: After enabling redis, and allowing TCP 6379 and 26379, I see it in /etc/designate/designate.conf in the designate_producer container: backend_url = redis://admin:@10.221.176.48:26379?sentinel=kolla&sentinel_fallback=10.221.176.173:26379&sentinel_fallback=10.221.177.38:26379&db=0&socket_timeout=60&retry_on_timeout=yes And I can get to ports 6379 and 26379 with nc: (designate-producer)[root at dva3-ctrl3 /]# nc 10.221.176.173 26379 / -ERR unknown command `/`, with args beginning with: But I still see the DB error when TF rebuilds a VM: 2021-10-13 15:35:23.941 26 ERROR oslo_messaging.notify.dispatcher designate.exceptions.DuplicateRecord: Duplicate Record What am I missing? -----Original Message----- From: Michael Johnson Sent: Tuesday, October 12, 2021 11:33 AM To: Braden, Albert Cc: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: Re: Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. I don't have a good answer for you on that as it pre-dates my history with Designate a bit. I suspect it has to do with the removal of the pool-manager and the restructuring of the controller code. Maybe someone else on the discuss list has more insight. Michael On Tue, Oct 12, 2021 at 5:47 AM Braden, Albert wrote: > > Thank you Michael, this is very helpful. Do you have any insight into why we don't experience this in Queens clusters? We aren't running a lock manager there either, and I haven't been able to duplicate the problem there. > > -----Original Message----- > From: Michael Johnson > Sent: Monday, October 11, 2021 4:24 PM > To: Braden, Albert > Cc: openstack-discuss at lists.openstack.org > Subject: [EXTERNAL] Re: Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. > > You will need one of the Tooz supported distributed lock managers: > Consul, Memcacded, Redis, or zookeeper. > > Michael > > On Mon, Oct 11, 2021 at 11:57 AM Braden, Albert wrote: > > > > After investigating further, I realized that we're not running redis, and I think that means that redis_connection_string doesn't get set. Does this mean that we must run redis, or is there a workaround? > > > > -----Original Message----- > > From: Braden, Albert > > Sent: Monday, October 11, 2021 2:48 PM > > To: 'johnsomor at gmail.com' > > Cc: 'openstack-discuss at lists.openstack.org' > > Subject: RE: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > > > I think so. 
I see this: > > > > ansible/roles/designate/templates/designate.conf.j2:backend_url = {{ redis_connection_string }} > > > > ansible/group_vars/all.yml:redis_connection_string: "redis://{% for host in groups['redis'] %}{% if host == groups['redis'][0] %}admin:{{ redis_master_password }}@{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}?sentinel=kolla{% else %}&sentinel_fallback={{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ redis_sentinel_port }}{% endif %}{% endfor %}&db=0&socket_timeout=60&retry_on_timeout=yes" > > > > Did anything with the distributed lock manager between Queens and Train? > > > > -----Original Message----- > > From: Michael Johnson > > Sent: Monday, October 11, 2021 1:15 PM > > To: Braden, Albert > > Cc: openstack-discuss at lists.openstack.org > > Subject: [EXTERNAL] Re: After rebuilding Queens clusters on Train, race condition causes Designate record creation to fail > > > > Caution: This email originated from outside the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. > > > > Hi Albert, > > > > Have you configured your distributed lock manager for Designate? > > > > [coordination] > > backend_url = > > > > Michael > > > > On Fri, Oct 8, 2021 at 7:38 PM Braden, Albert wrote: > > > > > > Hello everyone. It?s great to be back working on OpenStack again. I?m at Verisign now. I can hardly describe how happy I am to have an employer that does not attach nonsense to the bottom of my emails! > > > > > > > > > > > > We are rebuilding our clusters from Queens to Train. On the new Train clusters, customers are complaining that deleting a VM and then immediately creating a new one with the same name (via Terraform for example) intermittently results in a missing DNS record. We can duplicate the issue by building a VM with terraform, tainting it, and applying. > > > > > > > > > > > > Before applying the change, we see the DNS record in the recordset: > > > > > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > > > > > $ > > > > > > > > > > > > and we can pull it from the DNS server on the controllers: > > > > > > > > > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > > > openstack-terra-test-host.dev-ostck.dva3.vrsn.com. 1 IN A 10.220.4.89 > > > > > > > > > > > > After applying the change, we don?t see it: > > > > > > > > > > > > $ openstack recordset list dva3.vrsn.com. --all |grep openstack-terra > > > > > > | f9aa73c1-84ba-4854-be71-cbb616de672c | 8d1c84082a044a53abe0d519ed9e8c60 | openstack-terra-test-host.dev-ostck.dva3.vrsn.com. | A | 10.220.4.89 | ACTIVE | NONE | > > > > > > $ > > > > > > $ for i in {1..3}; do dig @dva3-ctrl${i}.cloud.vrsn.com -t axfr dva3.vrsn.com. |grep openstack-terra; done > > > > > > $ openstack recordset list dva3.vrsn.com. 
--all |grep openstack-terra > > > > > > $ > > > > > > > > > > > > We see this in the logs: > > > > > > > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'c70e693b4c47402db088c43a5a177134-openstack-terra-test-host.de...' for key 'unique_recordset'") > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [SQL: INSERT INTO recordsets (id, version, created_at, zone_shard, tenant_id, zone_id, name, type, ttl, reverse_name) VALUES (%(id)s, %(version)s, %(created_at)s, %(zone_shard)s, %(tenant_id)s, %(zone_id)s, %(name)s, %(type)s, %(ttl)s, %(reverse_name)s)] > > > > > > 2021-10-09 01:53:44.307 27 ERROR oslo_messaging.notify.dispatcher [parameters: {'id': 'dbbb904c347241a791aa01ca33a87b23', 'version': 1, 'created_at': datetime.datetime(2021, 10, 9, 1, 53, 44, 182652), 'zone_shard': 3184, 'tenant_id': '8d1c84082a044a53abe0d519ed9e8c60', 'zone_id': 'c70e693b4c47402db088c43a5a177134', 'name': 'openstack-terra-test-host.dev-ostck.dva3.vrsn.com.', 'type': 'A', 'ttl': None, 'reverse_name': '.moc.nsrv.3avd.kctso-ved.tsoh-tset-arret-kcatsnepo'}] > > > > > > > > > > > > It appears that Designate is trying to create the new record before the deletion of the old one finishes. > > > > > > > > > > > > Is anyone else seeing this on Train? The same set of actions doesn?t cause this error in Queens. Do we need to change something in our Designate config, to make it wait until the old records are finished deleting before attempting to create the new ones? From zyklenfrei at gmail.com Wed Oct 13 17:43:29 2021 From: zyklenfrei at gmail.com (Manuel Holtgrewe) Date: Wed, 13 Oct 2021 19:43:29 +0200 Subject: [kayobe][kolla][ironic] kayobe overcloud provision fails because ironic compute hosts use their inspection DHCP pool IPs Message-ID: Dear list, I am experimenting with kayobe to deploy a test installation of OpenStack wallaby. You can find my configuration here: https://github.com/openstack/kayobe-config/compare/stable/wallaby...holtgrewe:my-wallaby?expand=1 I am following the kayobe documentation and have successfully setup a controller and a seed node. I am at the point where I have the nodes configured and they show up in bifrost baremetal node list. I can control them via IPMI/iDRAC/RedFish and boot them into the IPA image and the nodes can be inspected and actually go into the "manageable" status. kayobe is capable of using the inspection results and assigning the root device, so far, so good. I don't know whether my network configuration is good. I want to pin the IPs of stack-1 to stack-4 and the names resolve the correct IP addresses throughout my network. Below are some more details. In summary, I have trouble because `kayobe overcloud provision` makes my 4 overcloud bare metal host boot into IPA with DHCP enabled and they get the same IPs assigned that were given to them earlier in inspection. This means that the overcloud provision command cannot SSH into the nodes because it knows them by the wrong IPs. I must be really missing something here. What is it? Below are more details. Here is what kayobe pulled from the bifrost inspection (I believe). 
# cat etc/kayobe/inventory/overcloud [controllers] stack-1 ipmi_address=172.16.66.41 bmc_type=idrac stack-2 ipmi_address=172.16.66.42 bmc_type=idrac stack-3 ipmi_address=172.16.66.43 bmc_type=idrac stack-4 ipmi_address=172.16.66.44 bmc_type=idrac The IPs are also fixed here # etc/kayobe/network-allocation.yml compute_net_ips: stack-1: 172.16.32.11 stack-2: 172.16.32.12 stack-3: 172.16.32.13 stack-4: 172.16.32.14 stack-seed: 172.16.32.6 However, I thought I had to provide allocation ranges for DHCP for getting introspection to work. Thus, I have the following # etc/kayobe/networks.yml compute_net_cidr: 172.16.32.0/19 compute_net_gateway: 172.16.32.1 compute_net_vip_address: 172.16.32.2 compute_net_allocation_pool_start: 172.16.32.101 compute_net_allocation_pool_end: 172.16.32.200 compute_net_inspection_allocation_pool_start: 172.16.32.201 compute_net_inspection_allocation_pool_end: 172.16.32.250 This leads to the following dnsmasq leases in the bifrost host. # cat /var/lib/dnsmasq/dnsmasq.leases 1634187260 REDACTED 172.16.32.215 * REDACTED 1634187271 REDACTED 172.16.32.243 * REDACTED 1634187257 REDACTED 172.16.32.207 * REDACTED 1634187258 REDACTED 172.16.32.218 * REDACTED What am I missing? Best wishes, Manuel From johnsomor at gmail.com Wed Oct 13 18:07:28 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 13 Oct 2021 11:07:28 -0700 Subject: [tc] [ptl] 2021 User Survey Project Specific Feedback Responses In-Reply-To: <1634139712.69166937@apps.rackspace.com> References: <1634139712.69166937@apps.rackspace.com> Message-ID: Helena, Thank you, this is super helpful and perfect timing for the PTG. Michael On Wed, Oct 13, 2021 at 9:16 AM helena at openstack.org wrote: > > Hi everyone, > > > > Ahead of the PTG next week, I wanted to share the responses we received from the TC, PTL, and SIG submitted questions in the OpenStack User Survey. > > > > If there are duplicate responses, it is because of multiple deployments submitted by the same person. > > > > If your team would like to change your question or responses for the 2022 User Survey or you have any questions about the 2021 responses, please email community at openinfra.dev. > > > > Cheers, > > Helena > > __________________________________ > Marketing & Community Associate > The Open Infrastructure Foundation > Helena at openinfra.dev From franck.vedel at univ-grenoble-alpes.fr Wed Oct 13 19:01:30 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Wed, 13 Oct 2021 21:01:30 +0200 Subject: =?utf-8?Q?Re=3A_Probl=C3=A8me_with_image_from_snapshot?= In-Reply-To: <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> Message-ID: Hi Dominic, and thanks a lot for your help. > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Yes yes, i did that, sys prep ? generalize > Regarding OpenStack, could you tell us what glance and cinder drivers you use? i?m not sure? for cinder: LVM on a iscsi bay > Have you done other volume to image before? No, and it?s a good idea to test with a cirros instance. I will try tomorrow. > Have you verified that the image finishes creating before trying to create a VM from it? Yes > I'm not sure that snapshotting before creating an image is necessary. 
It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I just tried with an instance off? same problem, sam error message (Block Device Mapping is Invalid) > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thanks a lot !! Really !! Franck VEDEL D?p. R?seaux Informatiques & T?l?coms IUT1 - Univ GRENOBLE Alpes 0476824462 Stages, Alternance, Emploi. http://www.rtgrenoble.fr > Le 13 oct. 2021 ? 17:16, a ?crit : > > Franck; > > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? > > Regarding OpenStack, could you tell us what glance and cinder drivers you use? > > Have you done other volume to image before? > > Have you verified that the image finishes creating before trying to create a VM from it? > > I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. > > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President ? Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] > Sent: Wednesday, October 13, 2021 12:58 AM > To: openstack-discuss > Subject: Probl?me with image from snapshot > > Hello and first sorry for my english? thanks google. > > Something is wrong with what I want to do: > I use Wallaby, it works very well (apart from VpnaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). > > Here is what I want to do and which does not work as I want: > - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. > I wanted to take a snapshot of this instance and then create a new image from this snapshot. And that users use this new image. > I create the snapshot, I place the "--public" parameter on the new image. > I'm trying to create a new instance from this snapshot with the admin account: it works. > I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: > > Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) > > Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? > > Thanks if you have ideas for helping me > > > Franck VEDEL > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From DHilsbos at performair.com Wed Oct 13 20:06:48 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Wed, 13 Oct 2021 20:06:48 +0000 Subject: =?utf-8?B?UkU6IFByb2Jsw6htZSB3aXRoIGltYWdlIGZyb20gc25hcHNob3Q=?= In-Reply-To: References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> Message-ID: <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> Franck; What version of OpenStack are you running? Are you the cluster administrator, or a user of the cluster? I?m running Victoria, all tips below assume that major version. Can you create an image backed volume outside of the instance creation process? Do you have access to the systems running the cluster, can you review logs on the controller computers? You?re looking for the logs from the glance and cinder services. Glance?s logs should be somewhere like /var/log/glance/. I only have api.log for glance. Cinder?s should be somewhere like /var/log/cinder/. I have api.log, backup.log, scheduler.log, and volume.log. You should also check your glance and cinder configurations. They will be at /etc/glance/glance-api.conf and /etc/cinder/cinder.conf. In the glance configuration, you?re looking for the enabled_backends line in the [DEFAULT] section. If I remember correctly, it?s values has the form :. The type is the interesting part. Cinder is a little more difficult. You?re still going to be looking for an enabled_backends line, in the [DEFAULT] section, but it?s value is just a name (enabled_backends = ). You need to locate a configuration section which matches the name ([]). You?ll then be looking for a volume_driver line. Based on you response, I suspect this will be: volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver. I believe the logs will be critical to diagnosing this issue. I suspect you?ll find the error in the cinder volume.log, though it might also be in scheduler.log, or even in the glance.log. Thank you, Dominic L. Hilsbos, MBA Vice President ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] Sent: Wednesday, October 13, 2021 12:02 PM To: Dominic Hilsbos Cc: openstack-discuss at lists.openstack.org Subject: Re: Probl?me with image from snapshot Hi Dominic, and thanks a lot for your help. I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Yes yes, i did that, sys prep ? generalize Regarding OpenStack, could you tell us what glance and cinder drivers you use? i?m not sure? for cinder: LVM on a iscsi bay Have you done other volume to image before? No, and it?s a good idea to test with a cirros instance. I will try tomorrow. Have you verified that the image finishes creating before trying to create a VM from it? Yes I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I just tried with an instance off? same problem, sam error message (Block Device Mapping is Invalid) I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thanks a lot !! Really !! 
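(For reference, the snapshot can be skipped entirely by uploading the detached volume straight to a Glance image from the CLI. A rough sketch only -- the instance, volume and image names are placeholders, and the volume must be in the Available state first, e.g. after deleting the instance while keeping its root volume:

    openstack server stop win10-template
    openstack server delete win10-template    # volume is kept if delete_on_termination is false
    openstack image create --volume win10-template-vol --disk-format qcow2 win10-image
    openstack image set --public win10-image

An image produced this way carries no block_device_mapping pointing at a snapshot, so it should not trigger the "Block Device Mapping is Invalid: failed to get snapshot" error that other projects hit with the snapshot-based image.)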
Franck VEDEL D?p. R?seaux Informatiques & T?l?coms IUT1 - Univ GRENOBLE Alpes 0476824462 Stages, Alternance, Emploi. http://www.rtgrenoble.fr Le 13 oct. 2021 ? 17:16, > > a ?crit : Franck; I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Regarding OpenStack, could you tell us what glance and cinder drivers you use? Have you done other volume to image before? Have you verified that the image finishes creating before trying to create a VM from it? I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thank you, Dominic L. Hilsbos, MBA Vice President ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] Sent: Wednesday, October 13, 2021 12:58 AM To: openstack-discuss Subject: Probl?me with image from snapshot Hello and first sorry for my english? thanks google. Something is wrong with what I want to do: I use Wallaby, it works very well (apart from VpnaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). Here is what I want to do and which does not work as I want: - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. I wanted to take a snapshot of this instance and then create a new image from this snapshot. And that users use this new image. I create the snapshot, I place the "--public" parameter on the new image. I'm trying to create a new instance from this snapshot with the admin account: it works. I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? Thanks if you have ideas for helping me Franck VEDEL -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 13 22:09:24 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 13 Oct 2021 17:09:24 -0500 Subject: [tc] [ptl] 2021 User Survey Project Specific Feedback Responses In-Reply-To: <1634139712.69166937@apps.rackspace.com> References: <1634139712.69166937@apps.rackspace.com> Message-ID: <17c7bb3f929.12b32d93a990032.2271857199416779661@ghanshyammann.com> ---- On Wed, 13 Oct 2021 10:41:52 -0500 wrote ---- > Hi everyone, > > Ahead of the PTG next week, I wanted to share the responses we received from the TC, PTL, and SIG submitted questions in the OpenStack User Survey. > > If there are duplicate responses, it is because of multiple deployments submitted by the same person. > > If your team would like to change your question or responses for the 2022 User Survey or you have any questions about the 2021 responses, please email community at openinfra.dev. 
Thanks Helena for sharing it. I have added it in TC PTG etehrpad to discuss it in PTG and plan for next step on TC questions feedback. -gmann > > Cheers, > Helena > __________________________________ > Marketing & Community Associate > The Open Infrastructure Foundation > Helena at openinfra.dev > From gmann at ghanshyammann.com Wed Oct 13 22:10:57 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 13 Oct 2021 17:10:57 -0500 Subject: [all][tc] Technical Committee next weekly meeting on Oct 14th at 1500 UTC In-Reply-To: <17c6ffde3e7.116bfff55951568.8663051282986901805@ghanshyammann.com> References: <17c6ffde3e7.116bfff55951568.8663051282986901805@ghanshyammann.com> Message-ID: <17c7bb563eb.e90af475990047.376130421142220775@ghanshyammann.com> Hello Everyone, Below is the agenda for Tomorrow's TC meeting schedule at 1500 UTC in #openstack-tc IRC channel. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Project Health checks framework ** https://etherpad.opendev.org/p/health_check ** https://review.opendev.org/c/openstack/governance/+/810037 * Stable team process change ** https://review.opendev.org/c/openstack/governance/+/810721 * Technical Writing (doc) SIG need a chair and more maintainers ** Current Chair (only maintainer in this SIG) Stephen Finucane will not continue it in the next cycle(Yoga) ** http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025161.html * Place to maintain the external hosted ELK, E-R, O-H services ** https://etherpad.opendev.org/p/elk-service-maintenance-plan * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 11 Oct 2021 10:34:42 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for Oct 14th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, Oct 13th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From gouthampravi at gmail.com Wed Oct 13 22:56:51 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 13 Oct 2021 15:56:51 -0700 Subject: [tc] [ptl] 2021 User Survey Project Specific Feedback Responses In-Reply-To: <1634139712.69166937@apps.rackspace.com> References: <1634139712.69166937@apps.rackspace.com> Message-ID: On Wed, Oct 13, 2021 at 9:19 AM helena at openstack.org wrote: > Hi everyone, > > > > Ahead of the PTG next week, I wanted to share the responses we received > from the TC, PTL, and SIG submitted questions in the OpenStack User Survey. > ++ Thank you Helena! :) > > > If there are duplicate responses, it is because of multiple deployments > submitted by the same person. > > > If your team would like to change your question or responses for the 2022 > User Survey or you have any questions about the 2021 responses, please > email community at openinfra.dev. > > > > Cheers, > > Helena > > __________________________________ > Marketing & Community Associate > The Open Infrastructure Foundation > Helena at openinfra.dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From midhunlaln66 at gmail.com Thu Oct 14 03:01:54 2021 From: midhunlaln66 at gmail.com (Midhunlal Nb) Date: Thu, 14 Oct 2021 08:31:54 +0530 Subject: Networks in openstack Message-ID: Hi Team, I have some doubt in different network types Vlan,Vxlan and flat networks. How these networks helps in openstack.What is the use of each network? Could you Please provide me a detailed answer or suggest me any document regarding this networks. Show quoted text -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Thu Oct 14 06:28:50 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Thu, 14 Oct 2021 08:28:50 +0200 Subject: =?utf-8?Q?Re=3A_Probl=C3=A8me_with_image_from_snapshot?= In-Reply-To: <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> Message-ID: <79DEE6DE-47E1-4618-8B26-D4CC1C3EC0F2@univ-grenoble-alpes.fr> Yes, i?m the cluster admin. My cluster is based on Centos Stream / Kolla-ansible / Wallaby. You?re right, I need to check all the logs. (/var/log/kolla/cinder for example for me) Or check in containers?. But before, I'am not sure what I am trying to do is possible, and since I am not sure of my explanations (in English), it is difficult to make myself fully understood about the problem. Thank you very much for your help Franck VEDEL > Le 13 oct. 2021 ? 22:06, DHilsbos at performair.com a ?crit : > > Franck; > > What version of OpenStack are you running? Are you the cluster administrator, or a user of the cluster? > > I?m running Victoria, all tips below assume that major version. > > Can you create an image backed volume outside of the instance creation process? > > Do you have access to the systems running the cluster, can you review logs on the controller computers? You?re looking for the logs from the glance and cinder services. Glance?s logs should be somewhere like /var/log/glance/. I only have api.log for glance. Cinder?s should be somewhere like /var/log/cinder/. I have api.log, backup.log, scheduler.log, and volume.log. > > You should also check your glance and cinder configurations. They will be at /etc/glance/glance-api.conf and /etc/cinder/cinder.conf. > In the glance configuration, you?re looking for the enabled_backends line in the [DEFAULT] section. If I remember correctly, it?s values has the form :. The type is the interesting part. > Cinder is a little more difficult. You?re still going to be looking for an enabled_backends line, in the [DEFAULT] section, but it?s value is just a name (enabled_backends = ). You need to locate a configuration section which matches the name ([]). You?ll then be looking for a volume_driver line. Based on you response, I suspect this will be: volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver. > > I believe the logs will be critical to diagnosing this issue. I suspect you?ll find the error in the cinder volume.log, though it might also be in scheduler.log, or even in the glance.log. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President ? Information Technology > Perform Air International Inc. 
> DHilsbos at PerformAir.com > www.PerformAir.com > > > From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] > Sent: Wednesday, October 13, 2021 12:02 PM > To: Dominic Hilsbos > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Probl?me with image from snapshot > > Hi Dominic, and thanks a lot for your help. > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? > Yes yes, i did that, sys prep ? generalize > > Regarding OpenStack, could you tell us what glance and cinder drivers you use? > i?m not sure? for cinder: LVM on a iscsi bay > > Have you done other volume to image before? > No, and it?s a good idea to test with a cirros instance. I will try tomorrow. > > Have you verified that the image finishes creating before trying to create a VM from it? > Yes > > I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. > I just tried with an instance off? same problem, sam error message (Block Device Mapping is Invalid) > > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. > Thanks a lot !! Really !! > > Franck VEDEL > D?p. R?seaux Informatiques & T?l?coms > IUT1 - Univ GRENOBLE Alpes > 0476824462 > Stages, Alternance, Emploi. > http://www.rtgrenoble.fr > > > Le 13 oct. 2021 ? 17:16, > > a ?crit : > > Franck; > > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? > > Regarding OpenStack, could you tell us what glance and cinder drivers you use? > > Have you done other volume to image before? > > Have you verified that the image finishes creating before trying to create a VM from it? > > I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. > > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President ? Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] > Sent: Wednesday, October 13, 2021 12:58 AM > To: openstack-discuss > Subject: Probl?me with image from snapshot > > Hello and first sorry for my english? thanks google. > > Something is wrong with what I want to do: > I use Wallaby, it works very well (apart from VpnaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). > > Here is what I want to do and which does not work as I want: > - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. > I wanted to take a snapshot of this instance and then create a new image from this snapshot. And that users use this new image. 
> I create the snapshot, I place the "--public" parameter on the new image. > I'm trying to create a new instance from this snapshot with the admin account: it works. > I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: > > Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) > > Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? > > Thanks if you have ideas for helping me > > > Franck VEDEL > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Thu Oct 14 07:14:29 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Thu, 14 Oct 2021 12:44:29 +0530 Subject: [tripleo] Unable to deploy Overcloud Nodes In-Reply-To: References: Message-ID: On Wed, Oct 13, 2021 at 9:28 PM Anirudh Gupta wrote: > Hi Team, > > As per the link below, While executing the command to deploy the overcloud > nodes > > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud > > I am executing the command > > - openstack overcloud deploy --templates > > On running this command, I am getting the following error > > > *ERROR: (pymysql.err.OperationalError) (1045, "Access denied for user > 'heat'@'10.255.255.4' (using password: YES)")(Background on this error at: > http://sqlalche.me/e/e3q8 )* > Sounds like you already have an existing heat mysql database and heat user from a previous deployment and probably installed heat in the undercloud. You need to upgrade the undercloud that will remove installed heat, drop the heat database and remove the heat user. > > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud [-] Exception occured > while running the command: subprocess.CalledProcessError: Command '['sudo', > 'podman', 'run', '--rm', '--user', 'heat', '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', > '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', > 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', > 'db_sync']' returned non-zero exit status 1. 
> 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud Traceback (most recent > call last): > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 34, in run > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud super(Command, > self).run(parsed_args) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in > run > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return super(Command, > self).run(parsed_args) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/cliff/command.py", line 185, in run > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud return_code = > self.take_action(parsed_args) or 0 > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 1277, in take_action > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > self.setup_ephemeral_heat(parsed_args) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_deploy.py", > line 767, in setup_ephemeral_heat > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > utils.launch_heat(self.heat_launcher, restore_db=restore_db) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 2706, in > launch_heat > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > launcher.heat_db_sync(restore_db) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib/python3.6/site-packages/tripleoclient/heat_launcher.py", line > 530, in heat_db_sync > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > subprocess.check_call(cmd) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud File > "/usr/lib64/python3.6/subprocess.py", line 311, in check_call > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud raise > CalledProcessError(retcode, cmd) > 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > subprocess.CalledProcessError: Command '['sudo', 'podman', 'run', '--rm', > '--user', 'heat', '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', > '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', > 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', > 'db_sync']' returned non-zero exit status 1. 
> 2021-10-13 05:46:28.390 183680 ERROR > tripleoclient.v1.overcloud_deploy.DeployOvercloud > 2021-10-13 05:46:28.400 183680 ERROR openstack [-] Command '['sudo', > 'podman', 'run', '--rm', '--user', 'heat', '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', > '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', > 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', > 'db_sync']' returned non-zero exit status 1.: > subprocess.CalledProcessError: Command '['sudo', 'podman', 'run', '--rm', > '--user', 'heat', '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher/heat.conf:/etc/heat/heat.conf:z', > '--volume', > '/home/stack/overcloud-deploy/overcloud/heat-launcher:/home/stack/overcloud-deploy/overcloud/heat-launcher:z', > 'localhost/tripleo/openstack-heat-api:ephemeral', 'heat-manage', > 'db_sync']' returned non-zero exit status 1. > 2021-10-13 05:46:28.401 183680 INFO osc_lib.shell [-] END return value: 1 > > > Can someone please help in resolving this issue. > Are there any parameters, templates that need to be passed in order to > make it work. > > Regards > Anirudh Gupta > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Oct 14 07:37:33 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 14 Oct 2021 09:37:33 +0200 Subject: Networks in openstack In-Reply-To: References: Message-ID: <2745324.atdPhlSkOF@p1> Hi, On czwartek, 14 pa?dziernika 2021 05:01:54 CEST Midhunlal Nb wrote: > Hi Team, > > I have some doubt in different network types > > Vlan,Vxlan and flat networks. > > How these networks helps in openstack.What is the use of each network? > > Could you Please provide me a detailed answer or suggest me any document > regarding this networks. > Show quoted text Generally vlan and flat networks are "provider" network types while vxlan is tunnel network and don't require any configuration from Your provider/DC. Provider networks can be also "external" networks so can provide access to the "internet" for Your cloud. Tunnel networks are isolated and can't have direct access to the external world. See https://assafmuller.com/2018/07/23/tenant-provider-and-external-neutron-networks/ where Assaf explained different types of networks pretty well. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From claus.r at mnet-mail.de Thu Oct 14 08:39:11 2021 From: claus.r at mnet-mail.de (claus.r) Date: Thu, 14 Oct 2021 10:39:11 +0200 Subject: Networks in openstack In-Reply-To: <2745324.atdPhlSkOF@p1> References: <2745324.atdPhlSkOF@p1> Message-ID: <7466d808-a265-2aa6-6d0c-a96cdc9f8c9e@mnet-mail.de> Is it possible to have vxlan also for external Network? Am 14.10.21 um 09:37 schrieb Slawek Kaplonski: > Hi, > > On czwartek, 14 pa?dziernika 2021 05:01:54 CEST Midhunlal Nb wrote: >> Hi Team, >> >> I have some doubt in different network types >> >> Vlan,Vxlan and flat networks. >> >> How these networks helps in openstack.What is the use of each network? >> >> Could you Please provide me a detailed answer or suggest me any document >> regarding this networks. 
>> Show quoted text > Generally vlan and flat networks are "provider" network types while vxlan is > tunnel network and don't require any configuration from Your provider/DC. > Provider networks can be also "external" networks so can provide access to the > "internet" for Your cloud. Tunnel networks are isolated and can't have direct > access to the external world. > > See https://assafmuller.com/2018/07/23/tenant-provider-and-external-neutron-networks/ where Assaf explained different types of networks pretty well. > From mark at stackhpc.com Thu Oct 14 09:59:22 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 14 Oct 2021 10:59:22 +0100 Subject: [kayobe][kolla][ironic] kayobe overcloud provision fails because ironic compute hosts use their inspection DHCP pool IPs In-Reply-To: References: Message-ID: On Wed, 13 Oct 2021 at 18:49, Manuel Holtgrewe wrote: > > Dear list, > > I am experimenting with kayobe to deploy a test installation of > OpenStack wallaby. You can find my configuration here: > > https://github.com/openstack/kayobe-config/compare/stable/wallaby...holtgrewe:my-wallaby?expand=1 > > I am following the kayobe documentation and have successfully setup a > controller and a seed node. > > I am at the point where I have the nodes configured and they show up > in bifrost baremetal node list. I can control them via > IPMI/iDRAC/RedFish and boot them into the IPA image and the nodes can > be inspected and actually go into the "manageable" status. kayobe is > capable of using the inspection results and assigning the root device, > so far, so good. > > I don't know whether my network configuration is good. I want to pin > the IPs of stack-1 to stack-4 and the names resolve the correct IP > addresses throughout my network. > > Below are some more details. In summary, I have trouble because > `kayobe overcloud provision` makes my 4 overcloud bare metal host boot > into IPA with DHCP enabled and they get the same IPs assigned that > were given to them earlier in inspection. This means that the > overcloud provision command cannot SSH into the nodes because it knows > them by the wrong IPs. > > I must be really missing something here. What is it? Hi Manuel. Bifrost will assign IPs from its IP address pool to the machines during inspection and provisioning. IPA will use these addresses. Once provisioning is complete, the machines should boot up into a CentOS image, using the IPs you have allocated. These are statically configured via a configdrive, which is installed during provisioning. If the node stays running IPA, then something is going wrong with provisioning. Mark > > Below are more details. > > Here is what kayobe pulled from the bifrost inspection (I believe). > > # cat etc/kayobe/inventory/overcloud > [controllers] > stack-1 ipmi_address=172.16.66.41 bmc_type=idrac > stack-2 ipmi_address=172.16.66.42 bmc_type=idrac > stack-3 ipmi_address=172.16.66.43 bmc_type=idrac > stack-4 ipmi_address=172.16.66.44 bmc_type=idrac > > The IPs are also fixed here > > # etc/kayobe/network-allocation.yml > compute_net_ips: > stack-1: 172.16.32.11 > stack-2: 172.16.32.12 > stack-3: 172.16.32.13 > stack-4: 172.16.32.14 > stack-seed: 172.16.32.6 > > However, I thought I had to provide allocation ranges for DHCP for > getting introspection to work. 
> > Thus, I have the following > > # etc/kayobe/networks.yml > compute_net_cidr: 172.16.32.0/19 > compute_net_gateway: 172.16.32.1 > compute_net_vip_address: 172.16.32.2 > compute_net_allocation_pool_start: 172.16.32.101 > compute_net_allocation_pool_end: 172.16.32.200 > compute_net_inspection_allocation_pool_start: 172.16.32.201 > compute_net_inspection_allocation_pool_end: 172.16.32.250 > > This leads to the following dnsmasq leases in the bifrost host. > > # cat /var/lib/dnsmasq/dnsmasq.leases > 1634187260 REDACTED 172.16.32.215 * REDACTED > 1634187271 REDACTED 172.16.32.243 * REDACTED > 1634187257 REDACTED 172.16.32.207 * REDACTED > 1634187258 REDACTED 172.16.32.218 * REDACTED > > What am I missing? > > Best wishes, > Manuel > From skaplons at redhat.com Thu Oct 14 10:52:00 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 14 Oct 2021 12:52:00 +0200 Subject: Networks in openstack In-Reply-To: <7466d808-a265-2aa6-6d0c-a96cdc9f8c9e@mnet-mail.de> References: <2745324.atdPhlSkOF@p1> <7466d808-a265-2aa6-6d0c-a96cdc9f8c9e@mnet-mail.de> Message-ID: <2698745.TLkxdtWsSY@p1> Hi, On czwartek, 14 pa?dziernika 2021 10:39:11 CEST claus.r wrote: > Is it possible to have vxlan also for external Network? >From API PoV You can set router:external = True for any type of network so yes, it's doable. > > Am 14.10.21 um 09:37 schrieb Slawek Kaplonski: > > Hi, > > > > On czwartek, 14 pa?dziernika 2021 05:01:54 CEST Midhunlal Nb wrote: > >> Hi Team, > >> > >> I have some doubt in different network types > >> > >> Vlan,Vxlan and flat networks. > >> > >> How these networks helps in openstack.What is the use of each network? > >> > >> Could you Please provide me a detailed answer or suggest me any document > >> regarding this networks. > >> Show quoted text > > > > Generally vlan and flat networks are "provider" network types while vxlan is > > tunnel network and don't require any configuration from Your provider/DC. > > Provider networks can be also "external" networks so can provide access to > > the "internet" for Your cloud. Tunnel networks are isolated and can't have > > direct access to the external world. > > > > See > > https://assafmuller.com/2018/07/23/tenant-provider-and-external-neutron-net > > works/ where Assaf explained different types of networks pretty well. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From elod.illes at est.tech Thu Oct 14 13:42:28 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 14 Oct 2021 15:42:28 +0200 Subject: [oslo] Propose to EOL stable/queens, stable/rocky on all the oslo scope In-Reply-To: <25b21881-bd0b-f763-9bb5-a66340108455@nemebean.com> References: <3055264.zr5fvq113q@p1> <25b21881-bd0b-f763-9bb5-a66340108455@nemebean.com> Message-ID: <3ff9fe9f-e75f-62a9-1da5-8cdb140427a8@est.tech> What Ben wrote is correct. One comment for the topic: oslo projects have Pike open (and broken) as well, so together with stablre/rocky and stable/queens stable/pike branches should be also marked as End of Life if no maintainers stepping up for these branches. Thanks, El?d On 2021. 10. 04. 
22:59, Ben Nemec wrote: > > > On 10/4/21 2:00 PM, Slawek Kaplonski wrote: >> Hi, >> >> On poniedzia?ek, 4 pa?dziernika 2021 20:46:29 CEST feilong wrote: >>> Hi Herve, >>> >>> Please correct me, does that mean we have to also EOL stable/queens and >>> stable/rocky for most of the other projects technically? Or it >>> should be >>> OK? Thanks. >> >> I don't think we have to. I think it's not that common that we are >> using new >> versions of oslo libs in those stable branches so IMHO if all works >> fine for >> some project and it has maintainers, it still can be in EM phase. >> Or is my understanding wrong here? > > The Oslo libs released for those versions will continue to work, so > you're right that it wouldn't be necessary to EOL all of the consumers > of Oslo. > > The danger would be if a critical bug were found in one of those old > releases and a fix needed to be released. However, at this point the > likelihood of finding such a serious bug seems pretty low, and in some > cases it may be possible to use a newer Oslo release with an older > service. > >> >>> >>> On 5/10/21 5:09 am, Herve Beraud wrote: >>>> Hi, >>>> >>>> On our last meeting of the oslo team we discussed the problem with >>>> broken stable >>>> branches (rocky and older) in oslo's projects [1]. >>>> >>>> Indeed, almost all these branches are broken. El?d Ill?s kindly >>>> generated a list of periodic-stable errors on Oslo's stable >>>> branches [2]. >>>> >>>> Given the lack of active maintainers on Oslo and given the current >>>> status of the CI in those branches, I propose to make them End Of >>>> Life. >>>> >>>> I will wait until the end of month for anyone who would like to maybe >>>> step up >>>> as maintainer of those branches and who would at least try to fix CI >>>> of them. >>>> >>>> If no one will volunteer for that, I'll EOLing those branches for all >>>> the projects under the oslo umbrella. >>>> >>>> Let us know your thoughts. >>>> >>>> Thank you for your attention. >>>> >>>> [1] >>>> https://meetings.opendev.org/meetings/oslo/2021/oslo. >> 2021-10-04-15.00.log.tx >>>> t >>>> [2] >>>> http://lists.openstack.org/pipermail/openstack-discuss/2021-July/ >> 023939.html >> >> > From elod.illes at est.tech Thu Oct 14 14:00:10 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 14 Oct 2021 16:00:10 +0200 Subject: [stable][requirements][zuul] unpinned setuptools dependency on stable In-Reply-To: References: <6J4UZQ.VOBD0LVDTPUX1@est.tech> <827e99c6-99b2-54c8-a627-5153e3b84e6b@est.tech> Message-ID: <0861d9e7-0dc3-683f-ad65-120b156d03a0@est.tech> Hi, First, sorry for the slow response. I think pinning setuptools in requirements for stable branches is also a good idea (up till wallaby). I can accept that. Another thing is that the openstack projects that I've checked don't have issues in their CI regarding the unpinned setuptools. Mostly I saw/see the problem in unit test, static code check and similar tox targets. Anyway, if this issue is there for devstack for others then I think we can cap setuptools, too, in requirements repository, if it is OK for everyone. My only concern is to cap it from the newest relevant stable branch where we need it. If I'm not mistaken most of the projects have fixed their related issue in Xena, so I guess Wallaby should be the first branch to cap setuptools. Thanks, El?d On 2021. 10. 04. 20:16, Neil Jerram wrote: > I can now confirm that > https://review.opendev.org/c/openstack/requirements/+/810859 > fixes > my CI use case.? 
(By temporarily using a fork of the requirements repo > that includes that change.) > > (Fix detail if needed here: > https://github.com/projectcalico/networking-calico/pull/64/commits/cbed6282405957f7d60b6e0790c91fb852afe84c > ) > > Best wishes. > ? ? ?Neil > > > On Mon, Oct 4, 2021 at 6:28 PM Neil Jerram > wrote: > > Is anyone helping to progress this?? I just checked that > stable/ussuri devstack is still broken. > > Best wishes, > ? ? Neil > > > On Tue, Sep 28, 2021 at 9:20 AM Neil Jerram > wrote: > > But I don't think that solution works for devstack, does it?? > Is there a way to pin setuptools in a stable/ussuri devstack > run, except by changing the stable branch of the requirements > project? > > > On Mon, Sep 27, 2021 at 7:50 PM El?d Ill?s > wrote: > > Hi again, > > as I see there is no objection yet about using gibi's > solution [1] (as I > already summarized the situation in my previous mail [2]) > for a fix for > similar cases, so with a general stable core hat on, I > *suggest* > everyone to use that solution to pin the setuptools in tox > for every > failing cases (so that to avoid similar future errors as > well). > > [1] https://review.opendev.org/810461 > > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2021-September/025059.html > > > El?d > > > On 2021. 09. 27. 14:47, Balazs Gibizer wrote: > > > > > > On Fri, Sep 24 2021 at 10:21:33 PM +0200, Thomas Goirand > > > wrote: > >> Hi Gibi! > >> > >> Thanks for bringing this up. > >> > >> As a distro package maintainer, here's my view. > >> > >> On 9/22/21 2:11 PM, Balazs Gibizer wrote: > >>> ?Option 1: Bump the major version of the decorator > dependency on > >>> stable. > >> > >> Decorator 4.0.11 is even in Debian Stretch (currently > oldoldstable), for > >> which I don't even maintain OpenStack anymore (that's > OpenStack > >> Newton...). So I don't see how switching to decorator > 4.0.0 is a > >> problem, and I don't understand how OpenStack could be > using 3.4.0 which > >> is in Jessie (ie: 6 years old Debian release). > >> > >> PyPi says Decorator 3.4.0 is from 2012: > >> https://pypi.org/project/decorator/#history > > >> > >> Do you have your release numbers correct? If so, then > switching to > >> Decorator 4.4.2 (available in Debian Bullseye (shipped > with Victoria) > >> and Ubuntu >=Focal) looks like reasonable to me... > Sticking with 3.4.0 > >> feels a bit crazy (and I wasn't aware of it). > > > > Thanks for the info. So from Debian perspective it is OK > to bump the > > decorator version on stable. As others noted in this > thread it seems > > to be more than just decorator that broke. :/ > > > >> > >>> ?Option 2: Pin the setuptools version during tox > installation > >> > >> Please don't do this for the master branch, we need > OpenStack to stay > >> current with setuptools (yeah, even if this means > breaking changes...). > > > > I've no intention to pin it on master. Master needs to > work with the > > latest and greatest. Also on master it is easier to fix > / replace the > > dependencies that become broken with new setuptools. > > > >> > >> For already released OpenStack: I don't mind much if > this is done (I > >> could backport fixes if something breaks). > > > > ack > > > >> > >>> ?Option 3: turn off lower-constraints testing > >> > >> I already expressed myself about this: this is > dangerous as distros rely > >> on it for setting lower bounds as low as possible > (which is always > >> preferred from a distro point of view). 
> >> > >>> ?Option 4: utilize pyproject.toml[6] to specify > build-time requirements > >> > >> I don't know about pyproject.toml. > >> > >> Just my 2 cents, hoping it's useful, > > > > Thanks! > > > > Cheers, > > gibi > > > >> Cheers, > >> > >> Thomas Goirand (zigo) > >> > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.hoffmann at cloudandheat.com Thu Oct 14 15:26:40 2021 From: stefan.hoffmann at cloudandheat.com (Stefan Hoffmann) Date: Thu, 14 Oct 2021 17:26:40 +0200 Subject: [cinder][backup] backup big volumes leads to oom kill of cinder-backup Message-ID: <69539885434df49c08936c91190044caec00876e.camel@cloudandheat.com> Hi cinder team, we have the issue, that doing backups of big volumes (5TB) fails and cinder-backup get oom killed. Looks like cinder-backup is allocating memory but didn't release it correctly. Badly we are still using cinder queens. Is this a known issue and fixed in newer releases or should I create a bug report? We found a similar bug [1] with backup restore, that got fixed. I guess something like this is also needed for backup create. Thanks for you help Stefan [1] https://bugs.launchpad.net/cinder/+bug/1865011 -- Stefan Hoffmann DevOps-Engineer Cloud&Heat Technologies GmbH K?nigsbr?cker Stra?e 96 (Halle 15) | 01099 Dresden +49 351 479 367 36 stefan.hoffmann at cloudandheat.com | www.cloudandheat.com Die gr?ne Cloud f?r KI und ML. Think Green: Mach Deine Anwendung gr?ner. https://thinkgreen.cloudandheat.com/ Commercial Register: District Court Dresden Register Number: HRB 30549 VAT ID No.: DE281093504 Managing Director: Nicolas R?hrs Authorized signatory: Dr. Marius Feldmann Authorized signatory: Kristina R?benkamp -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 878 bytes Desc: This is a digitally signed message part URL: From skaplons at redhat.com Thu Oct 14 16:10:03 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 14 Oct 2021 18:10:03 +0200 Subject: [neutron] CI meeting Tuesday 19.10 Message-ID: <7286472.G0QQBjFxQf@p1> Hi, As we have PTG next week, let's cancel CI meeting. See You all at the PTG sessions and on the CI meeting on Tuesday 26th of October. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From DHilsbos at performair.com Thu Oct 14 16:16:09 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Thu, 14 Oct 2021 16:16:09 +0000 Subject: =?utf-8?B?UkU6IFByb2Jsw6htZSB3aXRoIGltYWdlIGZyb20gc25hcHNob3Q=?= In-Reply-To: <79DEE6DE-47E1-4618-8B26-D4CC1C3EC0F2@univ-grenoble-alpes.fr> References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> <79DEE6DE-47E1-4618-8B26-D4CC1C3EC0F2@univ-grenoble-alpes.fr> Message-ID: <0670B960225633449A24709C291A525251CB51C9@COM03.performair.local> Franck; I don't see an option to upload a volume from a snapshot in the Victoria dashboard (Horizon), so I'm going to assume that can't / shouldn't be done. Uploading a volume to an image should be possible, assuming the volume is Available (un-attached). Thank you, Dominic L. Hilsbos, MBA Vice President ? Information Technology Perform Air International Inc. 
DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] Sent: Wednesday, October 13, 2021 11:29 PM To: Dominic Hilsbos Cc: openstack-discuss at lists.openstack.org Subject: Re: Probl?me with image from snapshot Yes, i?m the cluster admin. My cluster is based on Centos Stream / Kolla-ansible / Wallaby. You?re right, I need to check all the logs.? (/var/log/kolla/cinder for example for me) Or check in containers?. But before, I'am not sure what I am trying to do is possible, and since I am not sure of my explanations (in English), it is difficult to make myself fully understood about the problem. Thank you very much for your help Franck VEDEL Le 13 oct. 2021 ? 22:06, DHilsbos at performair.com a ?crit : Franck; ? What version of OpenStack are you running?? Are you the cluster administrator, or a user of the cluster? ? I?m running Victoria, all tips below assume that major version. ? Can you create an image backed volume outside of the instance creation process? ? Do you have access to the systems running the cluster, can you review logs on the controller computers?? You?re looking for the logs from the glance and cinder services.? Glance?s logs should be somewhere like /var/log/glance/.? I only have api.log for glance.? Cinder?s should be somewhere like /var/log/cinder/. ?I have api.log, backup.log, scheduler.log, and volume.log. ? You should also check your glance and cinder configurations.? They will be at /etc/glance/glance-api.conf and /etc/cinder/cinder.conf. In the glance configuration, you?re looking for the enabled_backends line in the [DEFAULT] section.? If I remember correctly, it?s values has the form :.? The type is the interesting part. Cinder is a little more difficult.? You?re still going to be looking for an enabled_backends line, in the [DEFAULT] section, but it?s value is just a name (enabled_backends = ). ?You need to locate a configuration section which matches the name ([]).? You?ll then be looking for a volume_driver line.? Based on you response, I suspect this will be: volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver. ? I believe the logs will be critical to diagnosing this issue.? I suspect you?ll find the error in the cinder volume.log, though it might also be in scheduler.log, or even in the glance.log. ? Thank you, ? Dominic L. Hilsbos, MBA Vice President ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com ? ? From:?Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr]? Sent:?Wednesday, October 13, 2021 12:02 PM To:?Dominic Hilsbos Cc:?openstack-discuss at lists.openstack.org Subject:?Re: Probl?me with image from snapshot ? Hi Dominic, and thanks a lot for your help. I only see one issue with what you said, or perhaps didn't say. ?You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Yes yes, i did ?that, sys prep ? generalize ? Regarding OpenStack, could you tell us what glance and cinder drivers you use? i?m not sure? for cinder: LVM on a iscsi bay ? Have you done other volume to image before? No, and it?s a good idea to test with a cirros instance. I will try tomorrow. ? Have you verified that the image finishes creating before trying to create a VM from it? Yes ? I'm not sure that snapshotting before creating an image is necessary. 
?It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I just tried with an instance off? same problem, sam error message (Block Device Mapping is Invalid) ? I've done volume to image before, but it's been a little while. ?I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thanks a lot !! Really !! ? Franck VEDEL D?p. R?seaux?Informatiques?& T?l?coms IUT1 - Univ GRENOBLE Alpes 0476824462 Stages, Alternance, Emploi. http://www.rtgrenoble.fr Le 13 oct. 2021 ? 17:16, a ?crit : ? Franck; I only see one issue with what you said, or perhaps didn't say. ?You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? Regarding OpenStack, could you tell us what glance and cinder drivers you use? Have you done other volume to image before? Have you verified that the image finishes creating before trying to create a VM from it? I'm not sure that snapshotting before creating an image is necessary. ?It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. I've done volume to image before, but it's been a little while. ?I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. Thank you, Dominic L. Hilsbos, MBA Vice President ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr]? Sent: Wednesday, October 13, 2021 12:58 AM To: openstack-discuss Subject: Probl?me with image from snapshot Hello and first sorry for my english? thanks google. Something is wrong with what I want to do: I use Wallaby, it works very well (apart from VpnaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). Here is what I want to do and which does not work as I want: - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. I wanted to take a snapshot of this instance and then create a new image from this snapshot. And that users use this new image. I create the snapshot, I place the "--public" parameter on the new image. I'm trying to create a new instance from this snapshot with the admin account: it works. I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? Thanks if you have ideas for helping me Franck VEDEL From katonalala at gmail.com Thu Oct 14 16:20:23 2021 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 14 Oct 2021 18:20:23 +0200 Subject: [neutron] Team meeting Message-ID: Hi Neutrinos, As we have PTG next week, let's cancel the team meeting. Cheers Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From katonalala at gmail.com Thu Oct 14 16:29:43 2021 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 14 Oct 2021 18:29:43 +0200 Subject: [neutron] Drivers meeting agenda - 15.10.2021 Message-ID: Hi Neutron Drivers! As we have no quorum last week to decide on https://bugs.launchpad.net/neutron/+bug/1946251 tomorrow we can check it again and vote. The logs of the meeting from last week: https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-10-08-14.14.log.html (Sorry it is 4 days long as I missed to end the meeting.....) As we have PTG next week, let's cancel drivers meeting for that Friday, and I will be on PTO the week after (29. October) so I can't chair that one. See you online tomorrow. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu Oct 14 16:34:34 2021 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 14 Oct 2021 18:34:34 +0200 Subject: [neutron][PTG] Schedule for yoga PTG Message-ID: Hi, I made a first schedule for the PTG next week, please check it: https://etherpad.opendev.org/p/neutron-yoga-ptg If you have anything to change or add please raise your voice :-) There's still some parts which can change, as we will have multiple cross-project sessions. See you next week. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gustavofaganello.santos at windriver.com Thu Oct 14 18:37:43 2021 From: gustavofaganello.santos at windriver.com (Gustavo Faganello Santos) Date: Thu, 14 Oct 2021 15:37:43 -0300 Subject: [nova][dev] Reattaching mediated devices to instance coming back from suspended state Message-ID: Hello, everyone! I'm working on a solution for Nova to reattach previously used mediated devices (vGPU instances, in my case) to VMs coming back from suspension, which seems to have been left on hold in the past [1] because of an old libvirt limitation, and I'm having a bit of a hard time doing so, since I'm not too familiar with the repo. I have tried creating a function that does the opposite of the mdev detach function, but the get_all_devices method seems to return an empty list when looking for mdevs at the moment of resuming the VM. Looking at the instance's XML file, I noticed that the mdev property remains while the VM is suspended, but it disappears AFTER the whole resume function is executed. I'm failing to understand why the mdev list returns empty, even though the mdev property exists in the instance's XML, and also why the mdev is removed from the XML after the resume function is executed. With that in mind, does anyone know if there's been any attempt to solve this issue since it was left on hold? If not, is there anything I should know while I attempt to do so? Thanks in advance. Gustavo [1] https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L8007 From melwittt at gmail.com Thu Oct 14 19:02:13 2021 From: melwittt at gmail.com (melanie witt) Date: Thu, 14 Oct 2021 12:02:13 -0700 Subject: [nova][dev] Reattaching mediated devices to instance coming back from suspended state In-Reply-To: References: Message-ID: <2940f202-d632-c8f1-a0ed-d4473a9fc9c6@gmail.com> On Thu Oct 14 2021 11:37:43 GMT-0700 (Pacific Daylight Time), Gustavo Faganello Santos wrote: > Hello, everyone! 
> > I'm working on a solution for Nova to reattach previously used mediated > devices (vGPU instances, in my case) to VMs coming back from suspension, > which seems to have been left on hold in the past [1] because of an old > libvirt limitation, and I'm having a bit of a hard time doing so, since > I'm not too familiar with the repo. > > I have tried creating a function that does the opposite of the mdev > detach function, but the get_all_devices method seems to return an empty > list when looking for mdevs at the moment of resuming the VM. Looking at > the instance's XML file, I noticed that the mdev property remains while > the VM is suspended, but it disappears AFTER the whole resume function > is executed. I'm failing to understand why the mdev list returns empty, > even though the mdev property exists in the instance's XML, and also why > the mdev is removed from the XML after the resume function is executed. > > With that in mind, does anyone know if there's been any attempt to solve > this issue since it was left on hold? If not, is there anything I should > know while I attempt to do so? I'm not sure whether this will be helpful but there is similar (or adjacent?) work currently in progress to handle the case of recreating mediated devices after a compute host reboot [2][3]. The launchpad bug contains some info on workarounds for this case and the proposed patch pulls allocation information from the placement service to recreate the mdevs. -melanie [2] https://bugs.launchpad.net/nova/+bug/1900800 [3] https://review.opendev.org/c/openstack/nova/+/810220 > Thanks in advance. > Gustavo > > [1] > https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L8007 > From franck.vedel at univ-grenoble-alpes.fr Thu Oct 14 19:43:56 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Thu, 14 Oct 2021 21:43:56 +0200 Subject: =?utf-8?Q?Re=3A_Probl=C3=A8me_with_image_from_snapshot?= In-Reply-To: <0670B960225633449A24709C291A525251CB51C9@COM03.performair.local> References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB3E83@COM03.performair.local> <0670B960225633449A24709C291A525251CB431E@COM03.performair.local> <79DEE6DE-47E1-4618-8B26-D4CC1C3EC0F2@univ-grenoble-alpes.fr> <0670B960225633449A24709C291A525251CB51C9@COM03.performair.local> Message-ID: <241AA8C1-47B4-450C-9DD0-B49420A4B75F@univ-grenoble-alpes.fr> Dominic, maybe what I want to do is not possible. I check my logs?. thank you very much for your time and your help. Franck > Le 14 oct. 2021 ? 18:16, DHilsbos at performair.com a ?crit : > > Franck; > > I don't see an option to upload a volume from a snapshot in the Victoria dashboard (Horizon), so I'm going to assume that can't / shouldn't be done. > > Uploading a volume to an image should be possible, assuming the volume is Available (un-attached). > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President ? Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] > Sent: Wednesday, October 13, 2021 11:29 PM > To: Dominic Hilsbos > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Probl?me with image from snapshot > > Yes, i?m the cluster admin. My cluster is based on Centos Stream / Kolla-ansible / Wallaby. > You?re right, I need to check all the logs. > (/var/log/kolla/cinder for example for me) > Or check in containers?. 
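(A side note on the volume-to-image step Dominic mentions above: if I remember the client syntax correctly, uploading an available (detached) volume to Glance looks roughly like

$ openstack image create --volume my-win10-volume my-win10-image

where the volume and image names are made-up placeholders for the example, not names from this cluster.)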
> > But before, I'am not sure what I am trying to do is possible, and since I am not sure of my explanations (in English), it is difficult to make myself fully understood about the problem. > > > Thank you very much for your help > > Franck VEDEL > > > > Le 13 oct. 2021 ? 22:06, DHilsbos at performair.com a ?crit : > > Franck; > > What version of OpenStack are you running? Are you the cluster administrator, or a user of the cluster? > > I?m running Victoria, all tips below assume that major version. > > Can you create an image backed volume outside of the instance creation process? > > Do you have access to the systems running the cluster, can you review logs on the controller computers? You?re looking for the logs from the glance and cinder services. Glance?s logs should be somewhere like /var/log/glance/. I only have api.log for glance. Cinder?s should be somewhere like /var/log/cinder/. I have api.log, backup.log, scheduler.log, and volume.log. > > You should also check your glance and cinder configurations. They will be at /etc/glance/glance-api.conf and /etc/cinder/cinder.conf. > In the glance configuration, you?re looking for the enabled_backends line in the [DEFAULT] section. If I remember correctly, it?s values has the form :. The type is the interesting part. > Cinder is a little more difficult. You?re still going to be looking for an enabled_backends line, in the [DEFAULT] section, but it?s value is just a name (enabled_backends = ). You need to locate a configuration section which matches the name ([]). You?ll then be looking for a volume_driver line. Based on you response, I suspect this will be: volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver. > > I believe the logs will be critical to diagnosing this issue. I suspect you?ll find the error in the cinder volume.log, though it might also be in scheduler.log, or even in the glance.log. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President ? Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] > Sent: Wednesday, October 13, 2021 12:02 PM > To: Dominic Hilsbos > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Probl?me with image from snapshot > > Hi Dominic, and thanks a lot for your help. > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? > Yes yes, i did that, sys prep ? generalize > > Regarding OpenStack, could you tell us what glance and cinder drivers you use? > i?m not sure? for cinder: LVM on a iscsi bay > > Have you done other volume to image before? > No, and it?s a good idea to test with a cirros instance. I will try tomorrow. > > Have you verified that the image finishes creating before trying to create a VM from it? > Yes > > I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. > I just tried with an instance off? same problem, sam error message (Block Device Mapping is Invalid) > > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. > Thanks a lot !! Really !! > > Franck VEDEL > D?p. 
R?seaux Informatiques & T?l?coms > IUT1 - Univ GRENOBLE Alpes > 0476824462 > Stages, Alternance, Emploi. > http://www.rtgrenoble.fr > > > > Le 13 oct. 2021 ? 17:16, a ?crit : > > Franck; > > I only see one issue with what you said, or perhaps didn't say. You are aware that it is a very good idea to sysprep --generalize a Windows instance, before making an image from it, yes? > > Regarding OpenStack, could you tell us what glance and cinder drivers you use? > > Have you done other volume to image before? > > Have you verified that the image finishes creating before trying to create a VM from it? > > I'm not sure that snapshotting before creating an image is necessary. It's a good idea (maybe even necessary) to have the instance off when converting a volume to an image, thus, depending on your storage technology, the image might just be a snapshot. > > I've done volume to image before, but it's been a little while. I plan to do this with a Linux instance today, so I'll see if I can do it while the instance is running. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President ? Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > From: Franck VEDEL [mailto:franck.vedel at univ-grenoble-alpes.fr] > Sent: Wednesday, October 13, 2021 12:58 AM > To: openstack-discuss > Subject: Probl?me with image from snapshot > > Hello and first sorry for my english? thanks google. > > Something is wrong with what I want to do: > I use Wallaby, it works very well (apart from VpnaaS, I wasted too much time this summer to make it work, without success, and the bug does not seem to be fixed). > > Here is what I want to do and which does not work as I want: > - With an admin account, I launch a Win10 instance from the image I created. The instance is working but it takes about 10 minutes to get Win10 up and running. > I wanted to take a snapshot of this instance and then create a new image from this snapshot. And that users use this new image. > I create the snapshot, I place the "--public" parameter on the new image. > I'm trying to create a new instance from this snapshot with the admin account: it works. > I create a new user, who has his project, and sees all the images. I try to create an instance with this new image and I get the message: > > Block Device Mapping is Invalid: failed to get snapshot f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) > > Is it a legal problem? Is it possible to do as I do? otherwise how should we do it? > > Thanks if you have ideas for helping me > > > Franck VEDEL > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Thu Oct 14 20:30:12 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 14 Oct 2021 13:30:12 -0700 Subject: [manila][ptg] No IRC meetings on 21st and 28th Oct 2021 Message-ID: Hi Zorillas, Since we'll be at the PTG [1], we'll skip the IRC meeting [2] on the 21st of Oct; and since a number of us may be taking some time off the week after, we'll skip that weekly occurrence (28th Oct) as well. Please feel free to grab attention to any issue via this mailing list, or hop over to #openstack-manila on OFTC. Our next weekly IRC meeting will be on 4th Nov 2021. Thanks, and see you at the PTG! Goutham [1] https://etherpad.opendev.org/p/yoga-ptg-manila-planning [2] https://wiki.openstack.org/wiki/Manila/Meetings -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From melwittt at gmail.com Thu Oct 14 20:58:44 2021 From: melwittt at gmail.com (melanie witt) Date: Thu, 14 Oct 2021 13:58:44 -0700 Subject: =?UTF-8?Q?Re=3a_Probl=c3=a8me_with_image_from_snapshot?= In-Reply-To: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> Message-ID: On Wed Oct 13 2021 00:57:52 GMT-0700 (Pacific Daylight Time), Franck VEDEL wrote: > Hello and first sorry for my english? thanks google. > > Something is wrong with what I want to do: > I use Wallaby, it works very well (apart from VpnaaS, I wasted too much > time this summer to make it work, without success, and the bug does not > seem to be fixed). > > Here is what I want to do and which does not work as I want: > - With an admin account, I launch a Win10 instance from the image I > created. The instance is working but it takes about 10 minutes to get > Win10 up and running. > I wanted to take a snapshot of this instance and then create a new image > from this snapshot. And that users use this new image. > I create the snapshot, I place the "--public" parameter on the new image. > I'm trying to create a new instance from this snapshot with the admin > account: it works. > I create a new user, who has his project, and sees all the images. I try > to create an instance with this new image and I get the message: > > Block Device Mapping is Invalid: failed to get snapshot > f12c04f2-51e7-4817-ab9b-eda63c5b9aff. (HTTP 400) (Request-ID: > req-c26dab86-c25f-409a-8390-8aa0ea8fe1cb) > > Is it a legal problem? Is it possible to do as I do? otherwise how > should we do it? According to this cinder doc [1], it looks like what you're trying to do is valid, to create an image backed by a volume and boot instances from that image. The problem I see where the "failed to get snapshot" error is raised in nova for the non-admin user, it looks to be a problem with policy access for the GET /snapshots/{snapshot_id} cinder API. Although the image is public, the volume behind it was created by some project and by default the API will allow the admin project or the project that created/owns the volume [2]: volume:get_snapshot Default rule:admin_or_owner Operations GET /snapshots/{snapshot_id} This is why it works when you boot an instance using the admin account. Currently, you would need to change the above rule in the cinder policy.yaml in order to allow a different project than the owner to GET the snapshot. It's possible this is a bug in nova and that we should be using an elevated admin request context to call GET /snapshots/{snapshot_id} if the snapshot is for a volume-backed image. Hopefully I haven't completely misunderstood what is going on here, if so, please ignore me. :) HTH, -melanie [1] https://docs.openstack.org/cinder/wallaby/admin/blockstorage-volume-backed-image.html [2] https://docs.openstack.org/cinder/wallaby/configuration/block-storage/policy.html#cinder > Thanks if you have ideas for helping me > > > Franck VEDEL > From Daniel.Pereira at windriver.com Thu Oct 14 21:27:50 2021 From: Daniel.Pereira at windriver.com (Pereira, Daniel Oliveira) Date: Thu, 14 Oct 2021 21:27:50 +0000 Subject: [dev][cinder] Consultation about new cinder-backup features In-Reply-To: <20211004102331.e3otr2k2mjzglg42@localhost> References: <20210906132813.xsaxbsyyvf4ey4vm@localhost> <20211004102331.e3otr2k2mjzglg42@localhost> Message-ID: Hi all, my team is evaluating the cinder-backup multi backends configuration spec. 
It seems this spec fulfills our needs as it is, so we are considering working on its implementation, but we cannot at the moment commit to deliver this feature. About the improvement on NFS backup driver to allow backups on private NFS servers, we decided that we won't try to upstream this feature, based on feedback that we received. ? We also won't bring these topics to be discussed on Cinder PTG meeting. I would?like to thank Gorka Eguileor, Brian Rosmaita and Arkady Kanevsky for your comments in this thread. Regards, Daniel Pereira. From: Gorka Eguileor Sent: Monday, October 4, 2021 7:23 AM To: Pereira, Daniel Oliveira Cc: openstack-discuss at lists.openstack.org Subject: Re: [dev][cinder] Consultation about new cinder-backup features ? [Please note: This e-mail is from an EXTERNAL e-mail address] On 30/09, Daniel de Oliveira Pereira wrote: > On 06/09/2021 10:28, Gorka Eguileor wrote: > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > > > On 27/08, Daniel de Oliveira Pereira wrote: > >> Hello everyone, > >> > >> We have prototyped some new features on Cinder for our clients, and we > >> think that they are nice features and good candidates to be part of > >> upstream Cinder, so we would like to get feedback from OpenStack > >> community about these features and if you would be willing to accept > >> them in upstream OpenStack. > > > > Hi Daniel, > > > > Thank you very much for your willingness to give back!!! > > > > > >> > >> Our team implemented the following features for cinder-backup service: > >> > >>???? 1. A multi-backend backup driver, that allow OpenStack users to > >> choose, via API/CLI/Horizon, which backup driver (Ceph or NFS, in our > >> prototype) will be used during a backup operation to create a new volume > >> backup. > > > > This is a feature that has been discussed before, and e0ne already did > > some of the prerequisites for it. > > > > > >>???? 2. An improved NFS backup driver, that allow OpenStack users to back > >> up their volumes to private NFS servers, providing the NFS hostpath at > >> runtime via API/CLI/Horizon, while creating the volume backup. > >> > > > > What about the username and password? > > Hi Gorka, > > thanks for your feedback. > > Our prototype doesn't support authentication using username/password, > since this is a feature that NFS doesn't provide built-in support. > > > Can backups be restored from a remote location as well? > > Yes, if the location is the one where the backup was originally saved > (same NFS hostpath), as the backup location is stored on Cinder backups > table during the backup creation. It doesn't support restoring the > backup from an arbitrary remote NFS server. > > > > > This sounds like a very cool feature, but I'm not too comfortable with > > having it in Cinder. > > > > The idea is that Cinder provides an abstraction and doesn't let users > > know about implementation details. > > > > With that feature as it is a user could request a backup to an off-site > > location that could result in congestion in one of the outbound > > connections. > > I think this is a very good point, that we weren't taking into > consideration in our prototype. > > > > > I can only think of this being acceptable for admin users, and in that > > case I think it would be best to use the multi-backup destination > > feature instead. > > > > After all, how many times do we have to backup to a different location? > > Maybe I'm missing a use case. 
> > Our clients have privacy and security concerns with the same NFS server > being shared by OpenStack tenants to store volume backups, so they > required cinder-backup to be able to back up volumes to private NFS servers. > > > > > If the community thinks this as a desired feature I would encourage > > adding it with a policy that disables it by default. > > > > > >> Considering that cinder was configured to use the multi-backend backup > >> driver, this is how it works: > >> > >>???? During a volume backup operation, the user provides a "location" > >> parameter to indicate which backend will be used, and the backup > >> hostpath, if applicable (for NFS driver), to create the volume backup. > >> For instance: > >> > >>???? - Creating a backup using Ceph backend: > >>???? $ openstack volume backup create --name --location > >> ceph > >> > >>???? - Creating a backup using the improved NFS backend: > >>???? $ openstack volume backup create --name --location > >> nfs://my.nfs.server:/backups > >> > >>???? If the user chooses Ceph backend, the Ceph driver will be used to > >> create the backup. If the user chooses the NFS backend, the improved NFS > >> driver, previously mentioned, will be used to create the backup. > >> > >>???? The backup location, if provided, is stored on Cinder database, and > >> can be seen fetching the backup details: > >>???? $ openstack volume backup show > >> > >> Briefly, this is how the features were implemented: > >> > >>???? - Cinder API was updated to add an optional location parameter to > >> "create backup" method. Horizon, and OpenStack and Cinder CLIs were > >> updated accordingly, to handle the new parameter. > >>???? - Cinder backup controller was updated to handle the backup location > >> parameter, and a validator for the parameter was implemented using the > >> oslo config library. > >>???? - Cinder backup object model was updated to add a nullable location > >> property, so that the backup location could be stored on cinder database. > >>???? - a new backup driver base class, that extends BackupDriver and > >> accepts a backup context object, was implemented to handle the backup > >> configuration provided at runtime by the user. This new backup base > >> class requires that the concrete drivers implement a method to validate > >> the backup context (similar to BackupDriver.check_for_setup_error) > >>???? - the 2 new backup drivers, previously mentioned, were implemented > >> using these new backup base class. > >>???? - in BackupManager class, the "service" attribute, that on upstream > >> OpenStack holds the backup driver class name, was re-implemented as a > >> factory function that accepts a backup context object and return an > >> instance of a backup driver, according to the backup driver configured > >> on cinder.conf file and the backup context provided at runtime by the user. > >>???? - All the backup operations continue working as usual. > >> > > > > When this feature was discussed upstream we liked the idea of > > implementing this like we do multi-backends for the volume service, > > adding backup-types. > > I found this approved spec [1] (that, I believe, is product of the work > done by eOne that you mentioned before), but I couldn't find any work > items in progress related to it. > Do you know the current status of this spec? Is it ready to be > implemented or is there some more work to be done until there? 
If we > decide to work on its implementation, would be required to review, and > possibly update, the spec for the current development cycle? > > [1] > https://specs.openstack.org/openstack/cinder-specs/specs/victoria/backup-backends-configuration.html > Hi, I think all that would need to be done regarding the spec is to submit a patch to move it to the current release directory and fix the formatting issue of the tables from the "Data model impact" section. You'll be able to leverage Ivan's work [1] when implementing the multi-backup feature. Cheers, Gorka. [1]: https://review.opendev.org/c/openstack/cinder/+/630305 > > > > > In latest code backup creation operations have been modified to go > > through the scheduler, so that's a piece that is already implemented. > > > > > >> Could you please let us know your thoughts about these features and if > >> you would be open to adding them to upstream Cinder? If yes, we would be > >> willing to submit the specs and work on the upstream implementation, if > >> they are approved. > >> > >> Regards, > >> Daniel Pereira > >> > > > > I believe you will have the full community's support on the first idea > > (though probably not on the proposed implementation). > > > > I'm not so sure on the second one, iti will most likely depend on the > > use cases.? Many times the reasons why features are dismissed upstream > > is because there are no clear use cases that justify the addition of the > > code. > > > > Looking forward to continuing this conversation at the PTG, IRC, in a > > spec, or through here. > > > > Cheers, > > Gorka. > > > From gmann at ghanshyammann.com Thu Oct 14 22:52:03 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 14 Oct 2021 17:52:03 -0500 Subject: [tc] No TC weekly meeting next week due to meeting in PTG Message-ID: <17c81015efc.f692731b1056175.888775454375504790@ghanshyammann.com> Hello Everyone, As we will be meeting in PTG next week, we are cancelling the TC's next week (21st Oct) IRC meeting. -gmann From franck.vedel at univ-grenoble-alpes.fr Fri Oct 15 06:45:14 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Fri, 15 Oct 2021 08:45:14 +0200 Subject: =?utf-8?Q?Re=3A_Probl=C3=A8me_with_image_from_snapshot?= In-Reply-To: References: <3ACDF5B4-D691-487F-8410-4AAC23D0D999@univ-grenoble-alpes.fr> Message-ID: Melanie, On the contrary, I believe that you have fully understood my problem, and your explanations are very clear. Thank you so much. I looked at the documentation, it is well explained, I understand what to do. I'm using kolla-ansible to deploy Wallaby, it's not going to be easy, because changing the default permissions for cinder doesn't look easy. Thanks again, you've saved me a lot of time, and it's going to help me with what I want to do with my students. Franck > Le 14 oct. 2021 ? 22:58, melanie witt a ?crit : > > According to this cinder doc [1], it looks like what you're trying to do is valid, to create an image backed by a volume and boot instances from that image. > > The problem I see where the "failed to get snapshot" error is raised in nova for the non-admin user, it looks to be a problem with policy access for the GET /snapshots/{snapshot_id} cinder API. 
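(On the kolla-ansible point above, a minimal sketch of the cinder policy override melanie describes; the file path, and the choice to open the rule to any authenticated user, are assumptions for illustration rather than a recommendation:

# /etc/cinder/policy.yaml
# default is rule:admin_or_owner; an empty string allows any authenticated
# user to GET /snapshots/{snapshot_id}, which has obvious privacy implications
"volume:get_snapshot": ""

With kolla-ansible, if I remember correctly, such an override can be dropped under the node_custom_config directory (typically /etc/kolla/config/cinder/policy.yaml) and is picked up on the next deploy/reconfigure.)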
Although the image is public, the volume behind it was created by some project and by default the API will allow the admin project or the project that created/owns the volume [2]: > > volume:get_snapshot > Default > rule:admin_or_owner > > Operations > GET /snapshots/{snapshot_id} > > This is why it works when you boot an instance using the admin account. Currently, you would need to change the above rule in the cinder policy.yaml in order to allow a different project than the owner to GET the snapshot. > > It's possible this is a bug in nova and that we should be using an elevated admin request context to call GET /snapshots/{snapshot_id} if the snapshot is for a volume-backed image. > > Hopefully I haven't completely misunderstood what is going on here, if so, please ignore me. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Oct 15 07:32:24 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 15 Oct 2021 09:32:24 +0200 Subject: [neutron] Drivers meeting agenda - 15.10.2021 In-Reply-To: References: Message-ID: <9488604.VV5PYv0bhD@p1> Hi, On czwartek, 14 pa?dziernika 2021 18:29:43 CEST Lajos Katona wrote: > Hi Neutron Drivers! > As we have no quorum last week to decide on > https://bugs.launchpad.net/neutron/+bug/1946251 tomorrow we can check it > again and vote. > The logs of the meeting from last week: > https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers. 202 > 1-10-08-14.14.log.html > > (Sorry it is 4 days long as I missed to end the meeting.....) > > As we have PTG next week, let's cancel drivers meeting for that Friday, and > I will be on PTO the week after (29. October) so I can't chair that one. > > See you online tomorrow. > Lajos Katona (lajoskatona) I will not be able to attend today's meeting. But in general I'm +1 for this RFE as an idea. We can discuss exact way how to do it in the API in the spec's review probably. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ricolin at ricolky.com Fri Oct 15 08:14:06 2021 From: ricolin at ricolky.com (Rico Lin) Date: Fri, 15 Oct 2021 16:14:06 +0800 Subject: [Multi-arch SIG][PTG] PTG plan Message-ID: Dear all Next week, Multi-arch SIG will have PTG at: 10/19 Tuesday 07 - 08 UTC time 10/19 Tuesday 14 -15 UTC time SIG have been low activity for months and we need more volunteers to join. Please sign up for PTG if you're interested. Also feel free to suggest topics PTG etherpad: https://etherpad.opendev.org/p/oct2021-ptg-multi-arch *Rico Lin* OIF Individual Board of directors, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricolin at ricolky.com Fri Oct 15 08:16:33 2021 From: ricolin at ricolky.com (Rico Lin) Date: Fri, 15 Oct 2021 16:16:33 +0800 Subject: [Heat][PTG] PTG plan Message-ID: Dear all Apologize for the late notice Next week Heat team will have PTG schedule at: Monday 14 -15 UTC Feel free to join and suggest topic to https://etherpad.opendev.org/p/oct2021-ptg-heat *Rico Lin* OIF Individual Board of directors, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yasufum.o at gmail.com Fri Oct 15 09:17:15 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Fri, 15 Oct 2021 18:17:15 +0900 Subject: [tacker][ptg] Yoga vPTG planning In-Reply-To: <57d5eecf-c468-5bc5-169e-adcd502a0896@gmail.com> References: <57d5eecf-c468-5bc5-169e-adcd502a0896@gmail.com> Message-ID: <4f9ffaa7-423e-5c75-ba86-8ed6864b2d07@gmail.com> Hi tacker team, As a reminder, we'll have PTG next week. Please check etherpad for the details[1]. You can find each link of meeting room at [2]. [1] https://etherpad.opendev.org/p/tacker-yoga-ptg [2] https://ptg.opendev.org/ptg.html Thanks, Yasufumi On 2021/07/19 1:41, yasufum wrote: > Hi everyone, > > The next vPTG will be held on 18-22 October as I shared at the previous > IRC meeting [1]. Registration has already opened [2]. We've decided to > have the next vPTG session on the same timeslots, 6-8 UTC, as previous > for most of us join from India and APAC regions. > > I've prepared etherpad for the next vPTG [3]. If you have any > suggestion, please add your topic on it. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023540.html > > [2] https://openinfra-ptg.eventbrite.com/ > [3] https://etherpad.opendev.org/p/tacker-yoga-ptg > > Thanks, > Yasufumi From mnasiadka at gmail.com Fri Oct 15 09:40:59 2021 From: mnasiadka at gmail.com (=?utf-8?Q?Micha=C5=82_Nasiadka?=) Date: Fri, 15 Oct 2021 11:40:59 +0200 Subject: [kolla] Cancelling next weeks meeting (20 Oct 2021) Message-ID: Hello koalas, Since next week is PTG - I?m cancelling the meeting on 20th Oct 2021. Best regards, Michal From amonster369 at gmail.com Fri Oct 15 10:51:49 2021 From: amonster369 at gmail.com (A Monster) Date: Fri, 15 Oct 2021 11:51:49 +0100 Subject: How to use hosts with no storage disks Message-ID: In Openstack, is it possible to create compute nodes with no hard drives and use PXE in order to boot the host's system and therefore launch instances with no local drive which is needed to boot the VM's image. If not, what's the minimum storage needed to be given to hosts in order to get a fully functional system. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Fri Oct 15 12:18:24 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Fri, 15 Oct 2021 14:18:24 +0200 Subject: [telemetry]Yoga vPTG Message-ID: <8FA00180-BE4D-4A80-A5B3-916B41FC996B@matthias-runge.de> Hello there, Next week, we?ll have PTG. There will be a telemetry session on Tuesday from 4pm to 5pm UTC. The planning ether pad used is at https://etherpad.opendev.org/p/telemetry-yoga-ptg Matthias From fungi at yuggoth.org Fri Oct 15 12:52:27 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Oct 2021 12:52:27 +0000 Subject: [ops] How to use hosts with no storage disks In-Reply-To: References: Message-ID: <20211015125226.jkp6b53nzzypabnc@yuggoth.org> On 2021-10-15 11:51:49 +0100 (+0100), A Monster wrote: > In Openstack, is it possible to create compute nodes with no hard > drives and use PXE in order to boot the host's system [...] This question is outside the scope of OpenStack itself, unless you're using another OpenStack deployment to manage the physical servers (for example TripleO has an "undercloud" which uses Ironic to manage the servers which then comprise the "overcloud" presented to users). OpenStack's services start on already booted servers, so you can in theory use any mechanism you like, including PXEboot, to boot those physical servers. 
I understand OpenStack Ironic is a great solution to this problem though, and can be set up entirely stand-alone with its Bifrost installer. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Fri Oct 15 13:57:50 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 15 Oct 2021 09:57:50 -0400 Subject: [cinder][PTG] yoga PTG schedule Message-ID: <277c3ab8-24b1-f931-4d74-d70bac8297be@gmail.com> The Cinder project team will be meeting on Tuesday 19 October through Friday 22 October from 1300-1700 UTC. For the most part, the scheduling is flexible, and we'll discuss topics roughly in the order given on the etherpad: https://etherpad.opendev.org/p/yoga-ptg-cinder We'll try to keep the "Currently at the PTG" page updated, but you know how that goes. https://ptg.opendev.org/ptg.html For your scheduling convenience, here's an outline of the schedule in a spreadsheet: https://ethercalc.openstack.org/wfno9g46fa7p - topics in red: cross-project, so the times for those are accurate - topics in blue: participatory activities for the team - topics in green: happy hour We'll be meeting in BlueJeans (except for the happy hour, which will be in meetpad). The sessions (except for happy hour) will be recorded. Connection info is on the etherpad: https://etherpad.opendev.org/p/yoga-ptg-cinder All OpenStack community members are welcome (especially for the happy hour). Looking forward to seeing everyone next week! brian From amy at demarco.com Fri Oct 15 14:04:28 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 15 Oct 2021 07:04:28 -0700 Subject: [Diversity] [PTG] Diversity nd Inclusion Session at PTG Message-ID: Hi Everyone, The Diversity and Inclusion WG will be meeting during the PTG on Monday October 18th at 14:00 UTC in the Diablo room. We welcome all Open Infrastructure project to attend and we would be happy to assist you with any questions you might have in regards to the Inclusive Naming initiative that OIF began last year, We plan to do a review of the current CoC or going through the current code base for the projects to find instances where changes are needed to provide both examples and patches where appropriate to assist in these endeavours. The activities we focus on will be determined by attendance during the session, Thanks, Amy (spotz) on behalf of the Diversity and Inclusion WG -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Fri Oct 15 14:32:15 2021 From: james.slagle at gmail.com (James Slagle) Date: Fri, 15 Oct 2021 10:32:15 -0400 Subject: [TripleO] Hackfest at the PTG on Thursday Message-ID: Hi TripleO, As previously mentioned, we're going to have a hackfest on Thursday next week during the PTG from 1300-1700UTC. The topic will be directord+task-core -- the proposed new task execution engine for TripleO. I've prepared an etherpad for the hackfest ahead of time: https://etherpad.opendev.org/p/tripleo-directord-hackfest There are details in the etherpad about how to set up 2 nodes for the hackfest. Virtual machines would work great for this, on any platform. I've run through it on a private OpenStack cloud, and also just using local libvirt. It would be good to get those setup ahead of the hackfest if you have time between now and Thursday. I'm looking forward to Thursday and some informal hacking! 
-- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Fri Oct 15 14:55:22 2021 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 15 Oct 2021 10:55:22 -0400 Subject: OpenStack Xena for Ubuntu 21.10 and Ubuntu 20.04 LTS Message-ID: The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Xena on Ubuntu 21.10 (Impish Indri) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Xena release can be found at: https://www.openstack.org/software/xena To get access to the Ubuntu Xena packages: == Ubuntu 21.10 == OpenStack Xena is available by default for installation on Ubuntu 21.10. == Ubuntu 20.04 LTS == The Ubuntu Cloud Archive for OpenStack Xena can be enabled on Ubuntu 20.04 by running the following command: sudo add-apt-repository cloud-archive:xena The Ubuntu Cloud Archive for Xena includes updates for: aodh, barbican, ceilometer, ceph (16.2.6), cinder, designate, designate-dashboard, dpdk (20.11.3), glance, gnocchi, heat, heat-dashboard, horizon, ironic, ironic-ui, keystone, magnum, magnum-ui, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, openvswitch (2.16.0), ovn (21.09.0), ovn-octavia-provider, placement, sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, vitrage, watcher, watcher-dashboard, zaqar, and zaqar-ui. For a full list of packages and versions, please refer to: https://openstack-ci-reports.ubuntu.com/reports/cloud-archive/xena_versions.html == Known issues == OVN 21.09.0 coming soon: https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1947003 == Reporting bugs == If you have any issues please report bugs using the ?ubuntu-bug? tool to ensure that bugs get logged in the right place in Launchpad: sudo ubuntu-bug nova-conductor Thank you to everyone who contributed to OpenStack Xena! Corey (on behalf of the Ubuntu OpenStack Engineering team) -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Fri Oct 15 08:44:21 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Fri, 15 Oct 2021 09:44:21 +0100 Subject: [wallaby][neutron][ovn] SSL connection to OVN-NB/SB OVSDB In-Reply-To: References: Message-ID: Hi, To configure the OVN Northbound and Southbound databases connection with SSL you need to run: $ ovn-nbctl set-ssl $ ovn-sbctl set-ssl Then, for Neutron you need to set these six configuration options (3 for Northbound and 3 for Southbound): # /etc/neutron/plugins/ml2/ml2_conf.ini [ovn] ovn_sb_ca_cert="" ovn_sb_certificate="" ovn_sb_private_key="" ovn_nb_ca_cert="" ovn_nb_certificate="" ovn_nb_private_key="" And last, configure the OVN metadata agent. Do the same as above at /etc/neutron/neutron_ovn_metadata_agent.ini That should be it! Hope it helps, Lucas On Wed, Oct 13, 2021 at 5:02 PM Faisal Sheikh wrote: > > Hi, > > I am using Openstack Wallaby release with OVN on Ubuntu 20.04. > My environment consists of 2 compute nodes and 1 controller node. 
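(A concrete sketch of the steps Lucas lists, together with the ssl: connection strings Faisal mentions; the certificate and key paths below are assumed placeholders, only the IP and ports come from this thread:

# on the node hosting the OVN NB/SB databases
$ ovn-nbctl set-ssl /etc/ovn/ovn-privkey.pem /etc/ovn/ovn-cert.pem /etc/ovn/cacert.pem
$ ovn-sbctl set-ssl /etc/ovn/ovn-privkey.pem /etc/ovn/ovn-cert.pem /etc/ovn/cacert.pem

# /etc/neutron/plugins/ml2/ml2_conf.ini (and the same [ovn] keys in
# /etc/neutron/neutron_ovn_metadata_agent.ini)
[ovn]
ovn_nb_connection = ssl:172.16.30.46:6641
ovn_sb_connection = ssl:172.16.30.46:6642
ovn_nb_ca_cert = /etc/ovn/cacert.pem
ovn_nb_certificate = /etc/ovn/ovn-cert.pem
ovn_nb_private_key = /etc/ovn/ovn-privkey.pem
ovn_sb_ca_cert = /etc/ovn/cacert.pem
ovn_sb_certificate = /etc/ovn/ovn-cert.pem
ovn_sb_private_key = /etc/ovn/ovn-privkey.pem
)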
> ovs-vswitchd (Open vSwitch) 2.15.0 > Ubuntu Kernel Version: 5.4.0-88-generic > compute node1 172.16.30.1 > compute node2 172.16.30.3 > controller/Network node IP 172.16.30.46 > > I want to configure the ovn southbound and northbound database > to listen on SSL connection. Set a certificate, private key, and CA > certificate on both compute nodes and controller nodes in > /etc/neutron/plugins/ml2/ml2_conf.ini and using string ssl:IP:Port to > connect the southbound/northbound database but I am unable to > establish connection on SSL. It's not connecting to ovsdb-server on > 6641/6642. > Error in the neutron logs is like below: > > 2021-10-12 17:15:27.728 50561 WARNING neutron.quota.resource_registry > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] > security_group_rule is already registered > 2021-10-12 17:15:27.754 50561 WARNING keystonemiddleware.auth_token > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] AuthToken > middleware is set with keystone_authtoken.service_token_roles_required > set to False. This is backwards compatible but deprecated behaviour. > Please set this to True. > 2021-10-12 17:15:27.761 50561 INFO oslo_service.service > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Starting 1 > workers > 2021-10-12 17:15:27.768 50561 INFO neutron.service > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Neutron service > started, listening on 0.0.0.0:9696 > 2021-10-12 17:15:27.776 50561 ERROR ovsdbapp.backend.ovs_idl.idlutils > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unable to open > stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 > 2021-10-12 17:15:27.779 50561 CRITICAL neutron > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unhandled error: > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > 2021-10-12 17:15:27.779 50561 ERROR neutron Traceback (most recent call last): > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/bin/neutron-server", line 10, in > 2021-10-12 17:15:27.779 50561 ERROR neutron sys.exit(main()) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", > line 19, in main > 2021-10-12 17:15:27.779 50561 ERROR neutron > server.boot_server(wsgi_eventlet.eventlet_wsgi_server) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, > in boot_server > 2021-10-12 17:15:27.779 50561 ERROR neutron server_func() > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line > 24, in eventlet_wsgi_server > 2021-10-12 17:15:27.779 50561 ERROR neutron neutron_api = > service.serve_wsgi(service.NeutronApiService) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in > serve_wsgi > 2021-10-12 17:15:27.779 50561 ERROR neutron > registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", > line 60, in publish > 2021-10-12 17:15:27.779 50561 ERROR neutron > _get_callback_manager().publish(resource, event, trigger, > payload=payload) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > 
"/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 149, in publish > 2021-10-12 17:15:27.779 50561 ERROR neutron return > self.notify(resource, event, trigger, payload=payload) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in > _wrapped > 2021-10-12 17:15:27.779 50561 ERROR neutron raise db_exc.RetryRequest(e) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in > __exit__ > 2021-10-12 17:15:27.779 50561 ERROR neutron self.force_reraise() > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in > force_reraise > 2021-10-12 17:15:27.779 50561 ERROR neutron raise self.value > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in > _wrapped > 2021-10-12 17:15:27.779 50561 ERROR neutron return function(*args, **kwargs) > 2021-10-12 17:15:27.779 50561 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 174, in notify > 2021-10-12 17:15:27.779 50561 ERROR neutron raise > exceptions.CallbackFailure(errors=errors) > 2021-10-12 17:15:27.779 50561 ERROR neutron > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > 2021-10-12 17:15:27.779 50561 ERROR neutron > 2021-10-12 17:15:27.783 50572 ERROR ovsdbapp.backend.ovs_idl.idlutils > [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: > Unknown error -1 > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager [-] > Error during notification for > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-373774 > process, after_init: Exception: Could not retrieve schema from > ssl:172.16.30.46:6641 > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > Traceback (most recent call last): > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 197, in _notify_loop > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > callback(resource, event, trigger, **kwargs) > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 294, in post_fork_initialize > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > self._wait_for_pg_drop_event() > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 357, in _wait_for_pg_drop_event > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 136, in nb_schema_helper > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line > 721, in 
__get__ > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > return self.func(owner) > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", > line 102, in schema_helper > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > line 215, in get_schema_helper > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > return create_schema_helper(fetch_schema_json(connection, > schema_name)) > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > line 204, in fetch_schema_json > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > raise Exception("Could not retrieve schema from %s" % connection) > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > Exception: Could not retrieve schema from ssl:172.16.30.46:6641 > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > 2021-10-12 17:15:27.787 50572 INFO neutron.wsgi [-] (50572) wsgi > starting up on http://0.0.0.0:9696 > 2021-10-12 17:15:27.924 50572 INFO oslo_service.service [-] Parent > process has died unexpectedly, exiting > 2021-10-12 17:15:27.925 50572 INFO neutron.wsgi [-] (50572) wsgi > exited, is_accepting=True > 2021-10-12 17:15:29.709 50573 INFO neutron.common.config [-] Logging enabled! > 2021-10-12 17:15:29.710 50573 INFO neutron.common.config [-] > /usr/bin/neutron-server version 18.0.0 > 2021-10-12 17:15:29.712 50573 INFO neutron.common.config [-] Logging enabled! 
> 2021-10-12 17:15:29.713 50573 INFO neutron.common.config [-] > /usr/bin/neutron-server version 18.0.0 > 2021-10-12 17:15:29.899 50573 INFO keyring.backend [-] Loading KWallet > 2021-10-12 17:15:29.904 50573 INFO keyring.backend [-] Loading SecretService > 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading Windows > 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading chainer > 2021-10-12 17:15:29.908 50573 INFO keyring.backend [-] Loading macOS > 2021-10-12 17:15:29.927 50573 INFO neutron.manager [-] Loading core plugin: ml2 > 2021-10-12 17:15:30.355 50573 INFO neutron.plugins.ml2.managers [-] > Configured type driver names: ['flat', 'geneve'] > 2021-10-12 17:15:30.357 50573 INFO > neutron.plugins.ml2.drivers.type_flat [-] Arbitrary flat > physical_network names allowed > 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] > Loaded type driver names: ['flat', 'geneve'] > 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] > Registered types: dict_keys(['flat', 'geneve']) > 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] > Tenant network_types: ['geneve'] > 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] > Configured extension driver names: ['port_security', 'qos'] > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > Loaded extension driver names: ['port_security', 'qos'] > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > Registered extension drivers: ['port_security', 'qos'] > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > Configured mechanism driver names: ['ovn'] > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] > Loaded mechanism driver names: ['ovn'] > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] > Registered mechanism drivers: ['ovn'] > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] No > mechanism drivers provide segment reachability information for agent > scheduling. 
> 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] > Initializing driver for type 'flat' > 2021-10-12 17:15:30.456 50573 INFO > neutron.plugins.ml2.drivers.type_flat [-] ML2 FlatTypeDriver > initialization complete > 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] > Initializing driver for type 'geneve' > 2021-10-12 17:15:30.456 50573 INFO > neutron.plugins.ml2.drivers.type_tunnel [-] geneve ID ranges: [(1, > 65536)] > 2021-10-12 17:15:32.555 50573 INFO neutron.plugins.ml2.managers > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > extension driver 'port_security' > 2021-10-12 17:15:32.555 50573 INFO > neutron.plugins.ml2.extensions.port_security > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] > PortSecurityExtensionDriver initialization complete > 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > extension driver 'qos' > 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > mechanism driver 'ovn' > 2021-10-12 17:15:32.556 50573 INFO > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting > OVNMechanismDriver > 2021-10-12 17:15:32.562 50573 WARNING > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Firewall driver > configuration is ignored > 2021-10-12 17:15:32.586 50573 INFO > neutron.services.logapi.drivers.ovn.driver > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] OVN logging > driver registered > 2021-10-12 17:15:32.588 50573 INFO neutron.plugins.ml2.plugin > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Modular L2 Plugin > initialization complete > 2021-10-12 17:15:32.589 50573 INFO neutron.plugins.ml2.managers > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Got port-security > extension from driver 'port_security' > 2021-10-12 17:15:32.589 50573 INFO neutron.extensions.vlantransparent > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Disabled > vlantransparent extension. 
> 2021-10-12 17:15:32.589 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > ovn-router > 2021-10-12 17:15:32.597 50573 INFO neutron.services.ovn_l3.plugin > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting > OVNL3RouterPlugin > 2021-10-12 17:15:32.597 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > qos > 2021-10-12 17:15:32.600 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > metering > 2021-10-12 17:15:32.603 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > port_forwarding > 2021-10-12 17:15:32.605 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading service > plugin ovn-router, it is required by port_forwarding > 2021-10-12 17:15:32.606 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > segments > 2021-10-12 17:15:32.684 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > auto_allocate > 2021-10-12 17:15:32.685 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > tag > 2021-10-12 17:15:32.687 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > timestamp > 2021-10-12 17:15:32.689 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > network_ip_availability > 2021-10-12 17:15:32.691 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > flavors > 2021-10-12 17:15:32.693 50573 INFO neutron.manager > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > revisions > 2021-10-12 17:15:32.695 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > extension manager. 
> 2021-10-12 17:15:32.696 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > address-group not supported by any of loaded plugins > 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > address-scope > 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > router-admin-state-down-before-update not supported by any of loaded > plugins > 2021-10-12 17:15:32.698 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > agent > 2021-10-12 17:15:32.699 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > agent-resources-synced not supported by any of loaded plugins > 2021-10-12 17:15:32.700 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > allowed-address-pairs > 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > auto-allocated-topology > 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > availability_zone > 2021-10-12 17:15:32.702 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > availability_zone_filter not supported by any of loaded plugins > 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > data-plane-status not supported by any of loaded plugins > 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > default-subnetpools > 2021-10-12 17:15:32.704 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > dhcp_agent_scheduler not supported by any of loaded plugins > 2021-10-12 17:15:32.705 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > dns-integration not supported by any of loaded plugins > 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > dns-domain-ports not supported by any of loaded plugins > 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dvr not > supported by any of loaded plugins > 2021-10-12 17:15:32.707 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > empty-string-filtering not supported by any of loaded plugins > 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > expose-l3-conntrack-helper not supported by any of loaded plugins > 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > expose-port-forwarding-in-fip > 2021-10-12 17:15:32.709 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > external-net > 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > extra_dhcp_opt > 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded 
extension: > extraroute > 2021-10-12 17:15:32.711 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > extraroute-atomic not supported by any of loaded plugins > 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > filter-validation not supported by any of loaded plugins > 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > floating-ip-port-forwarding-description > 2021-10-12 17:15:32.713 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > fip-port-details > 2021-10-12 17:15:32.714 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > flavors > 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > floating-ip-port-forwarding > 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > floatingip-pools not supported by any of loaded plugins > 2021-10-12 17:15:32.716 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > ip_allocation > 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > ip-substring-filtering not supported by any of loaded plugins > 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > l2_adjacency > 2021-10-12 17:15:32.718 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > router > 2021-10-12 17:15:32.719 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > l3-conntrack-helper not supported by any of loaded plugins > 2021-10-12 17:15:32.720 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > ext-gw-mode > 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-ha > not supported by any of loaded plugins > 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > l3-flavors not supported by any of loaded plugins > 2021-10-12 17:15:32.722 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > l3-port-ip-change-not-allowed not supported by any of loaded plugins > 2021-10-12 17:15:32.723 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > l3_agent_scheduler not supported by any of loaded plugins > 2021-10-12 17:15:32.724 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension logging > not supported by any of loaded plugins > 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > metering > 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > metering_source_and_destination_fields > 2021-10-12 17:15:32.726 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > multi-provider > 2021-10-12 
17:15:32.727 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > net-mtu > 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > net-mtu-writable > 2021-10-12 17:15:32.728 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > network_availability_zone > 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > network-ip-availability > 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > network-segment-range not supported by any of loaded plugins > 2021-10-12 17:15:32.730 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > pagination > 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > port-device-profile > 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > port-mac-address-regenerate not supported by any of loaded plugins > 2021-10-12 17:15:32.732 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > port-numa-affinity-policy > 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > port-resource-request > 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > binding > 2021-10-12 17:15:32.734 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > binding-extended not supported by any of loaded plugins > 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > port-security > 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > project-id > 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > provider > 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos > 2021-10-12 17:15:32.737 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-bw-limit-direction > 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-bw-minimum-ingress > 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-default > 2021-10-12 17:15:32.739 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-fip > 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > qos-gateway-ip not supported by any of loaded plugins > 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-port-network-policy > 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions > 
[req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-rule-type-details > 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > qos-rules-alias > 2021-10-12 17:15:32.742 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > quotas > 2021-10-12 17:15:32.743 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > quota_details > 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > rbac-policies > 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > rbac-address-group not supported by any of loaded plugins > 2021-10-12 17:15:32.745 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > rbac-address-scope > 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > rbac-security-groups not supported by any of loaded plugins > 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > rbac-subnetpool not supported by any of loaded plugins > 2021-10-12 17:15:32.747 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > revision-if-match > 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-revisions > 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > router_availability_zone > 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > router-service-type not supported by any of loaded plugins > 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > security-groups-normalized-cidr > 2021-10-12 17:15:32.750 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > port-security-groups-filtering not supported by any of loaded plugins > 2021-10-12 17:15:32.751 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > security-groups-remote-address-group > 2021-10-12 17:15:32.756 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > security-group > 2021-10-12 17:15:32.757 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > segment > 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > segments-peer-subnet-host-routes > 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > service-type > 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > sorting > 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-segment > 2021-10-12 17:15:32.760 50573 INFO 
neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-description > 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > stateful-security-group not supported by any of loaded plugins > 2021-10-12 17:15:32.761 50573 WARNING neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Did not find > expected name "Stdattrs_common" in > /usr/lib/python3/dist-packages/neutron/extensions/stdattrs_common.py > 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > subnet-dns-publish-fixed-ip not supported by any of loaded plugins > 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > subnet_onboard not supported by any of loaded plugins > 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > subnet-segmentid-writable > 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > subnet-service-types not supported by any of loaded plugins > 2021-10-12 17:15:32.764 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > subnet_allocation > 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > subnetpool-prefix-ops not supported by any of loaded plugins > 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > tag-ports-during-bulk-creation not supported by any of loaded plugins > 2021-10-12 17:15:32.766 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-tag > 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > standard-attr-timestamp > 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension trunk > not supported by any of loaded plugins > 2021-10-12 17:15:32.768 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > trunk-details not supported by any of loaded plugins > 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > uplink-status-propagation not supported by any of loaded plugins > 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > vlan-transparent not supported by any of loaded plugins > 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:network > 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:subnet > 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:subnetpool > 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of 
TrackedResource for resource:port > 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:router > 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:floatingip > 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of CountableResource for resource:rbac_policy > 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:security_group > 2021-10-12 17:15:32.779 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:security_group_rule > 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:router > 2021-10-12 17:15:32.781 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] router is already > registered > 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:floatingip > 2021-10-12 17:15:32.782 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] floatingip is > already registered > 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of CountableResource for resource:rbac_policy > 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] rbac_policy is > already registered > 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:security_group > 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] security_group is > already registered > 2021-10-12 17:15:32.784 50573 INFO neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > of TrackedResource for resource:security_group_rule > 2021-10-12 17:15:32.784 50573 WARNING neutron.quota.resource_registry > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] > security_group_rule is already registered > 2021-10-12 17:15:32.810 50573 WARNING keystonemiddleware.auth_token > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] AuthToken > middleware is set with keystone_authtoken.service_token_roles_required > set to False. This is backwards compatible but deprecated behaviour. > Please set this to True. 
> 2021-10-12 17:15:32.816 50573 INFO oslo_service.service > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting 1 > workers > 2021-10-12 17:15:32.824 50573 INFO neutron.service > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Neutron service > started, listening on 0.0.0.0:9696 > 2021-10-12 17:15:32.831 50573 ERROR ovsdbapp.backend.ovs_idl.idlutils > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unable to open > stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 > 2021-10-12 17:15:32.834 50573 CRITICAL neutron > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unhandled error: > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > 2021-10-12 17:15:32.834 50573 ERROR neutron Traceback (most recent call last): > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/bin/neutron-server", line 10, in > 2021-10-12 17:15:32.834 50573 ERROR neutron sys.exit(main()) > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", > line 19, in main > 2021-10-12 17:15:32.834 50573 ERROR neutron > server.boot_server(wsgi_eventlet.eventlet_wsgi_server) > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, > in boot_server > 2021-10-12 17:15:32.834 50573 ERROR neutron server_func() > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line > 24, in eventlet_wsgi_server > 2021-10-12 17:15:32.834 50573 ERROR neutron neutron_api = > service.serve_wsgi(service.NeutronApiService) > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in > serve_wsgi > 2021-10-12 17:15:32.834 50573 ERROR neutron > registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service) > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", > line 60, in publish > 2021-10-12 17:15:32.834 50573 ERROR neutron > _get_callback_manager().publish(resource, event, trigger, > payload=payload) > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 149, in publish > 2021-10-12 17:15:32.834 50573 ERROR neutron return > self.notify(resource, event, trigger, payload=payload) > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in > _wrapped > 2021-10-12 17:15:32.834 50573 ERROR neutron raise db_exc.RetryRequest(e) > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in > __exit__ > 2021-10-12 17:15:32.834 50573 ERROR neutron self.force_reraise() > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in > force_reraise > 2021-10-12 17:15:32.834 50573 ERROR neutron raise self.value > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in > _wrapped > 2021-10-12 17:15:32.834 50573 ERROR neutron return function(*args, **kwargs) > 2021-10-12 17:15:32.834 50573 ERROR neutron File > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 174, in notify > 
2021-10-12 17:15:32.834 50573 ERROR neutron raise > exceptions.CallbackFailure(errors=errors) > 2021-10-12 17:15:32.834 50573 ERROR neutron > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > 2021-10-12 17:15:32.834 50573 ERROR neutron > 2021-10-12 17:15:32.838 50582 ERROR ovsdbapp.backend.ovs_idl.idlutils > [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: > Unknown error -1 > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager [-] > Error during notification for > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-904522 > process, after_init: Exception: Could not retrieve schema from > ssl:172.16.30.46:6641 > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > Traceback (most recent call last): > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > line 197, in _notify_loop > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > callback(resource, event, trigger, **kwargs) > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 294, in post_fork_initialize > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > self._wait_for_pg_drop_event() > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 357, in _wait_for_pg_drop_event > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > line 136, in nb_schema_helper > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line > 721, in __get__ > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > return self.func(owner) > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", > line 102, in schema_helper > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > line 215, in get_schema_helper > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > return create_schema_helper(fetch_schema_json(connection, > schema_name)) > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > line 204, in fetch_schema_json > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > raise Exception("Could not retrieve schema from %s" % connection) > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > Exception: Could 
not retrieve schema from ssl:172.16.30.46:6641 > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > 2021-10-12 17:15:32.842 50582 INFO neutron.wsgi [-] (50582) wsgi > starting up on http://0.0.0.0:9696 > 2021-10-12 17:15:32.961 50582 INFO oslo_service.service [-] Parent > process has died unexpectedly, exiting > 2021-10-12 17:15:32.963 50582 INFO neutron.wsgi [-] (50582) wsgi > exited, is_accepting=True > 2021-10-12 17:15:34.722 50583 INFO neutron.common.config [-] Logging enabled! > > I would really appreciate any input in this regard. > > Best regards, > Faisal Sheikh > From faisalsheikh.cyber at gmail.com Fri Oct 15 09:59:33 2021 From: faisalsheikh.cyber at gmail.com (Faisal Sheikh) Date: Fri, 15 Oct 2021 14:59:33 +0500 Subject: [wallaby][neutron][ovn] SSL connection to OVN-NB/SB OVSDB In-Reply-To: References: Message-ID: Hi Lucas, Thanks for your help. I was missing these two commands. $ ovn-nbctl set-ssl $ ovn-sbctl set-ssl It worked for me and now SSL connection is established with OVN NB/SB. kudos. BR, Muhammad Faisal Sheikh On Fri, Oct 15, 2021 at 1:44 PM Lucas Alvares Gomes wrote: > > Hi, > > To configure the OVN Northbound and Southbound databases connection > with SSL you need to run: > > $ ovn-nbctl set-ssl > $ ovn-sbctl set-ssl > > Then, for Neutron you need to set these six configuration options (3 > for Northbound and 3 for Southbound): > > # /etc/neutron/plugins/ml2/ml2_conf.ini > [ovn] > ovn_sb_ca_cert="" > ovn_sb_certificate="" > ovn_sb_private_key="" > ovn_nb_ca_cert="" > ovn_nb_certificate="" > ovn_nb_private_key="" > > And last, configure the OVN metadata agent. Do the same as above at > /etc/neutron/neutron_ovn_metadata_agent.ini > > That should be it! > > Hope it helps, > Lucas > > > > On Wed, Oct 13, 2021 at 5:02 PM Faisal Sheikh > wrote: > > > > Hi, > > > > I am using Openstack Wallaby release with OVN on Ubuntu 20.04. > > My environment consists of 2 compute nodes and 1 controller node. > > ovs-vswitchd (Open vSwitch) 2.15.0 > > Ubuntu Kernel Version: 5.4.0-88-generic > > compute node1 172.16.30.1 > > compute node2 172.16.30.3 > > controller/Network node IP 172.16.30.46 > > > > I want to configure the ovn southbound and northbound database > > to listen on SSL connection. Set a certificate, private key, and CA > > certificate on both compute nodes and controller nodes in > > /etc/neutron/plugins/ml2/ml2_conf.ini and using string ssl:IP:Port to > > connect the southbound/northbound database but I am unable to > > establish connection on SSL. It's not connecting to ovsdb-server on > > 6641/6642. > > Error in the neutron logs is like below: > > > > 2021-10-12 17:15:27.728 50561 WARNING neutron.quota.resource_registry > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] > > security_group_rule is already registered > > 2021-10-12 17:15:27.754 50561 WARNING keystonemiddleware.auth_token > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] AuthToken > > middleware is set with keystone_authtoken.service_token_roles_required > > set to False. This is backwards compatible but deprecated behaviour. > > Please set this to True. 
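For reference, a minimal sketch of the full commands and client-side options described in the reply above. The certificate and key file paths below are illustrative placeholders, not values from the thread; only the option names, the ssl: connection prefix and the 172.16.30.46 controller address come from the messages themselves.

    # On the host running the OVN NB/SB ovsdb-servers: ovn-nbctl/ovn-sbctl set-ssl
    # take the private key, certificate and CA certificate as positional arguments
    # (paths are examples only).
    ovn-nbctl set-ssl /etc/ovn/ovnnb-privkey.pem /etc/ovn/ovnnb-cert.pem /etc/ovn/cacert.pem
    ovn-sbctl set-ssl /etc/ovn/ovnsb-privkey.pem /etc/ovn/ovnsb-cert.pem /etc/ovn/cacert.pem

    # /etc/neutron/plugins/ml2/ml2_conf.ini: client certificate issued by the same CA,
    # plus the ssl: connection strings mentioned in the report (example paths).
    [ovn]
    ovn_nb_connection = ssl:172.16.30.46:6641
    ovn_sb_connection = ssl:172.16.30.46:6642
    ovn_nb_ca_cert = /etc/neutron/ovn-cacert.pem
    ovn_nb_certificate = /etc/neutron/ovn-cert.pem
    ovn_nb_private_key = /etc/neutron/ovn-privkey.pem
    ovn_sb_ca_cert = /etc/neutron/ovn-cacert.pem
    ovn_sb_certificate = /etc/neutron/ovn-cert.pem
    ovn_sb_private_key = /etc/neutron/ovn-privkey.pem

As noted in the reply, the same six certificate/key options would also be repeated in /etc/neutron/neutron_ovn_metadata_agent.ini for the OVN metadata agent.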
> > 2021-10-12 17:15:27.761 50561 INFO oslo_service.service > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Starting 1 > > workers > > 2021-10-12 17:15:27.768 50561 INFO neutron.service > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Neutron service > > started, listening on 0.0.0.0:9696 > > 2021-10-12 17:15:27.776 50561 ERROR ovsdbapp.backend.ovs_idl.idlutils > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unable to open > > stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 > > 2021-10-12 17:15:27.779 50561 CRITICAL neutron > > [req-94a8acc2-e0ca-4431-8bed-f26864e7740a - - - - -] Unhandled error: > > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 > > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > > 2021-10-12 17:15:27.779 50561 ERROR neutron Traceback (most recent call last): > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/bin/neutron-server", line 10, in > > 2021-10-12 17:15:27.779 50561 ERROR neutron sys.exit(main()) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", > > line 19, in main > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > server.boot_server(wsgi_eventlet.eventlet_wsgi_server) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, > > in boot_server > > 2021-10-12 17:15:27.779 50561 ERROR neutron server_func() > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line > > 24, in eventlet_wsgi_server > > 2021-10-12 17:15:27.779 50561 ERROR neutron neutron_api = > > service.serve_wsgi(service.NeutronApiService) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in > > serve_wsgi > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", > > line 60, in publish > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > _get_callback_manager().publish(resource, event, trigger, > > payload=payload) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 149, in publish > > 2021-10-12 17:15:27.779 50561 ERROR neutron return > > self.notify(resource, event, trigger, payload=payload) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in > > _wrapped > > 2021-10-12 17:15:27.779 50561 ERROR neutron raise db_exc.RetryRequest(e) > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in > > __exit__ > > 2021-10-12 17:15:27.779 50561 ERROR neutron self.force_reraise() > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in > > force_reraise > > 2021-10-12 17:15:27.779 50561 ERROR neutron raise self.value > > 2021-10-12 17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in > > _wrapped > > 2021-10-12 17:15:27.779 50561 ERROR neutron return function(*args, **kwargs) > > 2021-10-12 
17:15:27.779 50561 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 174, in notify > > 2021-10-12 17:15:27.779 50561 ERROR neutron raise > > exceptions.CallbackFailure(errors=errors) > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-373793 > > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > > 2021-10-12 17:15:27.779 50561 ERROR neutron > > 2021-10-12 17:15:27.783 50572 ERROR ovsdbapp.backend.ovs_idl.idlutils > > [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: > > Unknown error -1 > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager [-] > > Error during notification for > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-373774 > > process, after_init: Exception: Could not retrieve schema from > > ssl:172.16.30.46:6641 > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > Traceback (most recent call last): > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 197, in _notify_loop > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > callback(resource, event, trigger, **kwargs) > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 294, in post_fork_initialize > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > self._wait_for_pg_drop_event() > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 357, in _wait_for_pg_drop_event > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 136, in nb_schema_helper > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line > > 721, in __get__ > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > return self.func(owner) > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", > > line 102, in schema_helper > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > > line 215, in get_schema_helper > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > return create_schema_helper(fetch_schema_json(connection, > > schema_name)) > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > > line 
204, in fetch_schema_json > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > raise Exception("Could not retrieve schema from %s" % connection) > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > Exception: Could not retrieve schema from ssl:172.16.30.46:6641 > > 2021-10-12 17:15:27.784 50572 ERROR neutron_lib.callbacks.manager > > 2021-10-12 17:15:27.787 50572 INFO neutron.wsgi [-] (50572) wsgi > > starting up on http://0.0.0.0:9696 > > 2021-10-12 17:15:27.924 50572 INFO oslo_service.service [-] Parent > > process has died unexpectedly, exiting > > 2021-10-12 17:15:27.925 50572 INFO neutron.wsgi [-] (50572) wsgi > > exited, is_accepting=True > > 2021-10-12 17:15:29.709 50573 INFO neutron.common.config [-] Logging enabled! > > 2021-10-12 17:15:29.710 50573 INFO neutron.common.config [-] > > /usr/bin/neutron-server version 18.0.0 > > 2021-10-12 17:15:29.712 50573 INFO neutron.common.config [-] Logging enabled! > > 2021-10-12 17:15:29.713 50573 INFO neutron.common.config [-] > > /usr/bin/neutron-server version 18.0.0 > > 2021-10-12 17:15:29.899 50573 INFO keyring.backend [-] Loading KWallet > > 2021-10-12 17:15:29.904 50573 INFO keyring.backend [-] Loading SecretService > > 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading Windows > > 2021-10-12 17:15:29.907 50573 INFO keyring.backend [-] Loading chainer > > 2021-10-12 17:15:29.908 50573 INFO keyring.backend [-] Loading macOS > > 2021-10-12 17:15:29.927 50573 INFO neutron.manager [-] Loading core plugin: ml2 > > 2021-10-12 17:15:30.355 50573 INFO neutron.plugins.ml2.managers [-] > > Configured type driver names: ['flat', 'geneve'] > > 2021-10-12 17:15:30.357 50573 INFO > > neutron.plugins.ml2.drivers.type_flat [-] Arbitrary flat > > physical_network names allowed > > 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] > > Loaded type driver names: ['flat', 'geneve'] > > 2021-10-12 17:15:30.358 50573 INFO neutron.plugins.ml2.managers [-] > > Registered types: dict_keys(['flat', 'geneve']) > > 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] > > Tenant network_types: ['geneve'] > > 2021-10-12 17:15:30.359 50573 INFO neutron.plugins.ml2.managers [-] > > Configured extension driver names: ['port_security', 'qos'] > > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > > Loaded extension driver names: ['port_security', 'qos'] > > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > > Registered extension drivers: ['port_security', 'qos'] > > 2021-10-12 17:15:30.360 50573 INFO neutron.plugins.ml2.managers [-] > > Configured mechanism driver names: ['ovn'] > > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] > > Loaded mechanism driver names: ['ovn'] > > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] > > Registered mechanism drivers: ['ovn'] > > 2021-10-12 17:15:30.415 50573 INFO neutron.plugins.ml2.managers [-] No > > mechanism drivers provide segment reachability information for agent > > scheduling. 
> > 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] > > Initializing driver for type 'flat' > > 2021-10-12 17:15:30.456 50573 INFO > > neutron.plugins.ml2.drivers.type_flat [-] ML2 FlatTypeDriver > > initialization complete > > 2021-10-12 17:15:30.456 50573 INFO neutron.plugins.ml2.managers [-] > > Initializing driver for type 'geneve' > > 2021-10-12 17:15:30.456 50573 INFO > > neutron.plugins.ml2.drivers.type_tunnel [-] geneve ID ranges: [(1, > > 65536)] > > 2021-10-12 17:15:32.555 50573 INFO neutron.plugins.ml2.managers > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > > extension driver 'port_security' > > 2021-10-12 17:15:32.555 50573 INFO > > neutron.plugins.ml2.extensions.port_security > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] > > PortSecurityExtensionDriver initialization complete > > 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > > extension driver 'qos' > > 2021-10-12 17:15:32.556 50573 INFO neutron.plugins.ml2.managers > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > > mechanism driver 'ovn' > > 2021-10-12 17:15:32.556 50573 INFO > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting > > OVNMechanismDriver > > 2021-10-12 17:15:32.562 50573 WARNING > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Firewall driver > > configuration is ignored > > 2021-10-12 17:15:32.586 50573 INFO > > neutron.services.logapi.drivers.ovn.driver > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] OVN logging > > driver registered > > 2021-10-12 17:15:32.588 50573 INFO neutron.plugins.ml2.plugin > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Modular L2 Plugin > > initialization complete > > 2021-10-12 17:15:32.589 50573 INFO neutron.plugins.ml2.managers > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Got port-security > > extension from driver 'port_security' > > 2021-10-12 17:15:32.589 50573 INFO neutron.extensions.vlantransparent > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Disabled > > vlantransparent extension. 
> > 2021-10-12 17:15:32.589 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > ovn-router > > 2021-10-12 17:15:32.597 50573 INFO neutron.services.ovn_l3.plugin > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting > > OVNL3RouterPlugin > > 2021-10-12 17:15:32.597 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > qos > > 2021-10-12 17:15:32.600 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > metering > > 2021-10-12 17:15:32.603 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > port_forwarding > > 2021-10-12 17:15:32.605 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading service > > plugin ovn-router, it is required by port_forwarding > > 2021-10-12 17:15:32.606 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > segments > > 2021-10-12 17:15:32.684 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > auto_allocate > > 2021-10-12 17:15:32.685 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > tag > > 2021-10-12 17:15:32.687 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > timestamp > > 2021-10-12 17:15:32.689 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > network_ip_availability > > 2021-10-12 17:15:32.691 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > flavors > > 2021-10-12 17:15:32.693 50573 INFO neutron.manager > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loading Plugin: > > revisions > > 2021-10-12 17:15:32.695 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Initializing > > extension manager. 
> > 2021-10-12 17:15:32.696 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > address-group not supported by any of loaded plugins > > 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > address-scope > > 2021-10-12 17:15:32.697 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > router-admin-state-down-before-update not supported by any of loaded > > plugins > > 2021-10-12 17:15:32.698 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > agent > > 2021-10-12 17:15:32.699 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > agent-resources-synced not supported by any of loaded plugins > > 2021-10-12 17:15:32.700 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > allowed-address-pairs > > 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > auto-allocated-topology > > 2021-10-12 17:15:32.701 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > availability_zone > > 2021-10-12 17:15:32.702 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > availability_zone_filter not supported by any of loaded plugins > > 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > data-plane-status not supported by any of loaded plugins > > 2021-10-12 17:15:32.703 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > default-subnetpools > > 2021-10-12 17:15:32.704 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > dhcp_agent_scheduler not supported by any of loaded plugins > > 2021-10-12 17:15:32.705 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > dns-integration not supported by any of loaded plugins > > 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > dns-domain-ports not supported by any of loaded plugins > > 2021-10-12 17:15:32.706 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension dvr not > > supported by any of loaded plugins > > 2021-10-12 17:15:32.707 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > empty-string-filtering not supported by any of loaded plugins > > 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > expose-l3-conntrack-helper not supported by any of loaded plugins > > 2021-10-12 17:15:32.708 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > expose-port-forwarding-in-fip > > 2021-10-12 17:15:32.709 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > external-net > > 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > extra_dhcp_opt > 
> 2021-10-12 17:15:32.710 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > extraroute > > 2021-10-12 17:15:32.711 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > extraroute-atomic not supported by any of loaded plugins > > 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > filter-validation not supported by any of loaded plugins > > 2021-10-12 17:15:32.712 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > floating-ip-port-forwarding-description > > 2021-10-12 17:15:32.713 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > fip-port-details > > 2021-10-12 17:15:32.714 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > flavors > > 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > floating-ip-port-forwarding > > 2021-10-12 17:15:32.715 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > floatingip-pools not supported by any of loaded plugins > > 2021-10-12 17:15:32.716 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > ip_allocation > > 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > ip-substring-filtering not supported by any of loaded plugins > > 2021-10-12 17:15:32.717 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > l2_adjacency > > 2021-10-12 17:15:32.718 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > router > > 2021-10-12 17:15:32.719 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > l3-conntrack-helper not supported by any of loaded plugins > > 2021-10-12 17:15:32.720 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > ext-gw-mode > > 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension l3-ha > > not supported by any of loaded plugins > > 2021-10-12 17:15:32.721 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > l3-flavors not supported by any of loaded plugins > > 2021-10-12 17:15:32.722 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > l3-port-ip-change-not-allowed not supported by any of loaded plugins > > 2021-10-12 17:15:32.723 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > l3_agent_scheduler not supported by any of loaded plugins > > 2021-10-12 17:15:32.724 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension logging > > not supported by any of loaded plugins > > 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > metering > > 2021-10-12 17:15:32.725 50573 INFO neutron.api.extensions > > 
[req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > metering_source_and_destination_fields > > 2021-10-12 17:15:32.726 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > multi-provider > > 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > net-mtu > > 2021-10-12 17:15:32.727 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > net-mtu-writable > > 2021-10-12 17:15:32.728 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > network_availability_zone > > 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > network-ip-availability > > 2021-10-12 17:15:32.729 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > network-segment-range not supported by any of loaded plugins > > 2021-10-12 17:15:32.730 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > pagination > > 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > port-device-profile > > 2021-10-12 17:15:32.731 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > port-mac-address-regenerate not supported by any of loaded plugins > > 2021-10-12 17:15:32.732 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > port-numa-affinity-policy > > 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > port-resource-request > > 2021-10-12 17:15:32.733 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > binding > > 2021-10-12 17:15:32.734 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > binding-extended not supported by any of loaded plugins > > 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > port-security > > 2021-10-12 17:15:32.735 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > project-id > > 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > provider > > 2021-10-12 17:15:32.736 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos > > 2021-10-12 17:15:32.737 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-bw-limit-direction > > 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-bw-minimum-ingress > > 2021-10-12 17:15:32.738 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-default > > 2021-10-12 17:15:32.739 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-fip > > 2021-10-12 17:15:32.740 50573 INFO 
neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > qos-gateway-ip not supported by any of loaded plugins > > 2021-10-12 17:15:32.740 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-port-network-policy > > 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-rule-type-details > > 2021-10-12 17:15:32.741 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > qos-rules-alias > > 2021-10-12 17:15:32.742 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > quotas > > 2021-10-12 17:15:32.743 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > quota_details > > 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > rbac-policies > > 2021-10-12 17:15:32.744 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > rbac-address-group not supported by any of loaded plugins > > 2021-10-12 17:15:32.745 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > rbac-address-scope > > 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > rbac-security-groups not supported by any of loaded plugins > > 2021-10-12 17:15:32.746 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > rbac-subnetpool not supported by any of loaded plugins > > 2021-10-12 17:15:32.747 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > revision-if-match > > 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-revisions > > 2021-10-12 17:15:32.748 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > router_availability_zone > > 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > router-service-type not supported by any of loaded plugins > > 2021-10-12 17:15:32.749 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > security-groups-normalized-cidr > > 2021-10-12 17:15:32.750 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > port-security-groups-filtering not supported by any of loaded plugins > > 2021-10-12 17:15:32.751 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > security-groups-remote-address-group > > 2021-10-12 17:15:32.756 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > security-group > > 2021-10-12 17:15:32.757 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > segment > > 2021-10-12 17:15:32.758 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > segments-peer-subnet-host-routes > > 2021-10-12 
17:15:32.758 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > service-type > > 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > sorting > > 2021-10-12 17:15:32.759 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-segment > > 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-description > > 2021-10-12 17:15:32.760 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > stateful-security-group not supported by any of loaded plugins > > 2021-10-12 17:15:32.761 50573 WARNING neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Did not find > > expected name "Stdattrs_common" in > > /usr/lib/python3/dist-packages/neutron/extensions/stdattrs_common.py > > 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > subnet-dns-publish-fixed-ip not supported by any of loaded plugins > > 2021-10-12 17:15:32.762 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > subnet_onboard not supported by any of loaded plugins > > 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > subnet-segmentid-writable > > 2021-10-12 17:15:32.763 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > subnet-service-types not supported by any of loaded plugins > > 2021-10-12 17:15:32.764 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > subnet_allocation > > 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > subnetpool-prefix-ops not supported by any of loaded plugins > > 2021-10-12 17:15:32.765 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > tag-ports-during-bulk-creation not supported by any of loaded plugins > > 2021-10-12 17:15:32.766 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-tag > > 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Loaded extension: > > standard-attr-timestamp > > 2021-10-12 17:15:32.767 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension trunk > > not supported by any of loaded plugins > > 2021-10-12 17:15:32.768 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > trunk-details not supported by any of loaded plugins > > 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > uplink-status-propagation not supported by any of loaded plugins > > 2021-10-12 17:15:32.769 50573 INFO neutron.api.extensions > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Extension > > vlan-transparent not supported by any of loaded plugins > > 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry > > 
[req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:network > > 2021-10-12 17:15:32.771 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:subnet > > 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:subnetpool > > 2021-10-12 17:15:32.772 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:port > > 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:router > > 2021-10-12 17:15:32.774 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:floatingip > > 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of CountableResource for resource:rbac_policy > > 2021-10-12 17:15:32.778 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:security_group > > 2021-10-12 17:15:32.779 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:security_group_rule > > 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:router > > 2021-10-12 17:15:32.781 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] router is already > > registered > > 2021-10-12 17:15:32.781 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:floatingip > > 2021-10-12 17:15:32.782 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] floatingip is > > already registered > > 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of CountableResource for resource:rbac_policy > > 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] rbac_policy is > > already registered > > 2021-10-12 17:15:32.783 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:security_group > > 2021-10-12 17:15:32.783 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] security_group is > > already registered > > 2021-10-12 17:15:32.784 50573 INFO neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Creating instance > > of TrackedResource for resource:security_group_rule > > 2021-10-12 17:15:32.784 50573 WARNING neutron.quota.resource_registry > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] > > security_group_rule is already registered > > 2021-10-12 17:15:32.810 50573 WARNING 
keystonemiddleware.auth_token > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] AuthToken > > middleware is set with keystone_authtoken.service_token_roles_required > > set to False. This is backwards compatible but deprecated behaviour. > > Please set this to True. > > 2021-10-12 17:15:32.816 50573 INFO oslo_service.service > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Starting 1 > > workers > > 2021-10-12 17:15:32.824 50573 INFO neutron.service > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Neutron service > > started, listening on 0.0.0.0:9696 > > 2021-10-12 17:15:32.831 50573 ERROR ovsdbapp.backend.ovs_idl.idlutils > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unable to open > > stream to ssl:172.16.30.46:6641 to retrieve schema: Unknown error -1 > > 2021-10-12 17:15:32.834 50573 CRITICAL neutron > > [req-06f63d07-b8d8-4c20-aa87-bdf06a3b17f5 - - - - -] Unhandled error: > > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 > > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > > 2021-10-12 17:15:32.834 50573 ERROR neutron Traceback (most recent call last): > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/bin/neutron-server", line 10, in > > 2021-10-12 17:15:32.834 50573 ERROR neutron sys.exit(main()) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/cmd/eventlet/server/__init__.py", > > line 19, in main > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > server.boot_server(wsgi_eventlet.eventlet_wsgi_server) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/server/__init__.py", line 68, > > in boot_server > > 2021-10-12 17:15:32.834 50573 ERROR neutron server_func() > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/server/wsgi_eventlet.py", line > > 24, in eventlet_wsgi_server > > 2021-10-12 17:15:32.834 50573 ERROR neutron neutron_api = > > service.serve_wsgi(service.NeutronApiService) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron/service.py", line 94, in > > serve_wsgi > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > registry.publish(resources.PROCESS, events.BEFORE_SPAWN, service) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/registry.py", > > line 60, in publish > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > _get_callback_manager().publish(resource, event, trigger, > > payload=payload) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 149, in publish > > 2021-10-12 17:15:32.834 50573 ERROR neutron return > > self.notify(resource, event, trigger, payload=payload) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 110, in > > _wrapped > > 2021-10-12 17:15:32.834 50573 ERROR neutron raise db_exc.RetryRequest(e) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in > > __exit__ > > 2021-10-12 17:15:32.834 50573 ERROR neutron self.force_reraise() > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in > > force_reraise > > 2021-10-12 17:15:32.834 50573 
ERROR neutron raise self.value > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/db/utils.py", line 105, in > > _wrapped > > 2021-10-12 17:15:32.834 50573 ERROR neutron return function(*args, **kwargs) > > 2021-10-12 17:15:32.834 50573 ERROR neutron File > > "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 174, in notify > > 2021-10-12 17:15:32.834 50573 ERROR neutron raise > > exceptions.CallbackFailure(errors=errors) > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > neutron_lib.callbacks.exceptions.CallbackFailure: Callback > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-904549 > > failed with "Could not retrieve schema from ssl:172.16.30.46:6641" > > 2021-10-12 17:15:32.834 50573 ERROR neutron > > 2021-10-12 17:15:32.838 50582 ERROR ovsdbapp.backend.ovs_idl.idlutils > > [-] Unable to open stream to ssl:172.16.30.46:6641 to retrieve schema: > > Unknown error -1 > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager [-] > > Error during notification for > > neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-904522 > > process, after_init: Exception: Could not retrieve schema from > > ssl:172.16.30.46:6641 > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > Traceback (most recent call last): > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", > > line 197, in _notify_loop > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > callback(resource, event, trigger, **kwargs) > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 294, in post_fork_initialize > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > self._wait_for_pg_drop_event() > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 357, in _wait_for_pg_drop_event > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > ovn_conf.get_ovn_nb_connection(), self.nb_schema_helper, self, > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py", > > line 136, in nb_schema_helper > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > return impl_idl_ovn.OvsdbNbOvnIdl.schema_helper > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/common/utils.py", line > > 721, in __get__ > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > return self.func(owner) > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py", > > line 102, in schema_helper > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > cls._schema_helper = idlutils.get_schema_helper(cls.connection_string, > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > > line 215, in get_schema_helper > > 2021-10-12 17:15:32.840 50582 ERROR 
neutron_lib.callbacks.manager > > return create_schema_helper(fetch_schema_json(connection, > > schema_name)) > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > File "/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > > line 204, in fetch_schema_json > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > raise Exception("Could not retrieve schema from %s" % connection) > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > Exception: Could not retrieve schema from ssl:172.16.30.46:6641 > > 2021-10-12 17:15:32.840 50582 ERROR neutron_lib.callbacks.manager > > 2021-10-12 17:15:32.842 50582 INFO neutron.wsgi [-] (50582) wsgi > > starting up on http://0.0.0.0:9696 > > 2021-10-12 17:15:32.961 50582 INFO oslo_service.service [-] Parent > > process has died unexpectedly, exiting > > 2021-10-12 17:15:32.963 50582 INFO neutron.wsgi [-] (50582) wsgi > > exited, is_accepting=True > > 2021-10-12 17:15:34.722 50583 INFO neutron.common.config [-] Logging enabled! > > > > I would really appreciate any input in this regard. > > > > Best regards, > > Faisal Sheikh > > From gustavofaganello.santos at windriver.com Fri Oct 15 15:23:07 2021 From: gustavofaganello.santos at windriver.com (Gustavo Faganello Santos) Date: Fri, 15 Oct 2021 12:23:07 -0300 Subject: [nova][dev] Reattaching mediated devices to instance coming back from suspended state In-Reply-To: <2940f202-d632-c8f1-a0ed-d4473a9fc9c6@gmail.com> References: <2940f202-d632-c8f1-a0ed-d4473a9fc9c6@gmail.com> Message-ID: <38d1c208-a522-6e17-a469-1c069c04051a@windriver.com> On 14/10/2021 16:02, melanie witt wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > On Thu Oct 14 2021 11:37:43 GMT-0700 (Pacific Daylight Time), Gustavo > Faganello Santos wrote: >> Hello, everyone! >> >> I'm working on a solution for Nova to reattach previously used mediated >> devices (vGPU instances, in my case) to VMs coming back from suspension, >> which seems to have been left on hold in the past [1] because of an old >> libvirt limitation, and I'm having a bit of a hard time doing so, since >> I'm not too familiar with the repo. >> >> I have tried creating a function that does the opposite of the mdev >> detach function, but the get_all_devices method seems to return an empty >> list when looking for mdevs at the moment of resuming the VM. Looking at >> the instance's XML file, I noticed that the mdev property remains while >> the VM is suspended, but it disappears AFTER the whole resume function >> is executed. I'm failing to understand why the mdev list returns empty, >> even though the mdev property exists in the instance's XML, and also why >> the mdev is removed from the XML after the resume function is executed. >> >> With that in mind, does anyone know if there's been any attempt to solve >> this issue since it was left on hold? If not, is there anything I should >> know while I attempt to do so? > > I'm not sure whether this will be helpful but there is similar (or > adjacent?) work currently in progress to handle the case of recreating > mediated devices after a compute host reboot [2][3]. The launchpad bug > contains some info on workarounds for this case and the proposed patch > pulls allocation information from the placement service to recreate the > mdevs. Thank you for your reply! I'm aware of that work, but I'm afraid that it unfortunately does not relate too much to what I'm going for. 
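For anyone poking at this outside of Nova, a rough sketch of what re-attaching
a mediated device through plain python-libvirt can look like is below; the
domain name and mdev UUID are placeholders for illustration, not values Nova
exposes under these names:

# Sketch only: re-attach an mdev (e.g. a vGPU) to a resumed domain.
import libvirt

MDEV_HOSTDEV_XML = """
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
  <source>
    <address uuid='4b20d080-1b54-4048-85b3-a6a62d165c01'/>
  </source>
</hostdev>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # placeholder domain name
# Attach to the live domain and keep it in the persistent config as well.
dom.attachDeviceFlags(
    MDEV_HOSTDEV_XML,
    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)

Whether something like this helps depends on where the mdev UUID can still be
read from at resume time, which seems to be exactly the open question above.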
>
> -melanie
>
> [2] https://bugs.launchpad.net/nova/+bug/1900800
> [3] https://review.opendev.org/c/openstack/nova/+/810220
>
>> Thanks in advance.
>> Gustavo
>>
>> [1]
>> https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L8007
>>
>>
>

From iurygregory at gmail.com Fri Oct 15 15:24:39 2021
From: iurygregory at gmail.com (Iury Gregory)
Date: Fri, 15 Oct 2021 17:24:39 +0200
Subject: [ironic] Yoga PTG schedule
In-Reply-To: 
References: 
Message-ID: 

Hello ironicers,

We have some changes in our schedule:

*Monday (18 Oct) - Room Juno 15:00 - 17:00 UTC*
* Support OpenBMC
* Persistent memory Support
* Redfish Host Connection Interface
* Boot from Volume + UEFI

*Tuesday (19 Oct) - Room Juno 14:00 - 17:00 UTC*
* The rise of composable hardware, again
* Self-configuring Ironic Service + Eliminate manual commands
* Is there any way we can drive a co-operative use mode of ironic amongst
some of the users?

*Wednesday (20 Oct) - Room Juno 14:00 - 16:00 UTC*
* Main operator areas of interest for improvement - documentation /
graphical console support / performance resource tracker benchmarking /
nova integration
* Bulk operations
* Prioritize 3rd party CI in a box

*Thursday (21 Oct) - Room Kilo 14:00 - 16:00 UTC*
* Secure RBAC items in Yoga
* having to go look at logs is an antipattern
* pxe-grub

*Friday (22 Oct) - Room Kilo 14:00 - 16:00 UTC*
* Remove instance (non-BFV, non-ramdisk) networking booting
* Direct SDN Integrations

The new schedule is already available in the etherpad [1]

[1] https://etherpad.opendev.org/p/ironic-yoga-ptg

On Fri, Oct 8, 2021 at 17:57, Iury Gregory wrote:

> Hello Ironicers!
>
> In our etherpad [1] we have 18 topics for this PTG and we have a total of
> 11 slots.
> This is the proposed schedule (we will discuss in our upstream meeting on
> Monday).
>
> *Monday (18 Oct) - Room Juno 15:00 - 17:00 UTC*
> * Support OpenBMC
> * Persistent memory Support
> * Redfish Host Connection Interface
> * Boot from Volume + UEFI
>
> *Tuesday (19 Oct) - Room Juno 14:00 - 17:00 UTC*
> * Posting to placement ourselves
> * The rise of composable hardware, again
> * Self-configuring Ironic Service
> * Is there any way we can drive a co-operative use mode of ironic amongst
> some of the users?
>
> *Wednesday (20 Oct) - Room Juno 14:00 - 16:00 UTC*
> * Prioritize 3rd party CI in a box
> * Secure RBAC items in Yoga
> * Bulk operations
>
> *Thursday (21 Oct) - Room Kilo 14:00 - 16:00 UTC*
> * having to go look at logs is an antipattern
> * pxe-grub
> * Remove instance (non-BFV, non-ramdisk) networking booting
> * Direct SDN Integrations
>
> *Friday (22 Oct) - Room Kilo 14:00 - 16:00 UTC*
> * Eliminate manual commands
> * Certificate Management
> * Stopping use of wiki.openstack.org
>
> In case we don't have enough time we can book more slots if the community
> is ok and the slots are available.
We will also have a section in the > etherpad for last-minute topics =) > > [1] https://etherpad.opendev.org/p/ironic-yoga-ptg > > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Part of the ironic-core and puppet-manager-core team in OpenStack* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Oct 15 16:07:29 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 15 Oct 2021 11:07:29 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min Message-ID: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * TC this week IRC meeting held on Oct 14th Thursday. * Most of the meeting discussions are summarized below (Completed or in-progress activities section). Meeting full logs are available @ - https://meetings.opendev.org/meetings/tc/2021/tc.2021-10-14-15.00.log.html * Next week's meeting is cancelled as we are meeting in PTG. We will have the next IRC meeting on Oct 28th, Thursday 15:00 UTC, feel free the topic on agenda[1] by Oct 27th. 2. What we completed this week: ========================= * None in this week. 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ * TC is using the etherpad[2] for Xena cycle working item. We will be checking it in PTG * Current status is: 9 completed, 3 to-be-discussed in PTG, 1 in-progress Open Reviews ----------------- * Six open reviews for ongoing activities[3]. New project 'Skyline' proposal ------------------------------------ * You might be aware of this new dashboard proposal in the previous month's discussion. * A new project 'Skyline: an OpenStack dashboard optimized by UI and UE' is now proposed in governance to be an official OpenStack project[4]. * Skyline team is planning to meet in PTG on Tue, Wed and Thu at 5UTC, please ask your queries or have feedback/discussion with the team next week. Place to maintain the external hosted ELK, E-R, O-H services ------------------------------------------------------------------------- * We had a final discussion or I will say just a status update on this which was mentioned in last week's email summary[5]. * Now onwards, discussion and migration work will be done in TACT SIG (#openstack-infra IRC channel). Add project health check tool ----------------------------------- * No updates on this, we will continue discussing it in PTG for the next steps on this and what to do with TC liaison things. * Meanwhile, we are reviewing Rico proposal on collecting stats tools [6]. Stable Core team process change --------------------------------------- * Current proposal is under review[7]. Feel free to provide early feedback if you have any. Call for 'Technical Writing' SIG Chair/Maintainers ---------------------------------------------------------- * As agreed in last week's TC meeting, we will be moving this SIG work towards TC. 
* TC members have been added to the core members list in the SIG repos. * We will be discussing where to move the training repos/work in PTG. TC tags analysis ------------------- * Operator feedback is asked on open infra newsletter too, and we will continue the discussion in PTG and will take the final decision based on feedback we receive, if any[9]. Complete the policy pop up team ---------------------------------------- * Policy pop team has served its purpose and we have new RBAC as one of the the community-wide goal for the Yoga cycle. * We are marking this popup team as completed[10]. Project updates ------------------- * Retiring js-openstack-lib [11] Yoga release community-wide goal ----------------------------------------- * Please add the possible candidates in this etherpad [12]. * Current status: "Secure RBAC" is selected for Yoga cycle[13]. PTG planning ---------------- * We will be meeting in PTG next week, please check the details in this etherpad [14] * Do not forget to join the TC+community leaders sessions on Monday, Oct 18 15 UTC - 17 UTC. Test support for TLS default: ---------------------------------- * Rico has started a separate email thread over testing with tls-proxy enabled[15], we encourage projects to participate in that testing and help to enable the tls-proxy in gate testing. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[16]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [17] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [18] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://etherpad.opendev.org/p/tc-xena-tracke [3] https://review.opendev.org/q/projects:openstack/governance+status:open [4] https://review.opendev.org/c/openstack/governance/+/814037 [5]http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025251.html [6] https://review.opendev.org/c/openstack/governance/+/810037 [7] https://review.opendev.org/c/openstack/governance/+/810721 [9] https://governance.openstack.org/tc/reference/tags/index.html [10] https://review.opendev.org/c/openstack/governance/+/814186 [11] https://review.opendev.org/c/openstack/governance/+/798540 [12] https://review.opendev.org/c/openstack/governance/+/807163 [13] https://etherpad.opendev.org/p/y-series-goals [14] https://etherpad.opendev.org/p/tc-yoga-ptg [15] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html [16] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [17] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [18] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From laurentfdumont at gmail.com Fri Oct 15 16:31:02 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 15 Oct 2021 12:31:02 -0400 Subject: [ops] How to use hosts with no storage disks In-Reply-To: <20211015125226.jkp6b53nzzypabnc@yuggoth.org> References: <20211015125226.jkp6b53nzzypabnc@yuggoth.org> Message-ID: If we break it down, I'm not sure a VM will be able to boot with no volume/root disk from an image though? I guess you could have the root VM drive all in RAM, but I don't think that Openstack understands that. 
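For what it's worth, a server can already run with no root disk on the
compute node at all by booting from a Cinder volume; a minimal sketch (the
image, flavor and network names are just placeholders):

openstack volume create --image cirros-0.5.2 --size 10 --bootable root-vol
openstack server create --flavor m1.small --volume root-vol \
    --network private diskless-vm

The image is copied into the volume by Cinder, so only the hypervisor OS
itself needs to live somewhere on (or be network-booted by) the compute node.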
On Fri, Oct 15, 2021 at 8:55 AM Jeremy Stanley wrote: > On 2021-10-15 11:51:49 +0100 (+0100), A Monster wrote: > > In Openstack, is it possible to create compute nodes with no hard > > drives and use PXE in order to boot the host's system > [...] > > This question is outside the scope of OpenStack itself, unless > you're using another OpenStack deployment to manage the physical > servers (for example TripleO has an "undercloud" which uses Ironic > to manage the servers which then comprise the "overcloud" presented > to users). OpenStack's services start on already booted servers, so > you can in theory use any mechanism you like, including PXEboot, to > boot those physical servers. I understand OpenStack Ironic is a > great solution to this problem though, and can be set up entirely > stand-alone with its Bifrost installer. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Oct 15 16:56:30 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Oct 2021 16:56:30 +0000 Subject: [ops] How to use hosts with no storage disks In-Reply-To: References: <20211015125226.jkp6b53nzzypabnc@yuggoth.org> Message-ID: <20211015165630.vtx2khqluctlowh5@yuggoth.org> On 2021-10-15 12:31:02 -0400 (-0400), Laurent Dumont wrote: > If we break it down, I'm not sure a VM will be able to boot with > no volume/root disk from an image though? > > I guess you could have the root VM drive all in RAM, but I don't > think that Openstack understands that. [...] Well, the question seemed to be primarily about booting the underlying hardware (compute nodes) over the network. This is actually pretty commonly done, at least for provisioning, but could certainly also be used to get enough of a kernel running to find the root disk over iSCSI or whatever. As for the virtual machines (server instances), you can boot-from-volume and use any sort of remote storage Cinder supports, right? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jpenick at gmail.com Fri Oct 15 17:06:21 2021 From: jpenick at gmail.com (James Penick) Date: Fri, 15 Oct 2021 10:06:21 -0700 Subject: How to use hosts with no storage disks In-Reply-To: References: Message-ID: This is something we've talked about doing at Yahoo some day. There are three separate problems to solve: 1. Diskless booting the compute node off the network. Mechanically this is possible via a number of approaches. You'd have a ramdisk with the necessary components baked in, so once the ramdisk loaded you'd be in the OS. I'm not sure if this can be fully accomplished via Ironic as yet. I'd need to ask an Ironic expert to weigh in. 2. Configuration of the compute node. Either a CI job which is aware of the compute node coming up and pushing configuration via something like Ansible, or perhaps using cloud-init with the necessary pieces loaded into a config-drive image which is provided as a part of the boot process. If we can have Ironic manage diskless booting systems then this would be a solved problem with user data. 3. VM storage could either be "local" via a large ramdisk partition (assuming you have a sufficient quantity of ram in your compute nodes), an NFS share which is mounted to the compute node, or volume backed instances. We were investigating this earlier this year and got stuck on the third problem. 
Local storage via ramdisk isn't really an option for us, since we already pack our compute nodes with a lot of ram, and we need that memory for the instances. NFS has issues with security, since we don't want one giant volume exported to all compute nodes due to security concerns, and a per-compute node export would need to be orchestrated. Volume backed instances seemed ideal, however we ran into some issues there, which are partially related to the block storage product we use. I'm hopeful we'll get back to this next year, a class of instance flavors booted on diskless compute nodes would allow us to offer even more cost-effective options for our customers. -James On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote: > In Openstack, is it possible to create compute nodes with no hard drives > and use PXE in order to boot the host's system and therefore launch > instances with no local drive which is needed to boot the VM's image. > > If not, what's the minimum storage needed to be given to hosts in order to > get a fully functional system. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Oct 15 17:13:08 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 15 Oct 2021 19:13:08 +0200 Subject: How to use hosts with no storage disks In-Reply-To: References: Message-ID: Hi, On Fri, Oct 15, 2021 at 7:10 PM James Penick wrote: > This is something we've talked about doing at Yahoo some day. There are > three separate problems to solve: > > 1. Diskless booting the compute node off the network. Mechanically this is > possible via a number of approaches. You'd have a ramdisk with the > necessary components baked in, so once the ramdisk loaded you'd be in the > OS. I'm not sure if this can be fully accomplished via Ironic as yet. I'd > need to ask an Ironic expert to weigh in. > https://docs.openstack.org/ironic/latest/admin/ramdisk-boot.html Dmitry > 2. Configuration of the compute node. Either a CI job which is aware of > the compute node coming up and pushing configuration via something like > Ansible, or perhaps using cloud-init with the necessary pieces loaded into > a config-drive image which is provided as a part of the boot process. If we > can have Ironic manage diskless booting systems then this would be a solved > problem with user data. > 3. VM storage could either be "local" via a large ramdisk partition > (assuming you have a sufficient quantity of ram in your compute nodes), an > NFS share which is mounted to the compute node, or volume backed instances. > > We were investigating this earlier this year and got stuck on the third > problem. Local storage via ramdisk isn't really an option for us, since we > already pack our compute nodes with a lot of ram, and we need that memory > for the instances. NFS has issues with security, since we don't want one > giant volume exported to all compute nodes due to security concerns, and a > per-compute node export would need to be orchestrated. Volume backed > instances seemed ideal, however we ran into some issues there, which are > partially related to the block storage product we use. I'm hopeful we'll > get back to this next year, a class of instance flavors booted on diskless > compute nodes would allow us to offer even more cost-effective options for > our customers. 
> -James
>
>
> On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote:
>
>> In Openstack, is it possible to create compute nodes with no hard drives
>> and use PXE in order to boot the host's system and therefore launch
>> instances with no local drive which is needed to boot the VM's image.
>>
>> If not, what's the minimum storage needed to be given to hosts in order
>> to get a fully functional system.
>>
>

-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From elod.illes at est.tech Fri Oct 15 17:17:32 2021
From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=)
Date: Fri, 15 Oct 2021 19:17:32 +0200
Subject: [release] Release countdown for week R-24, Oct 11-15
Message-ID: <87c555e5-bb3d-2cc4-3ba1-a07e898fa9db@est.tech>

Welcome back to the release countdown emails! These will be sent at major
points in the Yoga development cycle, which should conclude with a final
release on March 30, 2022.

Development Focus
-----------------
At this stage in the release cycle, focus should be on planning the Yoga
development cycle, assessing Yoga community goals and approving Yoga specs.

General Information
-------------------
Yoga is a 25 weeks long development cycle. In case you haven't seen it yet,
please take a look over the schedule for this release:
https://releases.openstack.org/yoga/schedule.html

By default, the team PTL is responsible for handling the release cycle and
approving release requests. This task can (and probably should) be delegated
to release liaisons. Now is a good time to review release liaison information
for your team and make sure it is up to date:
https://opendev.org/openstack/releases/src/branch/master/data/release_liaisons.yaml

By default, all your team deliverables from the Yoga release are continued in
Yoga with a similar release model.

Upcoming Deadlines & Dates
--------------------------
Yoga PTG: October 18-22
Yoga-1 milestone: November 18, 2021

Előd Illés
irc: elodilles
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From zigo at debian.org Fri Oct 15 17:23:08 2021
From: zigo at debian.org (Thomas Goirand)
Date: Fri, 15 Oct 2021 19:23:08 +0200
Subject: [all][tc] Skyline as a new official project [was: What's happening
 in Technical Committee: summary 15th Oct, 21: Reading: 5 min]
In-Reply-To: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com>
References: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com>
Message-ID: <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org>

On 10/15/21 6:07 PM, Ghanshyam Mann wrote:
> New project 'Skyline' proposal
> ------------------------------------
> * You might be aware of this new dashboard proposal in the previous month's
> discussion.
> * A new project 'Skyline: an OpenStack dashboard optimized by UI and UE' is
> now proposed in governance to be an official OpenStack project[4].
> * Skyline team is planning to meet in PTG on Tue, Wed and Thu at 5UTC, please
> ask your queries or have feedback/discussion with the team next week.

Skyline looks nice. However, looking nice isn't enough. Before it
becomes an OpenStack official component, maybe it should first try to
reach our standard. I'm namely thinking about having a proper setuptool
integration (using PBR?) for example, and starting tagging releases.
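To make the comparison concrete, the usual OpenStack-style packaging is a
thin setup.py driven by PBR plus a declarative setup.cfg, roughly like the
following (the metadata values are illustrative, not Skyline's actual ones):

# setup.py
import setuptools

setuptools.setup(setup_requires=['pbr'], pbr=True)

# setup.cfg
[metadata]
name = skyline-apiserver
summary = API server for the Skyline dashboard
description-file = README.rst
author = OpenStack

[files]
packages =
    skyline_apiserver

PBR then derives the package version from git tags, which is where the
"starting tagging releases" part comes in.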
I'm very much interested in packaging this for Debian/Ubuntu, if it's not a JS dependency hell. Though the current Makefile thingy doesn't look appealing. I've seen the console has at least 40 JS direct dependency. How many indirect dependency is this? Has anyone looked into it? Is the team ready to help making it package-able in a distro policy compliant way? Your thoughts? Cheers, Thomas Goirand (zigo) From amonster369 at gmail.com Fri Oct 15 18:15:54 2021 From: amonster369 at gmail.com (A Monster) Date: Fri, 15 Oct 2021 19:15:54 +0100 Subject: How to use hosts with no storage disks In-Reply-To: References: Message-ID: As far as I know, ironic aims to provision bare metal machines instead of virtual machines, in my case, what I want to accomplish is to boot the host's operating system through network, and then use either a remote disk in which the image service copies the vm's image to, and then boot from that image, or if it's possible, use the ram instead of a disk for that task, and that would allow me to use diskless computer nodes (hosts). On Fri, 15 Oct 2021 at 18:06, James Penick wrote: > This is something we've talked about doing at Yahoo some day. There are > three separate problems to solve: > > 1. Diskless booting the compute node off the network. Mechanically this is > possible via a number of approaches. You'd have a ramdisk with the > necessary components baked in, so once the ramdisk loaded you'd be in the > OS. I'm not sure if this can be fully accomplished via Ironic as yet. I'd > need to ask an Ironic expert to weigh in. > 2. Configuration of the compute node. Either a CI job which is aware of > the compute node coming up and pushing configuration via something like > Ansible, or perhaps using cloud-init with the necessary pieces loaded into > a config-drive image which is provided as a part of the boot process. If we > can have Ironic manage diskless booting systems then this would be a solved > problem with user data. > 3. VM storage could either be "local" via a large ramdisk partition > (assuming you have a sufficient quantity of ram in your compute nodes), an > NFS share which is mounted to the compute node, or volume backed instances. > > We were investigating this earlier this year and got stuck on the third > problem. Local storage via ramdisk isn't really an option for us, since we > already pack our compute nodes with a lot of ram, and we need that memory > for the instances. NFS has issues with security, since we don't want one > giant volume exported to all compute nodes due to security concerns, and a > per-compute node export would need to be orchestrated. Volume backed > instances seemed ideal, however we ran into some issues there, which are > partially related to the block storage product we use. I'm hopeful we'll > get back to this next year, a class of instance flavors booted on diskless > compute nodes would allow us to offer even more cost-effective options for > our customers. > > -James > > > On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote: > >> In Openstack, is it possible to create compute nodes with no hard drives >> and use PXE in order to boot the host's system and therefore launch >> instances with no local drive which is needed to boot the VM's image. >> >> If not, what's the minimum storage needed to be given to hosts in order >> to get a fully functional system. >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jpenick at gmail.com Fri Oct 15 18:18:02 2021 From: jpenick at gmail.com (James Penick) Date: Fri, 15 Oct 2021 11:18:02 -0700 Subject: How to use hosts with no storage disks In-Reply-To: References: Message-ID: You are correct, I meant you would use Ironic to provision the compute node, which Nova would then use to provision VMs. On Fri, Oct 15, 2021 at 11:16 AM A Monster wrote: > As far as I know, ironic aims to provision bare metal machines instead of > virtual machines, in my case, what I want to accomplish is to boot the > host's operating system through network, and then use either a remote disk > in which the image service copies the vm's image to, and then boot from > that image, or if it's possible, use the ram instead of a disk for that > task, and that would allow me to use diskless computer nodes (hosts). > > > > On Fri, 15 Oct 2021 at 18:06, James Penick wrote: > >> This is something we've talked about doing at Yahoo some day. There are >> three separate problems to solve: >> >> 1. Diskless booting the compute node off the network. Mechanically this >> is possible via a number of approaches. You'd have a ramdisk with the >> necessary components baked in, so once the ramdisk loaded you'd be in the >> OS. I'm not sure if this can be fully accomplished via Ironic as yet. I'd >> need to ask an Ironic expert to weigh in. >> 2. Configuration of the compute node. Either a CI job which is aware of >> the compute node coming up and pushing configuration via something like >> Ansible, or perhaps using cloud-init with the necessary pieces loaded into >> a config-drive image which is provided as a part of the boot process. If we >> can have Ironic manage diskless booting systems then this would be a solved >> problem with user data. >> 3. VM storage could either be "local" via a large ramdisk partition >> (assuming you have a sufficient quantity of ram in your compute nodes), an >> NFS share which is mounted to the compute node, or volume backed instances. >> >> We were investigating this earlier this year and got stuck on the third >> problem. Local storage via ramdisk isn't really an option for us, since we >> already pack our compute nodes with a lot of ram, and we need that memory >> for the instances. NFS has issues with security, since we don't want one >> giant volume exported to all compute nodes due to security concerns, and a >> per-compute node export would need to be orchestrated. Volume backed >> instances seemed ideal, however we ran into some issues there, which are >> partially related to the block storage product we use. I'm hopeful we'll >> get back to this next year, a class of instance flavors booted on diskless >> compute nodes would allow us to offer even more cost-effective options for >> our customers. >> >> -James >> >> >> On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote: >> >>> In Openstack, is it possible to create compute nodes with no hard >>> drives and use PXE in order to boot the host's system and therefore launch >>> instances with no local drive which is needed to boot the VM's image. >>> >>> If not, what's the minimum storage needed to be given to hosts in order >>> to get a fully functional system. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From DHilsbos at performair.com Fri Oct 15 20:35:30 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 15 Oct 2021 20:35:30 +0000 Subject: How to use hosts with no storage disks In-Reply-To: References: Message-ID: <0670B960225633449A24709C291A525251CB632A@COM03.performair.local> Issue 3, as laid out below, can be addressed using Ceph RBD. Use it behind cinder & glance, and no local storage is required. Our OpenStack cluster has small OS drives, and doesn't store either volumes or images locally. Thank you, Dominic L. Hilsbos, MBA Vice President ? Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: James Penick [mailto:jpenick at gmail.com] Sent: Friday, October 15, 2021 11:18 AM To: A Monster Cc: openstack-discuss Subject: Re: How to use hosts with no storage disks You are correct, I meant you would use Ironic to provision the compute node, which Nova would then use to provision VMs.? On Fri, Oct 15, 2021 at 11:16 AM A Monster wrote: As far as I know, ironic aims to provision bare metal machines instead of virtual machines, in my case, what I want to accomplish is to boot the host's operating?system through network, and then use either a remote disk in which the image service copies the vm's image to, and then boot from that image, or if it's possible, use the ram instead of a disk for that task, and that would allow me to use diskless computer nodes (hosts). On Fri, 15 Oct 2021 at 18:06, James Penick wrote: This is something we've talked about doing at Yahoo some day. There are three separate problems to solve: 1. Diskless booting the compute node off the network. Mechanically this is possible via a number of approaches. You'd have a ramdisk with the necessary components baked in, so once the ramdisk loaded you'd be in the OS. I'm not sure if this can be fully accomplished via Ironic as yet. I'd need to ask an Ironic expert to weigh in. 2. Configuration of the compute node. Either a CI job which is aware of the compute node coming up and pushing configuration via something like Ansible, or perhaps using cloud-init with the necessary pieces loaded into a config-drive image which is provided as a part of the boot process. If we can have Ironic?manage diskless booting systems then this would be a solved problem with user data. 3. VM storage could either be "local" via a large ramdisk partition (assuming you have a sufficient quantity of ram in your compute nodes), an NFS share which is mounted to the compute node, or volume backed instances. We were investigating this earlier this year and got stuck on the third problem. Local storage via ramdisk isn't really an option for us, since we already pack our compute nodes with a lot of ram, and we need that memory for the instances. NFS has issues with security, since we don't want one giant volume exported to all compute nodes due to security concerns, and a per-compute node export would need to be orchestrated. Volume backed instances seemed ideal, however we ran into some issues there, which are partially related to the block storage product we use. I'm hopeful we'll get back to this next year, a class of instance flavors booted on diskless compute nodes would allow us to offer even more cost-effective options for our customers. 
-James On Fri, Oct 15, 2021 at 3:54 AM A Monster wrote: In?Openstack, is it possible to create compute nodes with no hard drives and use PXE in order to boot the host's system and therefore launch instances with no local drive which is needed to boot the VM's image. If not, what's the minimum storage needed to be given to hosts in order to get a fully functional system. From amonster369 at gmail.com Sat Oct 16 20:44:45 2021 From: amonster369 at gmail.com (A Monster) Date: Sat, 16 Oct 2021 21:44:45 +0100 Subject: The best linux distribution on which to deploy Openstack Message-ID: As a centos 7 user I have many experience using this distribution however centos 7 doesn't support the newest openstack releases ( after train ) and centos 8 will soon lose the support from Redhat since it's EOL is scheduled for 31/12/2021 and the centos stream distributions are upstreams for RHEL therefor is most likely unstable. So which distribution should I use ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Oct 16 22:17:01 2021 From: zigo at debian.org (Thomas Goirand) Date: Sun, 17 Oct 2021 00:17:01 +0200 Subject: The best linux distribution on which to deploy Openstack In-Reply-To: References: Message-ID: <61626810-32c6-e62a-5736-dc56ed82eff8@debian.org> On 10/16/21 10:44 PM, A Monster wrote: > As a centos 7 user I have many experience?using this distribution > however centos 7 doesn't support the newest openstack releases?( after > train ) and centos 8 will soon lose the support from Redhat since it's > EOL is scheduled for 31/12/2021 and the centos stream distributions are > upstreams for RHEL therefor is most likely unstable. > > So which distribution should I use ??? Debian? :) Thomas From Charles.Short at windriver.com Sat Oct 16 23:01:47 2021 From: Charles.Short at windriver.com (Short, Charles) Date: Sat, 16 Oct 2021 23:01:47 +0000 Subject: The best linux distribution on which to deploy Openstack In-Reply-To: References: Message-ID: From: A Monster Sent: Saturday, October 16, 2021 4:45 PM To: openstack-discuss at lists.openstack.org Subject: The best linux distribution on which to deploy Openstack [Please note: This e-mail is from an EXTERNAL e-mail address] As a centos 7 user I have many experience using this distribution however centos 7 doesn't support the newest openstack releases ( after train ) and centos 8 will soon lose the support from Redhat since it's EOL is scheduled for 31/12/2021 and the centos stream distributions are upstreams for RHEL therefor is most likely unstable. So which distribution should I use ? The answer is to use the one your are the most comfortable with. They all do the same thing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sun Oct 17 07:19:43 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 17 Oct 2021 09:19:43 +0200 Subject: The best linux distribution on which to deploy Openstack In-Reply-To: References: Message-ID: On Sat, 16 Oct 2021 at 22:46, A Monster wrote: > > As a centos 7 user I have many experience using this distribution however centos 7 doesn't support the newest openstack releases ( after train ) and centos 8 will soon lose the support from Redhat since it's EOL is scheduled for 31/12/2021 and the centos stream distributions are upstreams for RHEL therefor is most likely unstable. > > So which distribution should I use ? 
Use the one that you are most familiar/comfortable with and that is supported by OpenStack deployment projects. For example, with Kolla Ansible, at the moment, you can choose from CentOS Stream 8, Debian Bullseye and Ubuntu 20.04 (sorted alphabetically; all have equal support). Soon, it will support Rocky Linux 8 as well (and then newer releases as they start coming). Kolla Ansible docs for Xena: https://docs.openstack.org/kolla-ansible/xena/ -yoctozepto From radoslaw.piliszek at gmail.com Sun Oct 17 08:30:06 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 17 Oct 2021 10:30:06 +0200 Subject: The best linux distribution on which to deploy Openstack In-Reply-To: References: Message-ID: On Sun, 17 Oct 2021 at 10:03, A Monster wrote: > > What about Centos 7 , what are the openstack releases that it supports ? You have said that already. Train is the latest release on CentOS 7. -yoctozepto > On Sun, 17 Oct 2021 at 08:19, Rados?aw Piliszek wrote: >> >> On Sat, 16 Oct 2021 at 22:46, A Monster wrote: >> > >> > As a centos 7 user I have many experience using this distribution however centos 7 doesn't support the newest openstack releases ( after train ) and centos 8 will soon lose the support from Redhat since it's EOL is scheduled for 31/12/2021 and the centos stream distributions are upstreams for RHEL therefor is most likely unstable. >> > >> > So which distribution should I use ? >> >> Use the one that you are most familiar/comfortable with and that is >> supported by OpenStack deployment projects. >> >> For example, with Kolla Ansible, at the moment, you can choose from >> CentOS Stream 8, Debian Bullseye and Ubuntu 20.04 (sorted >> alphabetically; all have equal support). >> Soon, it will support Rocky Linux 8 as well (and then newer releases >> as they start coming). >> >> Kolla Ansible docs for Xena: https://docs.openstack.org/kolla-ansible/xena/ >> >> -yoctozepto From seenafallah at gmail.com Sat Oct 16 21:24:27 2021 From: seenafallah at gmail.com (Seena Fallah) Date: Sun, 17 Oct 2021 00:54:27 +0330 Subject: [dev][cinder] snapshot revert to any point Message-ID: Hi, There is a lack of feature to revert to any snapshot point in supported drivers like RBD. I've made a change to support this feature. Can someone please review them? https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/812032 https://review.opendev.org/c/openstack/cinder/+/806807 Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From qiujunting at inspur.com Mon Oct 18 02:23:53 2021 From: qiujunting at inspur.com (=?gb2312?B?SnVudGluZ3FpdSBRaXVqdW50aW5nICjH8b785sMp?=) Date: Mon, 18 Oct 2021 02:23:53 +0000 Subject: [Sahara]Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021. Message-ID: Hi all I'm very sorry: I missed the scheduled sahara PTG meeting time. We tentatively schedule the Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021. Use IRC channel:#openstack-sahara to conduct PTG conferences. My topics are as follows: 1. Sahara supports the creation of cloud hosts by specifying system volumes. 2. Sahara deploys a dedicated cluster through cloud host VM tools (qemu-guest-agent). ???: Juntingqiu Qiujunting (???) ????: 2021?9?24? 18:05 ???: 'jeremyfreudberg at gmail.com' ; Faling Rui (???) 
; 'ltoscano at redhat.com' ??: 'openstack-discuss at lists.openstack.org' ??: [Sahara]Currently about the development of the Sahara community there are some points Hi all: Currently about the development of the Sahara community there are some points as following: 1. About the schedule of the regular meeting of the Sahara project? What is your suggestion? How about the regular meeting time every Wednesday afternoon 15:00 to 16:30? 2. Regarding the Sahara project maintenance switch from StoryBoard to launchpad. https://storyboard.openstack.org/ https://blueprints.launchpad.net/openstack/ The reasons are as follows: 1. OpenSatck core projects are maintained on launchpad, such as nova, cinder, neutron, etc. 2. Most OpenStack contributors are used to working on launchpad. 3. Do you have any suggestions? If you think this is feasible, I will post this content in the Sahara community later. Thank you for your help. Thank you Fossen. --------------------------------- Fossen Qiu | ??? CBRD | ?????????? T: 18249256272 E: qiujunting at inspur.com ???? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 3519 bytes Desc: image001.jpg URL: From gao.hanxiang at 99cloud.net Mon Oct 18 03:22:11 2021 From: gao.hanxiang at 99cloud.net (=?utf-8?B?6auY54Ca57+U?=) Date: Mon, 18 Oct 2021 11:22:11 +0800 Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min] In-Reply-To: <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org> References: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com> <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org> Message-ID: <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net> Skyline-apiserver is a pure Python code project, following the Python wheel packaging standard, using pip for installation, and the dependency management of the project using poetry[1] Skyline-console uses npm for dependency management, development and testing. During the packaging and distribution process, webpack will be used to process the source code and dependent library code first, and output the packaged static resource files. These static resource files will be stored in an empty Python module[2]. The file directory is for example: - skyline_console - __init__.py - __main__.py - static - index.html - some_a.css - some_b.js ... Pack this empty module in Python wheel, and additionally include these static resources as "data_files"[3][4][5], so that it can be distributed like a normal Python package without having to deal with JS dependencies. When deploying with Nginx, when you need to fill in the static resource path, use "python -m skyline_console" to find it. There is a packed skyline packag[6] on "tarballs.opendev.org" for you to preview. [1] https://python-poetry.org/ [2] https://opendev.org/skyline/skyline-console/src/branch/master/Makefile#L73-L77 [3] https://packaging.python.org/guides/distributing-packages-using-setuptools/#data-files [4] https://setuptools.pypa.io/en/latest/deprecated/distutils/setupscript.html#distutils-additional-files [5] https://opendev.org/skyline/skyline-console/src/branch/master/pyproject.toml#L6 [6] https://tarballs.opendev.org/skyline/skyline-apiserver/ > 2021?10?16? 01:23?Thomas Goirand ??? 
> > On 10/15/21 6:07 PM, Ghanshyam Mann wrote: >> New project 'Skyline' proposal >> ------------------------------------ >> * You might be aware of this new dashboard proposal in the previous month's >> discussion. >> * A new project 'Skyline: an OpenStack dashboard optimized by UI and UE' is >> now proposed in governance to be an official OpenStack project[4]. >> * Skyline team is planning to meet in PTG on Tue, Wed and Thu at 5UTC, please >> ask your queries or have feedback/discussion with the team next week. > > Skyline looks nice. However, looking nice isn't enough. Before it > becomes an OpenStack official component, maybe it should first try to > reach our standard. I'm namely thinking about having a proper setuptool > integration (using PBR?) for example, and starting tagging releases. > > I'm very much interested in packaging this for Debian/Ubuntu, if it's > not a JS dependency hell. Though the current Makefile thingy doesn't > look appealing. > > I've seen the console has at least 40 JS direct dependency. How many > indirect dependency is this? Has anyone looked into it? > > Is the team ready to help making it package-able in a distro policy > compliant way? > > Your thoughts? > > Cheers, > > Thomas Goirand (zigo) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Mon Oct 18 06:42:30 2021 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Mon, 18 Oct 2021 12:12:30 +0530 Subject: [Openstack-victoria] [LIBVIRT] Live migration doesn't work, error in libvirt Message-ID: Hi, I am using openstack victoria and i am facing an issue when using the live-migration feature. After choosing the live-migration of an instance from Compute1 to compute2 i am getting an error in nova-compute.log (of compute 1) stating: *ERROR nova.virt.libvirt.driver [-] [instance: 59c95d46-2cbc-4787-89f7-8b36b826ffad] Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+tcp://compute2/system: unable to connect to server at 'compute2:16509': Connection refused: libvirt.libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tcp://compute2/system: unable to connect to server at 'compute2:16509': Connection refused* Which states that the libvirtd.tcp socket is not running (libvirtd should run on 16509 port inorder for live-migration to succeed). Journalctl -xe output *Oct 18 06:32:52 compute2 systemd[1]: libvirtd-tcp.socket: Socket service libvirtd.service already active, refusing.Oct 18 06:32:52 compute2 systemd[1]: Failed to listen on Libvirt non-TLS IP socket.* I am trying to solve the above issue by adding --listen in the libvirtd_opts parameter in the service file and also in the /etc/default/libvirtd file. But after doing that the libvirtd service doesn't start. Can someone suggest a way forward for this? Thank you With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Mon Oct 18 06:51:23 2021 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Mon, 18 Oct 2021 12:21:23 +0530 Subject: [SOLVED] [Openstack-victoria] [LIBVIRT] Live migration doesn't work, error in libvirt In-Reply-To: References: Message-ID: The issue was in the starting order of the service. 
To fix the issue stop the Libvirt service and start the service by issuing the command: systemctl start libvirtd-tcp.socket On Mon, Oct 18, 2021 at 12:12 PM Swogat Pradhan wrote: > Hi, > I am using openstack victoria and i am facing an issue when using the > live-migration feature. > After choosing the live-migration of an instance from Compute1 to compute2 > i am getting an error in nova-compute.log (of compute 1) stating: > > *ERROR nova.virt.libvirt.driver [-] [instance: > 59c95d46-2cbc-4787-89f7-8b36b826ffad] Live Migration failure: operation > failed: Failed to connect to remote libvirt URI qemu+tcp://compute2/system: > unable to connect to server at 'compute2:16509': Connection refused: > libvirt.libvirtError: operation failed: Failed to connect to remote libvirt > URI qemu+tcp://compute2/system: unable to connect to server at > 'compute2:16509': Connection refused* > > Which states that the libvirtd.tcp socket is not running (libvirtd should > run on 16509 port inorder for live-migration to succeed). > > Journalctl -xe output > > *Oct 18 06:32:52 compute2 systemd[1]: libvirtd-tcp.socket: Socket service > libvirtd.service already active, refusing.Oct 18 06:32:52 compute2 > systemd[1]: Failed to listen on Libvirt non-TLS IP socket.* > > I am trying to solve the above issue by adding --listen in the > libvirtd_opts parameter in the service file and also in the > /etc/default/libvirtd file. > But after doing that the libvirtd service doesn't start. > > Can someone suggest a way forward for this? > > Thank you > With regards, > Swogat Pradhan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vikarnatathe at gmail.com Mon Oct 18 06:58:31 2021 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Mon, 18 Oct 2021 12:28:31 +0530 Subject: Openstack magnum Message-ID: Hello All, I am trying to create a kubernetes cluster using magnum. Image: fedora-coreos. The stack gets stucked in CREATE_IN_PROGRESS. See the output below. 
openstack coe cluster list +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ | uuid | name | keypair | node_count | master_count | status | health_status | +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | 2 | 1 | CREATE_IN_PROGRESS | None | +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | attributes | {'refs_map': None, 'removed_rsrc_list': [], 'attributes': None, 'refs': None} | | creation_time | 2021-10-18T06:44:02Z | | description | | | links | [{'href': ' http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', 'rel': 'self'}, {'href': ' http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', 'rel': 'stack'}, {'href': ' http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', 'rel': 'nested'}] | | logical_resource_id | kube_masters | | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 | | required_by | ['kube_cluster_deploy', 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] | | resource_name | kube_masters | | resource_status | CREATE_IN_PROGRESS | | resource_status_reason | state changed | | resource_type | OS::Heat::ResourceGroup | | updated_time | 2021-10-18T06:44:02Z | +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Vikarna -------------- next part 
-------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Mon Oct 18 07:28:44 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Mon, 18 Oct 2021 12:28:44 +0500 Subject: [xena][glance] Upgrade to Xena Shows Error Message-ID: Hi, I am trying to upgrade glance from wallaby to xena. The package upgrade goes successful. Vut When I am doing database upgrade. Its showing me below error. Can you guys please advise on it. su -s /bin/bash glance -c "glance-manage db_upgrade" 2021-10-18 12:23:59.852 20534 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:314 2021-10-18 12:23:59.868 20534 CRITICAL glance [-] Unhandled error: TypeError: argument of type 'NoneType' is not iterable 2021-10-18 12:23:59.868 20534 ERROR glance Traceback (most recent call last): 2021-10-18 12:23:59.868 20534 ERROR glance File "/usr/bin/glance-manage", line 10, in 2021-10-18 12:23:59.868 20534 ERROR glance sys.exit(main()) 2021-10-18 12:23:59.868 20534 ERROR glance File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 557, in main 2021-10-18 12:23:59.868 20534 ERROR glance return CONF.command.action_fn() 2021-10-18 12:23:59.868 20534 ERROR glance File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 385, in upgrade 2021-10-18 12:23:59.868 20534 ERROR glance self.command_object.upgrade(CONF.command.version) 2021-10-18 12:23:59.868 20534 ERROR glance File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 127, in upgrade 2021-10-18 12:23:59.868 20534 ERROR glance self._sync(version) 2021-10-18 12:23:59.868 20534 ERROR glance File "/usr/lib/python3/dist-packages/glance/cmd/manage.py", line 176, in _sync 2021-10-18 12:23:59.868 20534 ERROR glance alembic_command.upgrade(a_config, version) 2021-10-18 12:23:59.868 20534 ERROR glance File "/usr/lib/python3/dist-packages/alembic/command.py", line 277, in upgrade 2021-10-18 12:23:59.868 20534 ERROR glance if ":" in revision: 2021-10-18 12:23:59.868 20534 ERROR glance TypeError: argument of type 'NoneType' is not iterable 2021-10-18 12:23:59.868 20534 ERROR glance However the db_sync was successful and below is the DB version detail. su -s /bin/bash glance -c "glance-manage db_version" 2021-10-18 12:25:14.780 20683 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:314 2021-10-18 12:25:14.783 20683 INFO alembic.runtime.migration [-] Context impl MySQLImpl. 2021-10-18 12:25:14.784 20683 INFO alembic.runtime.migration [-] Will assume non-transactional DDL. wallaby_contract01 su -s /bin/bash glance -c "glance-manage db_sync" 2021-10-18 12:25:25.773 20712 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:314 2021-10-18 12:25:25.776 20712 INFO alembic.runtime.migration [-] Context impl MySQLImpl. 2021-10-18 12:25:25.776 20712 INFO alembic.runtime.migration [-] Will assume non-transactional DDL. Database is up to date. No migrations needed. 
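In case it helps with the triage, the individual rolling-upgrade steps, which as far as I understand db_upgrade is supposed to wrap, are listed below. This is only what I would try next based on the expand/migrate/contract workflow described for glance-manage, so please correct me if the sequence or the command names are not right for Xena:

su -s /bin/bash glance -c "glance-manage db expand"     # add the new tables/columns
su -s /bin/bash glance -c "glance-manage db migrate"    # move data to the new schema
su -s /bin/bash glance -c "glance-manage db contract"   # drop the old schema pieces
su -s /bin/bash glance -c "glance-manage db check"      # report which step is still pending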
-- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Mon Oct 18 08:19:19 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Mon, 18 Oct 2021 10:19:19 +0200 Subject: How to force kolla-ansible to rotate logs with given interval [kolla-ansible] Message-ID: Hi, different services in kolla-ansible have different log rotation policies and I?d like to make the logs easier to maintain and search (tried central logging with Kibana, but somehow I don?t like this solution). So I tried to write common config file for all logs. As I understand all logs should be rotated by cron container, inside of which there?s logrotate.conf file (and as I can see the logs are rotated according to this file). So I?ve copied this file, modified according to my needs and put it ina /etc/kolla/config with the name cron-logrotate-global.conf (as documentation says). And? nothing. I?ve checked permissions of this file - everything seems to be ok, so what?s the problem? Below is my logrotate.conf file Best regards, Adam Tomas cat /etc/kolla/config/cron-logrotate-global.conf daily rotate 31 copytruncate compress delaycompress notifempty missingok minsize 0M maxsize 100M su root kolla "/var/log/kolla/ansible.log" { } "/var/log/kolla/aodh/*.log" { } "/var/log/kolla/barbican/*.log" { } "/var/log/kolla/ceilometer/*.log" { } "/var/log/kolla/chrony/*.log" { } "/var/log/kolla/cinder/*.log" { } "/var/log/kolla/cloudkitty/*.log" { } "/var/log/kolla/designate/*.log" { } "/var/log/kolla/elasticsearch/*.log" { } "/var/log/kolla/fluentd/*.log" { } "/var/log/kolla/glance/*.log" { } "/var/log/kolla/haproxy/haproxy.log" { } "/var/log/kolla/heat/*.log" { } "/var/log/kolla/horizon/*.log" { } "/var/log/kolla/influxdb/*.log" { } "/var/log/kolla/iscsi/iscsi.log" { } "/var/log/kolla/kafka/*.log" { } "/var/log/kolla/keepalived/keepalived.log" { } "/var/log/kolla/keystone/*.log" { } "/var/log/kolla/kibana/*.log" { } "/var/log/kolla/magnum/*.log" { } "/var/log/kolla/mariadb/*.log" { } "/var/log/kolla/masakari/*.log" { } "/var/log/kolla/monasca/*.log" { } "/var/log/kolla/neutron/*.log" { postrotate chmod 644 /var/log/kolla/neutron/*.log endscript } "/var/log/kolla/nova/*.log" { } "/var/log/kolla/octavia/*.log" { } "/var/log/kolla/rabbitmq/*.log" { } "/var/log/kolla/rally/*.log" { } "/var/log/kolla/skydive/*.log" { } "/var/log/kolla/storm/*.log" { } "/var/log/kolla/swift/*.log" { } "/var/log/kolla/vitrage/*.log" { } "/var/log/kolla/zookeeper/*.log" { } From syedammad83 at gmail.com Mon Oct 18 08:32:39 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Mon, 18 Oct 2021 13:32:39 +0500 Subject: Openstack magnum In-Reply-To: References: Message-ID: Hi, Can you check if the master server is deployed as a nova instance ? if yes, then login to the instance and check cloud-init and heat agent logs to see the errors. Ammad On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe wrote: > Hello All, > > I am trying to create a kubernetes cluster using magnum. Image: > fedora-coreos. > > > The stack gets stucked in CREATE_IN_PROGRESS. See the output below. 
> openstack coe cluster list > > +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ > | uuid | name | keypair | > node_count | master_count | status | health_status | > > +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ > | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | > 2 | 1 | CREATE_IN_PROGRESS | None | > > +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ > > openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters > > +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | Field | Value > > > > > > > | > > +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > | attributes | {'refs_map': None, 'removed_rsrc_list': [], > 'attributes': None, 'refs': None} > > > > > > | > | creation_time | 2021-10-18T06:44:02Z > > > > > > > | > | description | > > > > > > > | > | links | [{'href': ' > http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', > 'rel': 'self'}, {'href': ' > http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', > 'rel': 'stack'}, {'href': ' > http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', > 'rel': 'nested'}] | > | logical_resource_id | kube_masters > > > > > > > | > | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 > > > > > > > | > | required_by | ['kube_cluster_deploy', > 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] > > > > > > | > | resource_name | kube_masters > > > > > > > | > | resource_status | CREATE_IN_PROGRESS > > > > > > > | > | resource_status_reason | state changed > > > > > > > | > | resource_type | OS::Heat::ResourceGroup > > > > > > > | > | updated_time | 2021-10-18T06:44:02Z > > > > > > > | > > 
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > > Vikarna > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Mon Oct 18 08:36:49 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 18 Oct 2021 14:06:49 +0530 Subject: [glance] PTL on vacation - weekly meetings update Message-ID: Hi All, I'm starting my vacation from 25th October and will be back on November 15th. Please direct any issues to the rest of the core team. Also there will be no weekly meeting on 28th October and tentative cancellation of 4th November and 11th November unless there is something in the agenda by Tuesday 2nd November and 9th November EOB [1]. [1] https://etherpad.opendev.org/p/glance-team-meeting-agenda Thank you, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From vikarnatathe at gmail.com Mon Oct 18 08:39:08 2021 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Mon, 18 Oct 2021 14:09:08 +0530 Subject: Openstack magnum In-Reply-To: References: Message-ID: > > > Hi Ammad, > > Thanks for responding. > > Yes the instance is getting created, but i am unable to login though i > have generated the keypair. There is no default password for this image to > login via console. > > openstack server list > > +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ > | ID | Name > | Status | Networks | Image | > Flavor | > > +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ > | cf955a75-8cd2-4f91-a01f-677159b57cb2 | > k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, > 10.14.20.181 | fedora-coreos-latest | m1.large | > > > ssh -i id_rsa core at 10.14.20.181 > The authenticity of host '10.14.20.181 (10.14.20.181)' can't be > established. > ECDSA key fingerprint is > SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. > Are you sure you want to continue connecting (yes/no/[fingerprint])? yes > Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known > hosts. > core at 10.14.20.181: Permission denied > (publickey,gssapi-keyex,gssapi-with-mic). > > On Mon, 18 Oct 2021 at 14:02, Ammad Syed wrote: > >> Hi, >> >> Can you check if the master server is deployed as a nova instance ? if >> yes, then login to the instance and check cloud-init and heat agent logs to >> see the errors. >> >> Ammad >> >> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe >> wrote: >> >>> Hello All, >>> >>> I am trying to create a kubernetes cluster using magnum. Image: >>> fedora-coreos. >>> >>> >>> The stack gets stucked in CREATE_IN_PROGRESS. See the output below. 
>>> openstack coe cluster list >>> >>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>> | uuid | name | keypair | >>> node_count | master_count | status | health_status | >>> >>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | >>> 2 | 1 | CREATE_IN_PROGRESS | None | >>> >>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>> >>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters >>> >>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | Field | Value >>> >>> >>> >>> >>> >>> >>> | >>> >>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> | attributes | {'refs_map': None, 'removed_rsrc_list': [], >>> 'attributes': None, 'refs': None} >>> >>> >>> >>> >>> >>> | >>> | creation_time | 2021-10-18T06:44:02Z >>> >>> >>> >>> >>> >>> >>> | >>> | description | >>> >>> >>> >>> >>> >>> >>> | >>> | links | [{'href': ' >>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', >>> 'rel': 'self'}, {'href': ' >>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', >>> 'rel': 'stack'}, {'href': ' >>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', >>> 'rel': 'nested'}] | >>> | logical_resource_id | kube_masters >>> >>> >>> >>> >>> >>> >>> | >>> | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 >>> >>> >>> >>> >>> >>> >>> | >>> | required_by | ['kube_cluster_deploy', >>> 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] >>> >>> >>> >>> >>> >>> | >>> | resource_name | kube_masters >>> >>> >>> >>> >>> >>> >>> | >>> | resource_status | CREATE_IN_PROGRESS >>> >>> >>> >>> >>> >>> >>> | >>> | resource_status_reason | state changed >>> >>> >>> >>> >>> >>> >>> | >>> | resource_type | OS::Heat::ResourceGroup >>> >>> >>> >>> >>> >>> >>> | >>> | updated_time | 2021-10-18T06:44:02Z >>> >>> >>> >>> >>> >>> >>> | >>> >>> 
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>> >>> Vikarna >>> >> >> >> -- >> Regards, >> >> >> Syed Ammad Ali >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Mon Oct 18 09:31:53 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Mon, 18 Oct 2021 12:31:53 +0300 Subject: [OpenStack-Ansible] LXC containers apt upgrade In-Reply-To: <2cce6f95893340dcba81c88e278213b8@elca.ch> References: <2cce6f95893340dcba81c88e278213b8@elca.ch> Message-ID: <1243101634549390@mail.yandex.ru> An HTML attachment was scrubbed... URL: From gao.hanxiang at 99cloud.net Mon Oct 18 10:44:08 2021 From: gao.hanxiang at 99cloud.net (=?UTF-8?B?6auY54Ca57+U?=) Date: Mon, 18 Oct 2021 18:44:08 +0800 (GMT+08:00) Subject: =?UTF-8?B?W3RjXVtob3Jpem9uXVtza3lsaW5lXSBXZWxjb21lIHRvIHRoZSBTa3lsaW5lIFBURw==?= Message-ID: Hi all, Skyline project members will hold their own PTG this week (Tuesday, Wednesday and Thursday at 5 UTC). At present, the skyline project has submitted an application to become an official OpenStack project, and we also welcome more friends to join us. Skyline is an OpenStack dashboard optimized by UI and UE. It has a modern technology stack and ecology, is easier for developers to maintain and operate by users, and has higher concurrency performance. Here are two videos to preview Skyline: - Skyline technical overview[1]. - Skyline dashboard operating demo[2]. Skyline has the following technical advantages: 1. Separation of concerns, front-end focus on functional design and user experience, back-end focus on data logic. 2. Embrace modern browser technology and ecology: React, Ant Design, and Mobx. 3. Most functions directly call OpenStack-API, the call chain is simple, the logic is clearer, and the API responds quickly. 4. Use React component to process rendering, the page display process is fast and smooth, bringing users a better UI and UE experience. At present, Skyline has completed the function development of OpenStack core component, as well as most of the functions of VPNaaS, Octavia and other components. corresponding automated test jobs[3][4] are also integrated on Zuul, and there is good code coverage. Devstack deployment integration has also been completed, and integration of kolla and kolla-ansible will complete pending patch[5][6] after Skyline becomes an official project. Skyline?s next roadmap will be to cover all existing functions of Horizon and complete the page development of other OpenStack components. [1] https://www.youtube.com/watch?v=Ro8tROYKDlE [2] https://www.youtube.com/watch?v=pFAJLwzxv0 [3] https://zuul.opendev.org/t/openstack/project/opendev.org/skyline/skyline-apiserver [4] https://zuul.opendev.org/t/openstack/project/opendev.org/skyline/skyline-console [5] https://review.opendev.org/c/openstack/kolla/+/810796 [6] https://review.opendev.org/c/openstack/kolla-ansible/+/810566 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lucasagomes at gmail.com Mon Oct 18 10:58:23 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 18 Oct 2021 11:58:23 +0100 Subject: [neutron] Bug Deputy Report October 11 - 18 Message-ID: High: * https://bugs.launchpad.net/neutron/+bug/1946588 - "[OVN]Metadata get warn logs after boot instance server about "MetadataServiceReadyWaitTimeoutException"" - Assigned to: hailun huang * https://bugs.launchpad.net/neutron/+bug/1946748 - " [stable/stein] neutron-tempest-plugin jobs fail with "AttributeError: module 'tempest.common.utils' has no attribute 'is_network_feature_enabled'"" - Assigned to: Bernard Cafarelli Medium: * https://bugs.launchpad.net/neutron/+bug/1946589 - "[OVN] localport might not be updated when create multiple subnets for its network" - Unassigned * https://bugs.launchpad.net/neutron/+bug/1946666 - "[ovn] neutron_ovn_db_sync_util crashes (ACL already exists)" - Assigned to: Daniel Speichert * https://bugs.launchpad.net/neutron/+bug/1946713 - "[ovn]Network's availability_zones is empty" - Assigned to: hailun huang * https://bugs.launchpad.net/neutron/+bug/1947334 - "[OVN] Migration to OVN does not create the OVN QoS DB registers" - Assigned to: Rodolfo Alonso * https://bugs.launchpad.net/neutron/+bug/1947366 - "[OVN] Migration to OVN removes "connectivity" parameter from VIF details" - Assigned to: Rodolfo Alonso * https://bugs.launchpad.net/neutron/+bug/1947378 - " [OVN] VIF details "connectivity" parameter is not correctly populated" - Assigned to: Rodolfo Alonso Needs further triage: * https://bugs.launchpad.net/neutron/+bug/1946624 - "OVSDB Error: Transaction causes multiple rows in "Port_Group" table to have identical values" - Marked as Incomplete * https://bugs.launchpad.net/neutron/+bug/1946764 - "[OVN]Any dhcp options which are string type should be escape" * https://bugs.launchpad.net/neutron/+bug/1946781 - " Appropriate way to allocate /64 ipv6 per instance" Cheers, Lucas From fungi at yuggoth.org Mon Oct 18 12:18:18 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 18 Oct 2021 12:18:18 +0000 Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min] In-Reply-To: <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net> References: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com> <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org> <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net> Message-ID: <20211018121818.rerqlp7ek7z3rnya@yuggoth.org> On 2021-10-18 11:22:11 +0800 (+0800), ??? wrote: > Skyline-apiserver is a pure Python code project, following the > Python wheel packaging standard, using pip for installation, and > the dependency management of the project using poetry[1] > > Skyline-console uses npm for dependency management, development > and testing. During the packaging and distribution process, > webpack will be used to process the source code and dependent > library code first, and output the packaged static resource files. > These static resource files will be stored in an empty Python > module[2]. [...] GNU/Linux distributions like Debian are going to want to separately package the original source code for all of these Web components and their dependencies, and recreate them at the time the distro's binary packages are built. I believe the concerns are making it easy for them to find the source for all of it, and to attempt to use dependencies which these distributions already package in order to reduce their workload. 
Further, it helps to make sure the software is capable of using multiple versions of its dependencies when possible, because it's going to be installed into shared environments with other software which may have some of the same dependencies, so may need to be able to agree on common versions they all support. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sbauza at redhat.com Mon Oct 18 14:59:19 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 18 Oct 2021 16:59:19 +0200 Subject: [nova] Yoga PTG schedule Message-ID: Hello folks, Not sure people know about our etherpad for the Yoga PTG. This is this one : https://etherpad.opendev.org/p/nova-yoga-ptg You can see the schedule above but, here is there : PTG Schedule https://www.openstack.org/ptg/#tab_schedule https://ptg.opendev.org/ Fancy rendered PDF with hyperlinks for the schedule: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf Connection details https://meet.jit.si/vPTG-Newton - *Monday*: project support team discussions, e.g. SIGs, QA, Infra, Release mgmt, Oslo - *Tuesday* *13:00 UTC - 17:00 UTC* - Nova (Placement) sessions - 13:00 - 14: 00 UTC Cyborg - Nova cross project mini-session - 14:00 - 14:30 UTC Oslo - Nova cross project mini-session - 15:00 - 16:00 UTC RBAC discussions with popup team - *Wednesday 14:00 UTC - 17:00 UTC*: Nova (Placement) sessions - 14:00 - 15:00 UTC Neutron - Nova cross project mini-session - 15:00 - 15:30 UTC Interop discussion with Arkady - *Thursday 14:00 UTC - 17:00 UTC* - Nova (Placement) sessions - 16:00 - 17:00 UTC Cinder - Nova cross project mini-session - *Friday 14:00 UTC - 17:00 UTC* - Nova (Placement) sessions See you then tomorrow at 1pm UTC ! -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjoen at dds.nl Mon Oct 18 16:38:17 2021 From: tjoen at dds.nl (tjoen) Date: Mon, 18 Oct 2021 18:38:17 +0200 Subject: [xena][glance] Upgrade to Xena Shows Error In-Reply-To: References: Message-ID: <5329490b-9c7f-8c4e-c382-424fcd0de035@dds.nl> On 10/18/21 09:28, Ammad Syed wrote: > I am trying to upgrade glance from wallaby to xena. The package upgrade > goes successful. Vut When I am doing database upgrade. Its showing me below > error. Can you guys please advise on it. > > su -s /bin/bash glance -c "glance-manage db_upgrade" > 2021-10-18 12:23:59.852 20534 DEBUG oslo_db.sqlalchemy.engines [-] MySQL > server mode set to > STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION > _check_effective_sql_mode Not sure if ty was the same error I encountered. I found in my notes that I needed to do # mysql_upgrade -u root -p From ignaziocassano at gmail.com Mon Oct 18 16:54:22 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 18 Oct 2021 18:54:22 +0200 Subject: [openstack][manila] queens netapp share migration Message-ID: Hello all, I have an installation of openstack queens and manila is using netapp fas8040 storage with driver manila.share.drivers.netapp.common.NetAppDriver. When I try share migration it fails. 
manila migration-start --preserve-metadata False --preserve-snapshots False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140 #aggr_fas04_MANILA_TO2_UNITY600_Mixed In the share log file I read: 2021-10-18 18:35:01.211 80999 ERROR manila.share.manager NetAppException: Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in Vserver svmp2-nfs-1138 is not part of any data motion operations. The svmp2-nfs-1138 is the share type where migration start from. Both source and destination are on netapp. Any help, please? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Mon Oct 18 18:37:39 2021 From: rlandy at redhat.com (Ronelle Landy) Date: Mon, 18 Oct 2021 14:37:39 -0400 Subject: [TripleO] Gate blocker - please hold rechecks - tripleo-ci-centos-8-scenario001-standalone Message-ID: Hello All, We have a gate blocker for tripleo at: https://bugs.launchpad.net/tripleo/+bug/1947548 tripleo-ci-centos-8-scenario001-standalone is failing. We are testing some reverts. Please hold rechecks if you are rechecking for this failure. We will update this list when the error is cleared. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Mon Oct 18 18:59:36 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 19 Oct 2021 03:59:36 +0900 Subject: [tacker] Skip weekly IRC meeting Message-ID: <3c9f372f-8597-1be3-0238-6c60c97003df@gmail.com> Hi team, Since we are going to have PTG sessions thorough this week, I'd like to skip IRC meeting on Oct 19. Thanks, Yasufumi From ignaziocassano at gmail.com Mon Oct 18 19:26:08 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 18 Oct 2021 21:26:08 +0200 Subject: [openstack][manila] data-node ? Message-ID: Hello, I need to migrate some share in host assisted mode, but seems I need a data-node. I am using openstack queens on centos 7. How can I install a data-node ? I cannot find any manila packages related to it? Please, anyone can send me some documentation link ? I found only manila-scheduler, manila-api end manila-share services Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Oct 18 19:30:38 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 18 Oct 2021 21:30:38 +0200 Subject: [openstack][manila] data-node ? In-Reply-To: References: Message-ID: PS I found it under systemd but I did nod find any documentation for configuring it. Thanks Ignazio Il giorno lun 18 ott 2021 alle ore 21:26 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello, > I need to migrate some share in host assisted mode, but seems I need a > data-node. > I am using openstack queens on centos 7. > How can I install a data-node ? > I cannot find any manila packages related to it? > Please, anyone can send me some documentation link ? > I found only manila-scheduler, manila-api end manila-share services > Thanks > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From felipefuty01 at gmail.com Mon Oct 18 19:34:59 2021 From: felipefuty01 at gmail.com (Felipe Rodrigues) Date: Mon, 18 Oct 2021 16:34:59 -0300 Subject: [openstack][manila] queens netapp share migration In-Reply-To: References: Message-ID: Hi Ignazio, It seems like a bug, since NetApp driver does not support storage assisted migration across backends (SVMs).. 
We'll check it and open a bug to it. Just a note: there is a bug with the same error opened [1]. It may be the same as yours. Please, check there and mark as affecting you too. [1] https://bugs.launchpad.net/manila/+bug/1723513 Best regards, Felipe. On Mon, Oct 18, 2021 at 1:57 PM Ignazio Cassano wrote: > Hello all, > I have an installation of openstack queens and manila is using netapp > fas8040 storage with driver manila.share.drivers.netapp.common.NetAppDriver. > When I try share migration it fails. > > manila migration-start --preserve-metadata False --preserve-snapshots > False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 > c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140 > #aggr_fas04_MANILA_TO2_UNITY600_Mixed > > > In the share log file I read: > 2021-10-18 18:35:01.211 80999 ERROR manila.share.manager NetAppException: > Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in Vserver svmp2-nfs-1138 > is not part of any data motion operations. > > The svmp2-nfs-1138 is the share type where migration start from. > Both source and destination are on netapp. > Any help, please? > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Oct 18 21:47:07 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 18 Oct 2021 23:47:07 +0200 Subject: [openstack][manila] queens netapp share migration In-Reply-To: References: Message-ID: Yes, Felipe. It is the same bug. I posted my comment. Thanks Ignazio Il Lun 18 Ott 2021, 21:35 Felipe Rodrigues ha scritto: > Hi Ignazio, > > It seems like a bug, since NetApp driver does not support storage assisted > migration across backends (SVMs).. > > We'll check it and open a bug to it. > > Just a note: there is a bug with the same error opened [1]. It may be the > same as yours. Please, check there and mark as affecting you too. > > [1] https://bugs.launchpad.net/manila/+bug/1723513 > > Best regards, Felipe. > > > On Mon, Oct 18, 2021 at 1:57 PM Ignazio Cassano > wrote: > >> Hello all, >> I have an installation of openstack queens and manila is using netapp >> fas8040 storage with driver manila.share.drivers.netapp.common.NetAppDriver. >> When I try share migration it fails. >> >> manila migration-start --preserve-metadata False --preserve-snapshots >> False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 >> c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140 >> #aggr_fas04_MANILA_TO2_UNITY600_Mixed >> >> >> In the share log file I read: >> 2021-10-18 18:35:01.211 80999 ERROR manila.share.manager NetAppException: >> Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in Vserver svmp2-nfs-1138 >> is not part of any data motion operations. >> >> The svmp2-nfs-1138 is the share type where migration start from. >> Both source and destination are on netapp. >> Any help, please? >> Ignazio >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.carden at gmail.com Mon Oct 18 21:34:23 2021 From: mike.carden at gmail.com (Mike Carden) Date: Tue, 19 Oct 2021 08:34:23 +1100 Subject: [tc][horizon][skyline] Welcome to the Skyline PTG In-Reply-To: References: Message-ID: Hi. The video [2] https://www.youtube.com/watch?v=pFAJLwzxv0 is coming up on YouTube as 'Video Unavailable'. -- MC -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gouthampravi at gmail.com Tue Oct 19 00:28:24 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 18 Oct 2021 17:28:24 -0700 Subject: [openstack][manila] data-node ? In-Reply-To: References: Message-ID: On Mon, Oct 18, 2021 at 12:37 PM Ignazio Cassano wrote: > PS > I found it under systemd but I did nod find any documentation for > configuring it. > We've done a poor job of documenting this in our install guide: https://docs.openstack.org/manila/latest/install/ However, https://docs.openstack.org/manila/queens/admin/shared-file-systems-share-migration.html#configuration should speak to the configuration necessary. I've added a tracker for improving the install doc: https://bugs.launchpad.net/manila/+bug/1947644 > Thanks > Ignazio > > Il giorno lun 18 ott 2021 alle ore 21:26 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > >> Hello, >> I need to migrate some share in host assisted mode, but seems I need a >> data-node. >> I am using openstack queens on centos 7. >> How can I install a data-node ? >> I cannot find any manila packages related to it? >> Please, anyone can send me some documentation link ? >> I found only manila-scheduler, manila-api end manila-share services >> Thanks >> Ignazio >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Tue Oct 19 01:10:46 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Tue, 19 Oct 2021 01:10:46 +0000 Subject: [cyborg][ptg] Yoga PTG meeting Message-ID: <607e5faa6b554392897fa8d963dabbff@inspur.com> Hello, As part of the Yoga PTG, the Cyborg team project will meet on Wednesday October 19 , from 6UTC-8UTC,( https://ethercalc.openstack.org/8tum5yl1bx43 report 503 now), but you can join us on #openstack-cyborg channel. We have created an Etherpad to define the agenda: https://etherpad.opendev.org/p/cyborg-yoga-ptg Feel free to add topics you would like to see discussed. Thanks Brin Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Tue Oct 19 01:42:19 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Tue, 19 Oct 2021 01:42:19 +0000 Subject: [nova][cyborg] No meeting today due virtual PTG Message-ID: Hi all, As agreed Cyborg Team with today meeting [1], the meeting is *CANCELLED* as all of us will be attending the virtual PTG today. If you have any idea or feature/issue want to discuss, you can add it to etherpad [1], whether you can join or not, but you should describe its details, we can talk and give a reply. [1] https://etherpad.opendev.org/p/cyborg-yoga-ptg Thanks brinzhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Oct 19 04:19:43 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 19 Oct 2021 06:19:43 +0200 Subject: [openstack][manila] data-node ? In-Reply-To: References: Message-ID: Thanks, I'll check it out. Ignazio Il Mar 19 Ott 2021, 02:28 Goutham Pacha Ravi ha scritto: > > On Mon, Oct 18, 2021 at 12:37 PM Ignazio Cassano > wrote: > >> PS >> I found it under systemd but I did nod find any documentation for >> configuring it. >> > > We've done a poor job of documenting this in our install guide: > https://docs.openstack.org/manila/latest/install/ > > However, > https://docs.openstack.org/manila/queens/admin/shared-file-systems-share-migration.html#configuration > should speak to the configuration necessary. 
> I've added a tracker for improving the install doc: > https://bugs.launchpad.net/manila/+bug/1947644 > > > > >> Thanks >> Ignazio >> >> Il giorno lun 18 ott 2021 alle ore 21:26 Ignazio Cassano < >> ignaziocassano at gmail.com> ha scritto: >> >>> Hello, >>> I need to migrate some share in host assisted mode, but seems I need a >>> data-node. >>> I am using openstack queens on centos 7. >>> How can I install a data-node ? >>> I cannot find any manila packages related to it? >>> Please, anyone can send me some documentation link ? >>> I found only manila-scheduler, manila-api end manila-share services >>> Thanks >>> Ignazio >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Oct 19 05:51:53 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 19 Oct 2021 00:51:53 -0500 Subject: [openstack-helm] No Meeting Oct 19th Message-ID: Hey team, Since this week is the PTG, the meeting for this week is cancelled. We will meet for our session on Wednesday Oct 20th, then resume normal schedule next week. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Oct 19 06:21:18 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 19 Oct 2021 08:21:18 +0200 Subject: [openstack][manila] queens netapp share migration In-Reply-To: References: Message-ID: Hi Felipe, the problem is if it is a bug or if it is not supported by design. The error in the bug you reported is the same I am facing but the bug mentions a situation where controller is busy. Our controller is always very busy. Ignazio Il Lun 18 Ott 2021, 21:35 Felipe Rodrigues ha scritto: > Hi Ignazio, > > It seems like a bug, since NetApp driver does not support storage assisted > migration across backends (SVMs).. > > We'll check it and open a bug to it. > > Just a note: there is a bug with the same error opened [1]. It may be the > same as yours. Please, check there and mark as affecting you too. > > [1] https://bugs.launchpad.net/manila/+bug/1723513 > > Best regards, Felipe. > > > On Mon, Oct 18, 2021 at 1:57 PM Ignazio Cassano > wrote: > >> Hello all, >> I have an installation of openstack queens and manila is using netapp >> fas8040 storage with driver manila.share.drivers.netapp.common.NetAppDriver. >> When I try share migration it fails. >> >> manila migration-start --preserve-metadata False --preserve-snapshots >> False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 >> c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140 >> #aggr_fas04_MANILA_TO2_UNITY600_Mixed >> >> >> In the share log file I read: >> 2021-10-18 18:35:01.211 80999 ERROR manila.share.manager NetAppException: >> Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in Vserver svmp2-nfs-1138 >> is not part of any data motion operations. >> >> The svmp2-nfs-1138 is the share type where migration start from. >> Both source and destination are on netapp. >> Any help, please? >> Ignazio >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Oct 19 06:26:19 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 19 Oct 2021 08:26:19 +0200 Subject: [openstack][manila] data-node ? In-Reply-To: References: Message-ID: Hello, the doc is very poor. The manila-data service is mentioned but there are not configurazion instructions. I think this servirce is important where share driver assisted migration is not supported. 
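For the record, below is the minimal data-node setup I have pieced together so far from the share-migration admin page and from the systemd unit shipped with the distribution packages. The unit name and the option names are only what I could find there and are not yet verified end to end, so please correct me where I am wrong:

# CentOS 7 / RDO: the unit I found is openstack-manila-data (in the base openstack-manila package, I believe)
systemctl enable --now openstack-manila-data

# /etc/manila/manila.conf, [DEFAULT] section, on the host running manila-data:
#     data_node_access_ip = <address of this host; both backends must allow it to mount their exports>
#     mount_tmp_location = /tmp/

# host assisted migration then has to be requested with the disruptive options disabled:
manila migration-start --writable False --nondisruptive False \
    --preserve-metadata False --preserve-snapshots False \
    --force-host-assisted-migration True <share_id> <dest_host#dest_pool>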
Ignazio Il Mar 19 Ott 2021, 02:28 Goutham Pacha Ravi ha scritto: > > On Mon, Oct 18, 2021 at 12:37 PM Ignazio Cassano > wrote: > >> PS >> I found it under systemd but I did nod find any documentation for >> configuring it. >> > > We've done a poor job of documenting this in our install guide: > https://docs.openstack.org/manila/latest/install/ > > However, > https://docs.openstack.org/manila/queens/admin/shared-file-systems-share-migration.html#configuration > should speak to the configuration necessary. > I've added a tracker for improving the install doc: > https://bugs.launchpad.net/manila/+bug/1947644 > > > > >> Thanks >> Ignazio >> >> Il giorno lun 18 ott 2021 alle ore 21:26 Ignazio Cassano < >> ignaziocassano at gmail.com> ha scritto: >> >>> Hello, >>> I need to migrate some share in host assisted mode, but seems I need a >>> data-node. >>> I am using openstack queens on centos 7. >>> How can I install a data-node ? >>> I cannot find any manila packages related to it? >>> Please, anyone can send me some documentation link ? >>> I found only manila-scheduler, manila-api end manila-share services >>> Thanks >>> Ignazio >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL:
From ignaziocassano at gmail.com Tue Oct 19 07:39:28 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 19 Oct 2021 09:39:28 +0200 Subject: [openstack][manila] queens netapp share migration In-Reply-To: References: Message-ID: Hello Felipe, I am now trying this on Stein, because the NetApp attached to this installation is not so busy. I do not know whether it works because the NetApp is less busy or because the OpenStack version is Stein rather than Queens, but the share migration seems to work; the export location, however, is not changed from source to destination. The source SVM has an address on one VLAN and the destination SVM on another VLAN. Why does it not change the export location? Ignazio Il giorno lun 18 ott 2021 alle ore 21:35 Felipe Rodrigues < felipefuty01 at gmail.com> ha scritto: > Hi Ignazio, > > It seems like a bug, since NetApp driver does not support storage assisted > migration across backends (SVMs).. > > We'll check it and open a bug to it. > > Just a note: there is a bug with the same error opened [1]. It may be the > same as yours. Please, check there and mark as affecting you too. > > [1] https://bugs.launchpad.net/manila/+bug/1723513 > > Best regards, Felipe. > > > On Mon, Oct 18, 2021 at 1:57 PM Ignazio Cassano > wrote: > >> Hello all, >> I have an installation of openstack queens and manila is using netapp >> fas8040 storage with driver manila.share.drivers.netapp.common.NetAppDriver. >> When I try share migration it fails. >> >> manila migration-start --preserve-metadata False --preserve-snapshots >> False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 >> c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140 >> #aggr_fas04_MANILA_TO2_UNITY600_Mixed >> >> >> In the share log file I read: >> 2021-10-18 18:35:01.211 80999 ERROR manila.share.manager NetAppException: >> Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in Vserver svmp2-nfs-1138 >> is not part of any data motion operations. >> >> The svmp2-nfs-1138 is the share type where migration start from. >> Both source and destination are on netapp. >> Any help, please? >> Ignazio >> >> -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mark at stackhpc.com Tue Oct 19 07:47:43 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 19 Oct 2021 08:47:43 +0100 Subject: How to force kolla-ansible to rotate logs with given interval [kolla-ansible] In-Reply-To: References: Message-ID: Hi Adam, We don't currently support customising this file via /etc/kolla/config/cron-logrotate-global.conf. Where did you see it documented? It would be possible to support it by adding a with_first_found loop to the cron task in ansible/roles/common/tasks/config.yml. Mark On Mon, 18 Oct 2021 at 09:23, Adam Tomas wrote: > > Hi, > different services in kolla-ansible have different log rotation policies and I?d like to make the logs easier to maintain and search (tried central logging with Kibana, but somehow I don?t like this solution). So I tried to write common config file for all logs. > As I understand all logs should be rotated by cron container, inside of which there?s logrotate.conf file (and as I can see the logs are rotated according to this file). So I?ve copied this file, modified according to my needs and put it ina /etc/kolla/config with the name cron-logrotate-global.conf (as documentation says). And? nothing. I?ve checked permissions of this file - everything seems to be ok, so what?s the problem? Below is my logrotate.conf file > > Best regards, > Adam Tomas > > cat /etc/kolla/config/cron-logrotate-global.conf > > daily > rotate 31 > copytruncate > compress > delaycompress > notifempty > missingok > minsize 0M > maxsize 100M > su root kolla > "/var/log/kolla/ansible.log" > { > } > "/var/log/kolla/aodh/*.log" > { > } > "/var/log/kolla/barbican/*.log" > { > } > "/var/log/kolla/ceilometer/*.log" > { > } > "/var/log/kolla/chrony/*.log" > { > } > "/var/log/kolla/cinder/*.log" > { > } > "/var/log/kolla/cloudkitty/*.log" > { > } > "/var/log/kolla/designate/*.log" > { > } > "/var/log/kolla/elasticsearch/*.log" > { > } > "/var/log/kolla/fluentd/*.log" > { > } > "/var/log/kolla/glance/*.log" > { > } > "/var/log/kolla/haproxy/haproxy.log" > { > } > "/var/log/kolla/heat/*.log" > { > } > "/var/log/kolla/horizon/*.log" > { > } > "/var/log/kolla/influxdb/*.log" > { > } > "/var/log/kolla/iscsi/iscsi.log" > { > } > "/var/log/kolla/kafka/*.log" > { > } > "/var/log/kolla/keepalived/keepalived.log" > { > } > "/var/log/kolla/keystone/*.log" > { > } > "/var/log/kolla/kibana/*.log" > { > } > "/var/log/kolla/magnum/*.log" > { > } > "/var/log/kolla/mariadb/*.log" > { > } > "/var/log/kolla/masakari/*.log" > { > } > "/var/log/kolla/monasca/*.log" > { > } > "/var/log/kolla/neutron/*.log" > { > postrotate > chmod 644 /var/log/kolla/neutron/*.log > endscript > } > "/var/log/kolla/nova/*.log" > { > } > "/var/log/kolla/octavia/*.log" > { > } > "/var/log/kolla/rabbitmq/*.log" > { > } > "/var/log/kolla/rally/*.log" > { > } > "/var/log/kolla/skydive/*.log" > { > } > "/var/log/kolla/storm/*.log" > { > } > "/var/log/kolla/swift/*.log" > { > } > "/var/log/kolla/vitrage/*.log" > { > } > "/var/log/kolla/zookeeper/*.log" > { > } From bkslash at poczta.onet.pl Tue Oct 19 08:40:20 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Tue, 19 Oct 2021 10:40:20 +0200 Subject: How to force kolla-ansible to rotate logs with given interval [kolla-ansible] In-Reply-To: References: Message-ID: <7E3D9425-7CE1-4189-9AA8-FF74DCCD873B@poczta.onet.pl> Hi Mark, thank you for the answer. I?ve changed ansible/roles/common/templates/cron-logrotate-global.conf.j2 (minsize 0M) and ansible/roles/common/defaults/main.yml (changing cron_logrotate_rotation_interval to ?daily? 
and cron_logrotate_rotation_count to ?31") and logrotate.conf inside cron container now has my settings, but? logs still rotates according to default rules (every 6 weeks and only if the log is > than 30M). > Wiadomo?? napisana przez Mark Goddard w dniu 19.10.2021, o godz. 09:47: > > Hi Adam, > > We don't currently support customising this file via > /etc/kolla/config/cron-logrotate-global.conf. Where did you see it > documented? > > It would be possible to support it by adding a with_first_found loop > to the cron task in ansible/roles/common/tasks/config.yml. > how exactly? Best regards, Adam Tomas > Mark > > On Mon, 18 Oct 2021 at 09:23, Adam Tomas wrote: >> >> Hi, >> different services in kolla-ansible have different log rotation policies and I?d like to make the logs easier to maintain and search (tried central logging with Kibana, but somehow I don?t like this solution). So I tried to write common config file for all logs. >> As I understand all logs should be rotated by cron container, inside of which there?s logrotate.conf file (and as I can see the logs are rotated according to this file). So I?ve copied this file, modified according to my needs and put it ina /etc/kolla/config with the name cron-logrotate-global.conf (as documentation says). And? nothing. I?ve checked permissions of this file - everything seems to be ok, so what?s the problem? Below is my logrotate.conf file >> >> Best regards, >> Adam Tomas >> >> cat /etc/kolla/config/cron-logrotate-global.conf >> >> daily >> rotate 31 >> copytruncate >> compress >> delaycompress >> notifempty >> missingok >> minsize 0M >> maxsize 100M >> su root kolla >> "/var/log/kolla/ansible.log" >> { >> } >> "/var/log/kolla/aodh/*.log" >> { >> } >> "/var/log/kolla/barbican/*.log" >> { >> } >> "/var/log/kolla/ceilometer/*.log" >> { >> } >> "/var/log/kolla/chrony/*.log" >> { >> } >> "/var/log/kolla/cinder/*.log" >> { >> } >> "/var/log/kolla/cloudkitty/*.log" >> { >> } >> "/var/log/kolla/designate/*.log" >> { >> } >> "/var/log/kolla/elasticsearch/*.log" >> { >> } >> "/var/log/kolla/fluentd/*.log" >> { >> } >> "/var/log/kolla/glance/*.log" >> { >> } >> "/var/log/kolla/haproxy/haproxy.log" >> { >> } >> "/var/log/kolla/heat/*.log" >> { >> } >> "/var/log/kolla/horizon/*.log" >> { >> } >> "/var/log/kolla/influxdb/*.log" >> { >> } >> "/var/log/kolla/iscsi/iscsi.log" >> { >> } >> "/var/log/kolla/kafka/*.log" >> { >> } >> "/var/log/kolla/keepalived/keepalived.log" >> { >> } >> "/var/log/kolla/keystone/*.log" >> { >> } >> "/var/log/kolla/kibana/*.log" >> { >> } >> "/var/log/kolla/magnum/*.log" >> { >> } >> "/var/log/kolla/mariadb/*.log" >> { >> } >> "/var/log/kolla/masakari/*.log" >> { >> } >> "/var/log/kolla/monasca/*.log" >> { >> } >> "/var/log/kolla/neutron/*.log" >> { >> postrotate >> chmod 644 /var/log/kolla/neutron/*.log >> endscript >> } >> "/var/log/kolla/nova/*.log" >> { >> } >> "/var/log/kolla/octavia/*.log" >> { >> } >> "/var/log/kolla/rabbitmq/*.log" >> { >> } >> "/var/log/kolla/rally/*.log" >> { >> } >> "/var/log/kolla/skydive/*.log" >> { >> } >> "/var/log/kolla/storm/*.log" >> { >> } >> "/var/log/kolla/swift/*.log" >> { >> } >> "/var/log/kolla/vitrage/*.log" >> { >> } >> "/var/log/kolla/zookeeper/*.log" >> { >> } From vikarnatathe at gmail.com Tue Oct 19 09:16:40 2021 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Tue, 19 Oct 2021 14:46:40 +0530 Subject: Openstack magnum In-Reply-To: References: Message-ID: Hi All, I was able to login to the instance. I see that kubelet service is in activating state. 
When I checked the journalctl, found the below. *Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Started Kubelet via Hyperkube (System Container).Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs /sys/fs/cgroup/systemd: no such file or directoryOct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=125/n/aOct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via Hyperkube (System Container).* Executed the below command to fix this issue. *mkdir -p /sys/fs/cgroup/systemd* Now I am getiing the below error. Has anybody seen this issue. *failed to get the kubelet's cgroup: mountpoint for cpu not found. Kubelet system container metrics may be missing.failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. failed to run Kubelet: mountpoint for not found* On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe wrote: > >> Hi Ammad, >> >> Thanks for responding. >> >> Yes the instance is getting created, but i am unable to login though i >> have generated the keypair. There is no default password for this image to >> login via console. >> >> openstack server list >> >> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >> | ID | Name >> | Status | Networks | Image | >> Flavor | >> >> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >> | cf955a75-8cd2-4f91-a01f-677159b57cb2 | >> k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, >> 10.14.20.181 | fedora-coreos-latest | m1.large | >> >> >> ssh -i id_rsa core at 10.14.20.181 >> The authenticity of host '10.14.20.181 (10.14.20.181)' can't be >> established. >> ECDSA key fingerprint is >> SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. >> Are you sure you want to continue connecting (yes/no/[fingerprint])? yes >> Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known >> hosts. >> core at 10.14.20.181: Permission denied >> (publickey,gssapi-keyex,gssapi-with-mic). >> >> On Mon, 18 Oct 2021 at 14:02, Ammad Syed wrote: >> >>> Hi, >>> >>> Can you check if the master server is deployed as a nova instance ? if >>> yes, then login to the instance and check cloud-init and heat agent logs to >>> see the errors. >>> >>> Ammad >>> >>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe >>> wrote: >>> >>>> Hello All, >>>> >>>> I am trying to create a kubernetes cluster using magnum. Image: >>>> fedora-coreos. >>>> >>>> >>>> The stack gets stucked in CREATE_IN_PROGRESS. See the output below. 
>>>> openstack coe cluster list >>>> >>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>> | uuid | name | keypair | >>>> node_count | master_count | status | health_status | >>>> >>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | >>>> 2 | 1 | CREATE_IN_PROGRESS | None | >>>> >>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>> >>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters >>>> >>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | Field | Value >>>> >>>> >>>> >>>> >>>> >>>> >>>> | >>>> >>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> | attributes | {'refs_map': None, 'removed_rsrc_list': [], >>>> 'attributes': None, 'refs': None} >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | creation_time | 2021-10-18T06:44:02Z >>>> >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | description | >>>> >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | links | [{'href': ' >>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', >>>> 'rel': 'self'}, {'href': ' >>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', >>>> 'rel': 'stack'}, {'href': ' >>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', >>>> 'rel': 'nested'}] | >>>> | logical_resource_id | kube_masters >>>> >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 >>>> >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | required_by | ['kube_cluster_deploy', >>>> 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | resource_name | kube_masters >>>> >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | resource_status | CREATE_IN_PROGRESS >>>> >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | resource_status_reason | state changed >>>> >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | resource_type | OS::Heat::ResourceGroup >>>> >>>> >>>> >>>> >>>> >>>> >>>> | >>>> | updated_time | 2021-10-18T06:44:02Z >>>> >>>> >>>> >>>> >>>> >>>> >>>> | 
>>>> >>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>> >>>> Vikarna >>>> >>> >>> >>> -- >>> Regards, >>> >>> >>> Syed Ammad Ali >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Tue Oct 19 09:22:37 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 19 Oct 2021 14:22:37 +0500 Subject: Openstack magnum In-Reply-To: References: Message-ID: Hi, Which fcos image you are using ? It looks like you are using fcos 34. Which is currently not supported. Use fcos 33. On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe wrote: > Hi All, > > I was able to login to the instance. I see that kubelet service is in > activating state. When I checked the journalctl, found the below. > > > > > > > *Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: > Started Kubelet via Hyperkube (System Container).Oct 19 05:18:34 > kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs > /sys/fs/cgroup/systemd: no such file or directoryOct 19 05:18:34 > kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main > process exited, code=exited, status=125/n/aOct 19 05:18:34 > kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: > Failed with result 'exit-code'.Oct 19 05:18:44 > kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: > Scheduled restart job, restart counter is at 18.Oct 19 05:18:44 > kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via > Hyperkube (System Container).* > > Executed the below command to fix this issue. > *mkdir -p /sys/fs/cgroup/systemd* > > > Now I am getiing the below error. Has anybody seen this issue. > > > > *failed to get the kubelet's cgroup: mountpoint for cpu not found. > Kubelet system container metrics may be missing.failed to get the container > runtime's cgroup: failed to get container name for docker process: > mountpoint for cpu not found. failed to run Kubelet: mountpoint for not > found* > > On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe > wrote: > >> >>> Hi Ammad, >>> >>> Thanks for responding. >>> >>> Yes the instance is getting created, but i am unable to login though i >>> have generated the keypair. There is no default password for this image to >>> login via console. >>> >>> openstack server list >>> >>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>> | ID | Name >>> | Status | Networks | Image >>> | Flavor | >>> >>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>> | cf955a75-8cd2-4f91-a01f-677159b57cb2 | >>> k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, >>> 10.14.20.181 | fedora-coreos-latest | m1.large | >>> >>> >>> ssh -i id_rsa core at 10.14.20.181 >>> The authenticity of host '10.14.20.181 (10.14.20.181)' can't be >>> established. 
>>> ECDSA key fingerprint is >>> SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. >>> Are you sure you want to continue connecting (yes/no/[fingerprint])? yes >>> Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known >>> hosts. >>> core at 10.14.20.181: Permission denied >>> (publickey,gssapi-keyex,gssapi-with-mic). >>> >>> On Mon, 18 Oct 2021 at 14:02, Ammad Syed wrote: >>> >>>> Hi, >>>> >>>> Can you check if the master server is deployed as a nova instance ? if >>>> yes, then login to the instance and check cloud-init and heat agent logs to >>>> see the errors. >>>> >>>> Ammad >>>> >>>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe >>>> wrote: >>>> >>>>> Hello All, >>>>> >>>>> I am trying to create a kubernetes cluster using magnum. Image: >>>>> fedora-coreos. >>>>> >>>>> >>>>> The stack gets stucked in CREATE_IN_PROGRESS. See the output below. >>>>> openstack coe cluster list >>>>> >>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>> | uuid | name | keypair | >>>>> node_count | master_count | status | health_status | >>>>> >>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | >>>>> 2 | 1 | CREATE_IN_PROGRESS | None | >>>>> >>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>> >>>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters >>>>> >>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | Field | Value >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> >>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> | attributes | {'refs_map': None, 'removed_rsrc_list': [], >>>>> 'attributes': None, 'refs': None} >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | creation_time | 2021-10-18T06:44:02Z >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | description | >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | links | [{'href': ' >>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', >>>>> 'rel': 'self'}, {'href': ' >>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', >>>>> 'rel': 'stack'}, {'href': 
' >>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', >>>>> 'rel': 'nested'}] | >>>>> | logical_resource_id | kube_masters >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | required_by | ['kube_cluster_deploy', >>>>> 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | resource_name | kube_masters >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | resource_status | CREATE_IN_PROGRESS >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | resource_status_reason | state changed >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | resource_type | OS::Heat::ResourceGroup >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> | updated_time | 2021-10-18T06:44:02Z >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> | >>>>> >>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>> >>>>> Vikarna >>>>> >>>> >>>> >>>> -- >>>> Regards, >>>> >>>> >>>> Syed Ammad Ali >>>> >>> -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From vikarnatathe at gmail.com Tue Oct 19 09:30:03 2021 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Tue, 19 Oct 2021 15:00:03 +0530 Subject: Openstack magnum In-Reply-To: References: Message-ID: Hi Ammad, Yes, fcos34. Let me try with fcos33. Thanks On Tue, 19 Oct 2021 at 14:52, Ammad Syed wrote: > Hi, > > Which fcos image you are using ? It looks like you are using fcos 34. > Which is currently not supported. Use fcos 33. > > On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe > wrote: > >> Hi All, >> >> I was able to login to the instance. I see that kubelet service is in >> activating state. When I checked the journalctl, found the below. >> >> >> >> >> >> >> *Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: >> Started Kubelet via Hyperkube (System Container).Oct 19 05:18:34 >> kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs >> /sys/fs/cgroup/systemd: no such file or directoryOct 19 05:18:34 >> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main >> process exited, code=exited, status=125/n/aOct 19 05:18:34 >> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >> Failed with result 'exit-code'.Oct 19 05:18:44 >> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >> Scheduled restart job, restart counter is at 18.Oct 19 05:18:44 >> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via >> Hyperkube (System Container).* >> >> Executed the below command to fix this issue. >> *mkdir -p /sys/fs/cgroup/systemd* >> >> >> Now I am getiing the below error. Has anybody seen this issue. >> >> >> >> *failed to get the kubelet's cgroup: mountpoint for cpu not found. 
>> Kubelet system container metrics may be missing.failed to get the container >> runtime's cgroup: failed to get container name for docker process: >> mountpoint for cpu not found. failed to run Kubelet: mountpoint for not >> found* >> >> On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe >> wrote: >> >>> >>>> Hi Ammad, >>>> >>>> Thanks for responding. >>>> >>>> Yes the instance is getting created, but i am unable to login though i >>>> have generated the keypair. There is no default password for this image to >>>> login via console. >>>> >>>> openstack server list >>>> >>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>> | ID | Name >>>> | Status | Networks | Image >>>> | Flavor | >>>> >>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>> | cf955a75-8cd2-4f91-a01f-677159b57cb2 | >>>> k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, >>>> 10.14.20.181 | fedora-coreos-latest | m1.large | >>>> >>>> >>>> ssh -i id_rsa core at 10.14.20.181 >>>> The authenticity of host '10.14.20.181 (10.14.20.181)' can't be >>>> established. >>>> ECDSA key fingerprint is >>>> SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. >>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? yes >>>> Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known >>>> hosts. >>>> core at 10.14.20.181: Permission denied >>>> (publickey,gssapi-keyex,gssapi-with-mic). >>>> >>>> On Mon, 18 Oct 2021 at 14:02, Ammad Syed wrote: >>>> >>>>> Hi, >>>>> >>>>> Can you check if the master server is deployed as a nova instance ? if >>>>> yes, then login to the instance and check cloud-init and heat agent logs to >>>>> see the errors. >>>>> >>>>> Ammad >>>>> >>>>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe >>>>> wrote: >>>>> >>>>>> Hello All, >>>>>> >>>>>> I am trying to create a kubernetes cluster using magnum. Image: >>>>>> fedora-coreos. >>>>>> >>>>>> >>>>>> The stack gets stucked in CREATE_IN_PROGRESS. See the output below. 
>>>>>> openstack coe cluster list >>>>>> >>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>> | uuid | name | keypair | >>>>>> node_count | master_count | status | health_status | >>>>>> >>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | >>>>>> 2 | 1 | CREATE_IN_PROGRESS | None | >>>>>> >>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>> >>>>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters >>>>>> >>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | Field | Value >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> >>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> | attributes | {'refs_map': None, 'removed_rsrc_list': >>>>>> [], 'attributes': None, 'refs': None} >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | creation_time | 2021-10-18T06:44:02Z >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | description | >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | links | [{'href': ' >>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', >>>>>> 'rel': 'self'}, {'href': ' >>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', >>>>>> 'rel': 'stack'}, {'href': ' >>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', >>>>>> 'rel': 'nested'}] | >>>>>> | logical_resource_id | kube_masters >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | required_by | ['kube_cluster_deploy', >>>>>> 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | resource_name | kube_masters >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | resource_status | CREATE_IN_PROGRESS >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | resource_status_reason | state changed >>>>>> 
>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | resource_type | OS::Heat::ResourceGroup >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> | updated_time | 2021-10-18T06:44:02Z >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> | >>>>>> >>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>> >>>>>> Vikarna >>>>>> >>>>> >>>>> >>>>> -- >>>>> Regards, >>>>> >>>>> >>>>> Syed Ammad Ali >>>>> >>>> -- > Regards, > > > Syed Ammad Ali > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Tue Oct 19 10:08:44 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 19 Oct 2021 13:08:44 +0300 Subject: [openstack-ansible][osa][ptg] Yoga PTG session Message-ID: <404691634637142@mail.yandex.ru> An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Oct 19 11:24:29 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 19 Oct 2021 13:24:29 +0200 Subject: [openstack][manila] queens netapp share migration In-Reply-To: References: Message-ID: I am sorry . In my previous email there are a lot of errors. Anycase on stein the migration of share does not return errors but in this test I am using another fas cluster which is not so busy. However looking on netapp gui the migrated share does not appear on destination svm:-( Ignazio Il Mar 19 Ott 2021, 09:39 Ignazio Cassano ha scritto: > Hello Felipe, I am ttryng on stein beacause net netapp attached to this > installation is not so busy. > I do not know if is because netapp is not so busy or if the openstack > version is stein and nont queens, but share migration seems to work but the > export location is not changed fron to to destination. > The svm source has an address on a vlan and destination on another vlan. > Why it does not change the export location ? > Ignazio > > Il giorno lun 18 ott 2021 alle ore 21:35 Felipe Rodrigues < > felipefuty01 at gmail.com> ha scritto: > >> Hi Ignazio, >> >> It seems like a bug, since NetApp driver does not support storage >> assisted migration across backends (SVMs).. >> >> We'll check it and open a bug to it. >> >> Just a note: there is a bug with the same error opened [1]. It may be the >> same as yours. Please, check there and mark as affecting you too. >> >> [1] https://bugs.launchpad.net/manila/+bug/1723513 >> >> Best regards, Felipe. >> >> >> On Mon, Oct 18, 2021 at 1:57 PM Ignazio Cassano >> wrote: >> >>> Hello all, >>> I have an installation of openstack queens and manila is using netapp >>> fas8040 storage with driver manila.share.drivers.netapp.common.NetAppDriver. >>> When I try share migration it fails. 
>>> >>> manila migration-start --preserve-metadata False --preserve-snapshots >>> False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 >>> c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140 >>> #aggr_fas04_MANILA_TO2_UNITY600_Mixed >>> >>> >>> In the share log file I read: >>> 2021-10-18 18:35:01.211 80999 ERROR manila.share.manager >>> NetAppException: Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in >>> Vserver svmp2-nfs-1138 is not part of any data motion operations. >>> >>> The svmp2-nfs-1138 is the share type where migration start from. >>> Both source and destination are on netapp. >>> Any help, please? >>> Ignazio >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Tue Oct 19 12:30:34 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Tue, 19 Oct 2021 14:30:34 +0200 Subject: [tripleo][release] tripleo-operator-ansible release job failure In-Reply-To: References: Message-ID: Hi TripleO team, With the latest release [1] the release job failed [2] for tripleo-operator-ansible, specifically the "ansible-galaxy" command failed with: "ERROR! Unexpected Exception, this is probably a bug: [Errno 13] Permission denied: '/tmp/collection_built/tripleo-operator-0.7.0.tar.gz'" [3]. Have you seen this kind of error before? Could you have a look at the error? Thanks in advance, El?d [1] https://review.opendev.org/c/openstack/releases/+/813852 [2] see forwarded mail below [3] https://zuul.opendev.org/t/openstack/build/2dd817ac52824bc886c1839680cb44f1/console -------- Forwarded Message -------- Subject: [Release-job-failures] Tag of openstack/tripleo-operator-ansible for ref refs/tags/0.7.0 failed Date: Mon, 18 Oct 2021 18:09:47 +0000 From: zuul at openstack.org Reply-To: openstack-discuss at lists.openstack.org To: release-job-failures at lists.openstack.org Build failed. - publish-openstack-releasenotes-python3 https://zuul.opendev.org/t/openstack/build/b34e7dfd6d4348aaaf3d7c25415cc801 : SUCCESS in 5m 00s - tripleo-operator-ansible-release https://zuul.opendev.org/t/openstack/build/2dd817ac52824bc886c1839680cb44f1 : FAILURE in 3m 18s _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Oct 19 13:01:06 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 19 Oct 2021 15:01:06 +0200 Subject: [nova] Yoga PTG schedule In-Reply-To: References: Message-ID: On Mon, Oct 18, 2021 at 4:59 PM Sylvain Bauza wrote: > Hello folks, > > Not sure people know about our etherpad for the Yoga PTG. This is this one > : > https://etherpad.opendev.org/p/nova-yoga-ptg > > You can see the schedule above but, here is there : > PTG Schedule > https://www.openstack.org/ptg/#tab_schedule > https://ptg.opendev.org/ > Fancy rendered PDF with hyperlinks for the schedule: > https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf > > Connection details > https://meet.jit.si/vPTG-Newton > > > Some confusion occured to me. Looks like the Newton room is actually a Zoom meeting. Please follow the link given in the PTG schedule, which is https://www.openstack.org/ptg/rooms/newton Thanks, -Sylvain > - *Monday*: project support team discussions, e.g. 
SIGs, QA, Infra, > Release mgmt, Oslo > > > - *Tuesday* *13:00 UTC - 17:00 UTC* - Nova (Placement) sessions > > > - 13:00 - 14: 00 UTC Cyborg - Nova cross project mini-session > > > - 14:00 - 14:30 UTC Oslo - Nova cross project mini-session > > > - 15:00 - 16:00 UTC RBAC discussions with popup team > > > - *Wednesday 14:00 UTC - 17:00 UTC*: Nova (Placement) sessions > > > - 14:00 - 15:00 UTC Neutron - Nova cross project mini-session > > > - 15:00 - 15:30 UTC Interop discussion with Arkady > > > - *Thursday 14:00 UTC - 17:00 UTC* - Nova (Placement) sessions > > > - 16:00 - 17:00 UTC Cinder - Nova cross project mini-session > > > - *Friday 14:00 UTC - 17:00 UTC* - Nova (Placement) sessions > > > See you then tomorrow at 1pm UTC ! > -Sylvain > -------------- next part -------------- An HTML attachment was scrubbed... URL: From felipefuty01 at gmail.com Tue Oct 19 13:17:04 2021 From: felipefuty01 at gmail.com (Felipe Rodrigues) Date: Tue, 19 Oct 2021 10:17:04 -0300 Subject: [openstack][manila] queens netapp share migration In-Reply-To: References: Message-ID: Please, report your findings in the launchpad bug tracker. It seems like a bug! In fact, we do not support migration across vservers/clusters using the storage assisted mechanism. Prefer the host assisted one. Best regards, Felipe. On Tue, Oct 19, 2021 at 8:24 AM Ignazio Cassano wrote: > I am sorry . In my previous email there are a lot of errors. > Anycase on stein the migration of share does not return errors but in this > test I am using another fas cluster which is not so busy. > > However looking on netapp gui the migrated share does not appear on > destination svm:-( > Ignazio > > Il Mar 19 Ott 2021, 09:39 Ignazio Cassano ha > scritto: > >> Hello Felipe, I am ttryng on stein beacause net netapp attached to this >> installation is not so busy. >> I do not know if is because netapp is not so busy or if the openstack >> version is stein and nont queens, but share migration seems to work but the >> export location is not changed fron to to destination. >> The svm source has an address on a vlan and destination on another vlan. >> Why it does not change the export location ? >> Ignazio >> >> Il giorno lun 18 ott 2021 alle ore 21:35 Felipe Rodrigues < >> felipefuty01 at gmail.com> ha scritto: >> >>> Hi Ignazio, >>> >>> It seems like a bug, since NetApp driver does not support storage >>> assisted migration across backends (SVMs).. >>> >>> We'll check it and open a bug to it. >>> >>> Just a note: there is a bug with the same error opened [1]. It may be >>> the same as yours. Please, check there and mark as affecting you too. >>> >>> [1] https://bugs.launchpad.net/manila/+bug/1723513 >>> >>> Best regards, Felipe. >>> >>> >>> On Mon, Oct 18, 2021 at 1:57 PM Ignazio Cassano < >>> ignaziocassano at gmail.com> wrote: >>> >>>> Hello all, >>>> I have an installation of openstack queens and manila is using netapp >>>> fas8040 storage with driver manila.share.drivers.netapp.common.NetAppDriver. >>>> When I try share migration it fails. 
>>>> >>>> manila migration-start --preserve-metadata False --preserve-snapshots >>>> False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 >>>> c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140 >>>> #aggr_fas04_MANILA_TO2_UNITY600_Mixed >>>> >>>> >>>> In the share log file I read: >>>> 2021-10-18 18:35:01.211 80999 ERROR manila.share.manager >>>> NetAppException: Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in >>>> Vserver svmp2-nfs-1138 is not part of any data motion operations. >>>> >>>> The svmp2-nfs-1138 is the share type where migration start from. >>>> Both source and destination are on netapp. >>>> Any help, please? >>>> Ignazio >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From vikarnatathe at gmail.com Tue Oct 19 13:23:20 2021 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Tue, 19 Oct 2021 18:53:20 +0530 Subject: Openstack magnum In-Reply-To: References: Message-ID: Hi Ammad, Thanks!!! It worked. On Tue, 19 Oct 2021 at 15:00, Vikarna Tathe wrote: > Hi Ammad, > > Yes, fcos34. Let me try with fcos33. Thanks > > On Tue, 19 Oct 2021 at 14:52, Ammad Syed wrote: > >> Hi, >> >> Which fcos image you are using ? It looks like you are using fcos 34. >> Which is currently not supported. Use fcos 33. >> >> On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe >> wrote: >> >>> Hi All, >>> >>> I was able to login to the instance. I see that kubelet service is in >>> activating state. When I checked the journalctl, found the below. >>> >>> >>> >>> >>> >>> >>> *Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: >>> Started Kubelet via Hyperkube (System Container).Oct 19 05:18:34 >>> kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs >>> /sys/fs/cgroup/systemd: no such file or directoryOct 19 05:18:34 >>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main >>> process exited, code=exited, status=125/n/aOct 19 05:18:34 >>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >>> Failed with result 'exit-code'.Oct 19 05:18:44 >>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >>> Scheduled restart job, restart counter is at 18.Oct 19 05:18:44 >>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via >>> Hyperkube (System Container).* >>> >>> Executed the below command to fix this issue. >>> *mkdir -p /sys/fs/cgroup/systemd* >>> >>> >>> Now I am getiing the below error. Has anybody seen this issue. >>> >>> >>> >>> *failed to get the kubelet's cgroup: mountpoint for cpu not found. >>> Kubelet system container metrics may be missing.failed to get the container >>> runtime's cgroup: failed to get container name for docker process: >>> mountpoint for cpu not found. failed to run Kubelet: mountpoint for not >>> found* >>> >>> On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe >>> wrote: >>> >>>> >>>>> Hi Ammad, >>>>> >>>>> Thanks for responding. >>>>> >>>>> Yes the instance is getting created, but i am unable to login though i >>>>> have generated the keypair. There is no default password for this image to >>>>> login via console. 
>>>>> >>>>> openstack server list >>>>> >>>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>>> | ID | Name >>>>> | Status | Networks | Image >>>>> | Flavor | >>>>> >>>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>>> | cf955a75-8cd2-4f91-a01f-677159b57cb2 | >>>>> k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, >>>>> 10.14.20.181 | fedora-coreos-latest | m1.large | >>>>> >>>>> >>>>> ssh -i id_rsa core at 10.14.20.181 >>>>> The authenticity of host '10.14.20.181 (10.14.20.181)' can't be >>>>> established. >>>>> ECDSA key fingerprint is >>>>> SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. >>>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? >>>>> yes >>>>> Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known >>>>> hosts. >>>>> core at 10.14.20.181: Permission denied >>>>> (publickey,gssapi-keyex,gssapi-with-mic). >>>>> >>>>> On Mon, 18 Oct 2021 at 14:02, Ammad Syed >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> Can you check if the master server is deployed as a nova instance ? >>>>>> if yes, then login to the instance and check cloud-init and heat agent logs >>>>>> to see the errors. >>>>>> >>>>>> Ammad >>>>>> >>>>>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe < >>>>>> vikarnatathe at gmail.com> wrote: >>>>>> >>>>>>> Hello All, >>>>>>> >>>>>>> I am trying to create a kubernetes cluster using magnum. Image: >>>>>>> fedora-coreos. >>>>>>> >>>>>>> >>>>>>> The stack gets stucked in CREATE_IN_PROGRESS. See the output below. >>>>>>> openstack coe cluster list >>>>>>> >>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>> | uuid | name | keypair | >>>>>>> node_count | master_count | status | health_status | >>>>>>> >>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | >>>>>>> 2 | 1 | CREATE_IN_PROGRESS | None | >>>>>>> >>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>> >>>>>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb >>>>>>> kube_masters >>>>>>> >>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | Field | Value >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> >>>>>>> 
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> | attributes | {'refs_map': None, 'removed_rsrc_list': >>>>>>> [], 'attributes': None, 'refs': None} >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | creation_time | 2021-10-18T06:44:02Z >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | description | >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | links | [{'href': ' >>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', >>>>>>> 'rel': 'self'}, {'href': ' >>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', >>>>>>> 'rel': 'stack'}, {'href': ' >>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', >>>>>>> 'rel': 'nested'}] | >>>>>>> | logical_resource_id | kube_masters >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | required_by | ['kube_cluster_deploy', >>>>>>> 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | resource_name | kube_masters >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | resource_status | CREATE_IN_PROGRESS >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | resource_status_reason | state changed >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | resource_type | OS::Heat::ResourceGroup >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> | updated_time | 2021-10-18T06:44:02Z >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> | >>>>>>> >>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>> >>>>>>> Vikarna >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Regards, >>>>>> >>>>>> >>>>>> Syed Ammad Ali >>>>>> >>>>> -- >> Regards, >> >> >> Syed Ammad Ali >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Oct 19 13:25:43 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 19 Oct 2021 15:25:43 +0200 Subject: [openstack][manila] queens netapp share migration In-Reply-To: References: Message-ID: Will do, thanks! 
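For the next attempt I will force the host-assisted path as you suggest. A rough sketch of the command I have in mind is below -- I am assuming the --force-host-assisted-migration flag is available in my client version and that the host-assisted flow requires the writable/nondisruptive/preserve options to be False, so please correct me if any of that is wrong (share ID and destination host are the same ones from my first test):

manila migration-start \
  --force-host-assisted-migration True \
  --preserve-metadata False --preserve-snapshots False \
  --writable False --nondisruptive False \
  --new_share_type svmp2-nfs-1140 \
  c765143e-d308-4e9d-8a3f-5cb4692be70b \
  10.138.176.16@svmp2-nfs-1140#aggr_fas04_MANILA_TO2_UNITY600_Mixed
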
Il giorno mar 19 ott 2021 alle ore 15:17 Felipe Rodrigues < felipefuty01 at gmail.com> ha scritto: > Please, report your findings in the launchpad bug tracker. It seems like a > bug! > > In fact, we do not support migration across vservers/clusters using the > storage assisted mechanism. Prefer the host assisted one. > > Best regards, Felipe. > > On Tue, Oct 19, 2021 at 8:24 AM Ignazio Cassano > wrote: > >> I am sorry . In my previous email there are a lot of errors. >> Anycase on stein the migration of share does not return errors but in >> this test I am using another fas cluster which is not so busy. >> >> However looking on netapp gui the migrated share does not appear on >> destination svm:-( >> Ignazio >> >> Il Mar 19 Ott 2021, 09:39 Ignazio Cassano ha >> scritto: >> >>> Hello Felipe, I am ttryng on stein beacause net netapp attached to this >>> installation is not so busy. >>> I do not know if is because netapp is not so busy or if the openstack >>> version is stein and nont queens, but share migration seems to work but the >>> export location is not changed fron to to destination. >>> The svm source has an address on a vlan and destination on another vlan. >>> Why it does not change the export location ? >>> Ignazio >>> >>> Il giorno lun 18 ott 2021 alle ore 21:35 Felipe Rodrigues < >>> felipefuty01 at gmail.com> ha scritto: >>> >>>> Hi Ignazio, >>>> >>>> It seems like a bug, since NetApp driver does not support storage >>>> assisted migration across backends (SVMs).. >>>> >>>> We'll check it and open a bug to it. >>>> >>>> Just a note: there is a bug with the same error opened [1]. It may be >>>> the same as yours. Please, check there and mark as affecting you too. >>>> >>>> [1] https://bugs.launchpad.net/manila/+bug/1723513 >>>> >>>> Best regards, Felipe. >>>> >>>> >>>> On Mon, Oct 18, 2021 at 1:57 PM Ignazio Cassano < >>>> ignaziocassano at gmail.com> wrote: >>>> >>>>> Hello all, >>>>> I have an installation of openstack queens and manila is using netapp >>>>> fas8040 storage with driver manila.share.drivers.netapp.common.NetAppDriver. >>>>> When I try share migration it fails. >>>>> >>>>> manila migration-start --preserve-metadata False --preserve-snapshots >>>>> False --writable True --nondisruptive True --new_share_type svmp2-nfs-1140 >>>>> c765143e-d308-4e9d-8a3f-5cb4692be70b 10.138.176.16 at svmp2-nfs-1140 >>>>> #aggr_fas04_MANILA_TO2_UNITY600_Mixed >>>>> >>>>> >>>>> In the share log file I read: >>>>> 2021-10-18 18:35:01.211 80999 ERROR manila.share.manager >>>>> NetAppException: Volume share_be9b819d_feab_431b_9ade_b257cc08c9f6 in >>>>> Vserver svmp2-nfs-1138 is not part of any data motion operations. >>>>> >>>>> The svmp2-nfs-1138 is the share type where migration start from. >>>>> Both source and destination are on netapp. >>>>> Any help, please? >>>>> Ignazio >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Oct 19 13:37:46 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Oct 2021 13:37:46 +0000 Subject: [nova] Yoga PTG schedule In-Reply-To: References: Message-ID: <20211019133454.4lfa7fkfb6pixouf@yuggoth.org> On 2021-10-19 15:01:06 +0200 (+0200), Sylvain Bauza wrote: [...] > Some confusion occured to me. Looks like the Newton room is > actually a Zoom meeting. Please follow the link given in the PTG > schedule [...] 
For future reference, you can ask the ptgbot to switch the videoconference URL for your track like `#nova url https://our.new/location` if desired: https://opendev.org/openstack/ptgbot/src/branch/master/README.rst#url -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sshnaidm at redhat.com Tue Oct 19 13:44:24 2021 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Tue, 19 Oct 2021 16:44:24 +0300 Subject: [tripleo][release] tripleo-operator-ansible release job failure In-Reply-To: References: Message-ID: Hi, yeah, we are looking into this, it's a new job and it's the first run. Thanks for heads up! Thanks On Tue, Oct 19, 2021 at 3:37 PM El?d Ill?s wrote: > Hi TripleO team, > > With the latest release [1] the release job failed [2] for > tripleo-operator-ansible, specifically the "ansible-galaxy" command failed > with: "ERROR! Unexpected Exception, this is probably a bug: [Errno 13] > Permission denied: '/tmp/collection_built/tripleo-operator-0.7.0.tar.gz'" > [3]. > > Have you seen this kind of error before? Could you have a look at the > error? > > Thanks in advance, > > El?d > [1] https://review.opendev.org/c/openstack/releases/+/813852 > [2] see forwarded mail below > [3] > https://zuul.opendev.org/t/openstack/build/2dd817ac52824bc886c1839680cb44f1/console > > > > -------- Forwarded Message -------- > Subject: [Release-job-failures] Tag of openstack/tripleo-operator-ansible > for ref refs/tags/0.7.0 failed > Date: Mon, 18 Oct 2021 18:09:47 +0000 > From: zuul at openstack.org > Reply-To: openstack-discuss at lists.openstack.org > To: release-job-failures at lists.openstack.org > > Build failed. > > - publish-openstack-releasenotes-python3 > https://zuul.opendev.org/t/openstack/build/b34e7dfd6d4348aaaf3d7c25415cc801 > : SUCCESS in 5m 00s > - tripleo-operator-ansible-release > https://zuul.opendev.org/t/openstack/build/2dd817ac52824bc886c1839680cb44f1 > : FAILURE in 3m 18s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonykarera at gmail.com Tue Oct 19 14:10:44 2021 From: tonykarera at gmail.com (Karera Tony) Date: Tue, 19 Oct 2021 16:10:44 +0200 Subject: openstack-discuss Digest, Vol 36, Issue 87 In-Reply-To: References: Message-ID: Even 32 works fine On Tue, 19 Oct 2021, 15:25 , wrote: > Send openstack-discuss mailing list submissions to > openstack-discuss at lists.openstack.org > > To subscribe or unsubscribe via the World Wide Web, visit > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > or, via email, send a message with subject or body 'help' to > openstack-discuss-request at lists.openstack.org > > You can reach the person managing the list at > openstack-discuss-owner at lists.openstack.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of openstack-discuss digest..." > > > Today's Topics: > > 1. 
Re: Openstack magnum (Vikarna Tathe) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 19 Oct 2021 18:53:20 +0530 > From: Vikarna Tathe > To: Ammad Syed > Cc: openstack-discuss > Subject: Re: Openstack magnum > Message-ID: > < > CAE2S1+473YbwicxYGKq0VoNu4Ozjt-+khGsdy4Za6R8po1M+YA at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hi Ammad, > > Thanks!!! It worked. > > On Tue, 19 Oct 2021 at 15:00, Vikarna Tathe > wrote: > > > Hi Ammad, > > > > Yes, fcos34. Let me try with fcos33. Thanks > > > > On Tue, 19 Oct 2021 at 14:52, Ammad Syed wrote: > > > >> Hi, > >> > >> Which fcos image you are using ? It looks like you are using fcos 34. > >> Which is currently not supported. Use fcos 33. > >> > >> On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe > >> wrote: > >> > >>> Hi All, > >>> > >>> I was able to login to the instance. I see that kubelet service is in > >>> activating state. When I checked the journalctl, found the below. > >>> > >>> > >>> > >>> > >>> > >>> > >>> *Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: > >>> Started Kubelet via Hyperkube (System Container).Oct 19 05:18:34 > >>> kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs > >>> /sys/fs/cgroup/systemd: no such file or directoryOct 19 05:18:34 > >>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: > Main > >>> process exited, code=exited, status=125/n/aOct 19 05:18:34 > >>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: > >>> Failed with result 'exit-code'.Oct 19 05:18:44 > >>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: > >>> Scheduled restart job, restart counter is at 18.Oct 19 05:18:44 > >>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet > via > >>> Hyperkube (System Container).* > >>> > >>> Executed the below command to fix this issue. > >>> *mkdir -p /sys/fs/cgroup/systemd* > >>> > >>> > >>> Now I am getiing the below error. Has anybody seen this issue. > >>> > >>> > >>> > >>> *failed to get the kubelet's cgroup: mountpoint for cpu not found. > >>> Kubelet system container metrics may be missing.failed to get the > container > >>> runtime's cgroup: failed to get container name for docker process: > >>> mountpoint for cpu not found. failed to run Kubelet: mountpoint for > not > >>> found* > >>> > >>> On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe > >>> wrote: > >>> > >>>> > >>>>> Hi Ammad, > >>>>> > >>>>> Thanks for responding. > >>>>> > >>>>> Yes the instance is getting created, but i am unable to login though > i > >>>>> have generated the keypair. There is no default password for this > image to > >>>>> login via console. > >>>>> > >>>>> openstack server list > >>>>> > >>>>> > +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ > >>>>> | ID | Name > >>>>> | Status | Networks | Image > >>>>> | Flavor | > >>>>> > >>>>> > +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ > >>>>> | cf955a75-8cd2-4f91-a01f-677159b57cb2 | > >>>>> k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | > private1=10.100.0.39, > >>>>> 10.14.20.181 | fedora-coreos-latest | m1.large | > >>>>> > >>>>> > >>>>> ssh -i id_rsa core at 10.14.20.181 > >>>>> The authenticity of host '10.14.20.181 (10.14.20.181)' can't be > >>>>> established. 
> >>>>> ECDSA key fingerprint is > >>>>> SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. > >>>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? > >>>>> yes > >>>>> Warning: Permanently added '10.14.20.181' (ECDSA) to the list of > known > >>>>> hosts. > >>>>> core at 10.14.20.181: Permission denied > >>>>> (publickey,gssapi-keyex,gssapi-with-mic). > >>>>> > >>>>> On Mon, 18 Oct 2021 at 14:02, Ammad Syed > >>>>> wrote: > >>>>> > >>>>>> Hi, > >>>>>> > >>>>>> Can you check if the master server is deployed as a nova instance ? > >>>>>> if yes, then login to the instance and check cloud-init and heat > agent logs > >>>>>> to see the errors. > >>>>>> > >>>>>> Ammad > >>>>>> > >>>>>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe < > >>>>>> vikarnatathe at gmail.com> wrote: > >>>>>> > >>>>>>> Hello All, > >>>>>>> > >>>>>>> I am trying to create a kubernetes cluster using magnum. Image: > >>>>>>> fedora-coreos. > >>>>>>> > >>>>>>> > >>>>>>> The stack gets stucked in CREATE_IN_PROGRESS. See the output below. > >>>>>>> openstack coe cluster list > >>>>>>> > >>>>>>> > +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ > >>>>>>> | uuid | name | keypair | > >>>>>>> node_count | master_count | status | health_status | > >>>>>>> > >>>>>>> > +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ > >>>>>>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | > >>>>>>> 2 | 1 | CREATE_IN_PROGRESS | None | > >>>>>>> > >>>>>>> > +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ > >>>>>>> > >>>>>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb > >>>>>>> kube_masters > >>>>>>> > >>>>>>> > +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>>> | Field | Value > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> > >>>>>>> > +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>>> | attributes | {'refs_map': None, 'removed_rsrc_list': > >>>>>>> [], 'attributes': None, 'refs': None} > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | creation_time | 2021-10-18T06:44:02Z > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | description | > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | links | 
[{'href': ' > >>>>>>> > http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters > ', > >>>>>>> 'rel': 'self'}, {'href': ' > >>>>>>> > http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17 > ', > >>>>>>> 'rel': 'stack'}, {'href': ' > >>>>>>> > http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028 > ', > >>>>>>> 'rel': 'nested'}] | > >>>>>>> | logical_resource_id | kube_masters > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | required_by | ['kube_cluster_deploy', > >>>>>>> 'etcd_address_lb_switch', 'api_address_lb_switch', > 'kube_cluster_config'] > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | resource_name | kube_masters > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | resource_status | CREATE_IN_PROGRESS > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | resource_status_reason | state changed > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | resource_type | OS::Heat::ResourceGroup > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> | updated_time | 2021-10-18T06:44:02Z > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> | > >>>>>>> > >>>>>>> > +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > >>>>>>> > >>>>>>> Vikarna > >>>>>>> > >>>>>> > >>>>>> > >>>>>> -- > >>>>>> Regards, > >>>>>> > >>>>>> > >>>>>> Syed Ammad Ali > >>>>>> > >>>>> -- > >> Regards, > >> > >> > >> Syed Ammad Ali > >> > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.openstack.org/pipermail/openstack-discuss/attachments/20211019/af084f05/attachment.htm > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > openstack-discuss mailing list > openstack-discuss at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > > > ------------------------------ > > End of openstack-discuss Digest, Vol 36, Issue 87 > ************************************************* > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rlandy at redhat.com Tue Oct 19 16:19:11 2021 From: rlandy at redhat.com (Ronelle Landy) Date: Tue, 19 Oct 2021 12:19:11 -0400 Subject: [TripleO] Gate blocker - please hold rechecks - tripleo-ci-centos-8-scenario001-standalone In-Reply-To: References: Message-ID: On Mon, Oct 18, 2021 at 2:37 PM Ronelle Landy wrote: > Hello All, > > We have a gate blocker for tripleo at: > https://bugs.launchpad.net/tripleo/+bug/1947548 > > tripleo-ci-centos-8-scenario001-standalone > is > failing. > > We are testing some reverts. > > Please hold rechecks if you are rechecking for this failure. > We will update this list when the error is cleared. > > Thank you! > The gate blocker is resolved now and tripleo-ci-centos-8-scenario001-standalone is now working. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gthiemonge at redhat.com Tue Oct 19 17:11:22 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Tue, 19 Oct 2021 19:11:22 +0200 Subject: [Octavia] 2021-10-21 meeting cancelled Message-ID: Hi, This is the PTG week, we decided during our previous weekly meeting to cancel tomorrow's meeting. Thanks, Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From manchandavishal143 at gmail.com Tue Oct 19 17:30:56 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Tue, 19 Oct 2021 23:00:56 +0530 Subject: [horizon] No weekly meeting tomorrow due to meeting in PTG Message-ID: Hi all, As we are meeting in PTG, so there will be no horizon weekly meeting tomorrow. Thanks & Regards, Vishal Manchanda -------------- next part -------------- An HTML attachment was scrubbed... URL: From wu.wenxiang at 99cloud.net Tue Oct 19 01:01:04 2021 From: wu.wenxiang at 99cloud.net (=?UTF-8?B?5ZC05paH55u4?=) Date: Tue, 19 Oct 2021 09:01:04 +0800 Subject: [tc][horizon][skyline] Welcome to the Skyline PTG In-Reply-To: References: Message-ID: Sorry, it should be: https://www.youtube.com/watch?v=pFAJLwzxv0A Thanks Best Regards Wenxiang Wu From: Mike Carden Date: Tuesday, October 19, 2021 at 5:34 AM To: ??? Cc: openstack-discuss , ??? , ??? , ??? , ??? , ??? , ??WEI , ?? , ??? Subject: Re: [tc][horizon][skyline] Welcome to the Skyline PTG Hi. The video [2] https://www.youtube.com/watch?v=pFAJLwzxv0 is coming up on YouTube as 'Video Unavailable'. -- MC -------------- next part -------------- An HTML attachment was scrubbed... URL: From bxzhu_5355 at 163.com Tue Oct 19 02:11:47 2021 From: bxzhu_5355 at 163.com (Boxiang Zhu) Date: Tue, 19 Oct 2021 10:11:47 +0800 (CST) Subject: =?GBK?Q?=BB=D8=B8=B4:Re:_[tc][horizon][skyline?= =?GBK?Q?]_Welcome_to_the_Skyline_PTG?= In-Reply-To: References: Message-ID: Sorry, it should be: https://www.youtube.com/watch?v=pFAJLwzxv0A Thanks Best Regards Boxiang Zhu At 2021-10-19 05:34:23, "Mike Carden" wrote: Hi. The video [2] https://www.youtube.com/watch?v=pFAJLwzxv0 is coming up on YouTube as 'Video Unavailable'. -- MC -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Oct 19 21:23:22 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 19 Oct 2021 22:23:22 +0100 Subject: Trove guest agent and Rabbitmq Message-ID: Hi, I am trying to deploy Trove. I am using the Kolla-ansible and Openstack wallaby version. >From the documentation, the Trove guest agent, which runs inside the Tove instance communicates with the trove-taskmanager via rabbitmq, how is this done? 
The rabbitmq is running in the api network, the instance is running in the tunnel (tenant) network, and in my case, those networks are in different vlans, how should I configure this? Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Oct 19 21:28:27 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 19 Oct 2021 22:28:27 +0100 Subject: kolla-ansible wallaby manila ceph pacific Message-ID: Hi, Has anyone been successful in deploying Manila wallaby using kolla-ansible with ceph pacific as a backend? I have created the manila client in ceph pacific like this : *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* When I deploy, I get this error in manila's log file : Bad target type 'mon-mgr' Any ideas? Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From y.furukawa8 at gmail.com Tue Oct 19 21:39:58 2021 From: y.furukawa8 at gmail.com (F Yushiro) Date: Wed, 20 Oct 2021 06:39:58 +0900 Subject: [dev] Cannot login to Gerrit Message-ID: <09F02007-07C1-4ECC-8C3E-ABA42E7D8065@gmail.com> ?Hi, I tried to login ty Gerrit w/ OpenID but redirected the following page and saw "Not Found". https://review.opendev.org/SignInFailure,SIGN_IN,Contact+site+administrator Could you please help me to login? My login email is y.furukawa8 at gmail.com. I used to use ex-company address. When I left the company, I modified my email address to above gmail one. I wonder that might occur this situation. Best regards, -- Yushiro Furukawa -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Tue Oct 19 23:14:42 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 19 Oct 2021 16:14:42 -0700 Subject: kolla-ansible wallaby manila ceph pacific In-Reply-To: References: Message-ID: On Tue, Oct 19, 2021 at 2:35 PM wodel youchi wrote: > Hi, > Has anyone been successful in deploying Manila wallaby using kolla-ansible > with ceph pacific as a backend? > > I have created the manila client in ceph pacific like this : > > *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, allow > rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* > > When I deploy, I get this error in manila's log file : > Bad target type 'mon-mgr' > Any ideas? > Could you share the full log from the manila-share service? There's an open bug related to manila/cephfs deployment: https://bugs.launchpad.net/kolla-ansible/+bug/1935784 Proposed fix: https://review.opendev.org/c/openstack/kolla-ansible/+/802743 > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Oct 19 23:38:38 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 19 Oct 2021 16:38:38 -0700 Subject: [dev] Cannot login to Gerrit In-Reply-To: <09F02007-07C1-4ECC-8C3E-ABA42E7D8065@gmail.com> References: <09F02007-07C1-4ECC-8C3E-ABA42E7D8065@gmail.com> Message-ID: <65741387-1674-46df-b8e2-25fa56f6f39f@www.fastmail.com> On Tue, Oct 19, 2021, at 2:39 PM, F Yushiro wrote: > Hi, I tried to login ty Gerrit w/ OpenID but redirected the following > page and saw "Not Found". > > https://review.opendev.org/SignInFailure,SIGN_IN,Contact+site+administrator > > Could you please help me to login? My login email is > y.furukawa8 at gmail.com. I used to use ex-company address. When I left > the company, I modified my email address to above gmail one. 
I wonder > that might occur this situation. > The issue is there is an existing account that is associated with this gmail address. I assume this old account is also associated with your old company address. You are attempting to login using a new openid so Gerrit wants to create a new account for you but cannot due to this email address conflict between the new account and the old account. There are a couple of options available to us: * You can login using your old Ubuntu One openid and login to the existing account. Doing this will depend on your ability to login to that account (which may depend on your access to old email?). * A Gerrit admin can delete the email address that is in conflict from the old account allowing the new account to be created. Unfortunately, we have approximately 30 database consistency errors related to this problem from before Gerrit prevented you from creating these conflicting accounts. I've been working through these slowly (down from about 700 previously), but until the remainder are cleaned up we cannot easily add your new openid to the old account allowing you to continue using the old account that way. If you can reach out on OFTC IRC in #opendev we can double check things and help walk you through any of this if it helps. Clark From anlin.kong at gmail.com Tue Oct 19 23:47:52 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 20 Oct 2021 12:47:52 +1300 Subject: Trove guest agent and Rabbitmq In-Reply-To: References: Message-ID: Hi Wodel, There is a management network (Neutron network) configured for communication between controller services and guest agent, you need to config in the infra router (using the network vlan) layer to make sure they can talk. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove Project Lead (OpenStack) OpenStack Cloud Provider Project Lead (Kubernetes) On Wed, Oct 20, 2021 at 10:30 AM wodel youchi wrote: > Hi, > > I am trying to deploy Trove. I am using the Kolla-ansible and Openstack > wallaby version. > From the documentation, the Trove guest agent, which runs inside the Tove > instance communicates with the trove-taskmanager via rabbitmq, how is this > done? > > The rabbitmq is running in the api network, the instance is running in the > tunnel (tenant) network, and in my case, those networks are in different > vlans, how should I configure this? > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Wed Oct 20 01:34:33 2021 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Tue, 19 Oct 2021 21:34:33 -0400 Subject: [Sahara]Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021. In-Reply-To: References: Message-ID: Sorry I could not attend. I have read through the IRC logs. Thanks for hosting and for interesting discussion. On Sun, Oct 17, 2021 at 10:23 PM Juntingqiu Qiujunting (???) < qiujunting at inspur.com> wrote: > Hi all > > > > I'm very sorry: > > > > I missed the scheduled sahara PTG meeting time. We tentatively schedule > the Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021. > > Use IRC channel:#openstack-sahara to conduct PTG conferences. > > My topics are as follows: > > 1. Sahara supports the creation of cloud hosts by specifying system > volumes. > > 2. Sahara deploys a dedicated cluster through cloud host VM tools > (qemu-guest-agent). > > > > *???:* Juntingqiu Qiujunting (???) > *????:* 2021?9?24? 18:05 > *???:* 'jeremyfreudberg at gmail.com' ; Faling > Rui (???) 
; 'ltoscano at redhat.com' < > ltoscano at redhat.com> > *??:* 'openstack-discuss at lists.openstack.org' < > openstack-discuss at lists.openstack.org> > *??:* [Sahara]Currently about the development of the Sahara community > there are some points > > > > Hi all: > > > > Currently about the development of the Sahara community there are some > points as following: > > > > 1. About the schedule of the regular meeting of the Sahara project? What > is your suggestion? > > > > How about the regular meeting time every Wednesday afternoon 15:00 to > 16:30? > > > > 2. Regarding the Sahara project maintenance switch from StoryBoard to > launchpad. > > > > https://storyboard.openstack.org/ > > https://blueprints.launchpad.net/openstack/ > > The reasons are as follows: > > 1. OpenSatck core projects are maintained on launchpad, such as > nova, cinder, neutron, etc. > > 2. Most OpenStack contributors are used to working on launchpad. > > > > 3. Do you have any suggestions? > > > > If you think this is feasible, I will post this content in the Sahara > community later. Thank you for your help. > > > > Thank you Fossen. > > > > > > --------------------------------- > > Fossen Qiu *|* ??? > > > > > > CBRD * |* ?????????? > > > > > > *T:* 18249256272 > > > > > > *E:* qiujunting at inspur.com > > > > > > > > [image: signature_1653277958] > > ???? > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 3519 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed Oct 20 01:47:17 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 19 Oct 2021 20:47:17 -0500 Subject: [all] RBAC related discussion in Yoga PTG In-Reply-To: <17c76c2d8d8.b4647d6b920595.8260462059922238034@ghanshyammann.com> References: <17c76c2d8d8.b4647d6b920595.8260462059922238034@ghanshyammann.com> Message-ID: <17c9b619915.1142bb0ea1292620.7234159382611538149@ghanshyammann.com> Hello Everyone, During the various projects discussion, we found there are many issues/open questions for the new RBAC especially on system scope. I have kept ~1.5 hrs slot in TC session on Friday for discussing it and also checks on making this as goal in Yoga. It is on Friday, 13:30 - 15 UTC in Juno room, please join the sessions. Also add the point/open questions you have/will have from your project discussions in the below etherpad under @L124 (heading 'Evaluating System Scope' ) in https://etherpad.opendev.org/p/policy-popup-yoga-ptg -gmann ---- On Tue, 12 Oct 2021 18:07:33 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > As you might know, we are not so far from the Yoga PTG. I have created the below etherpad > to collect the RBAC related discussion happening in various project sessions. > > - https://etherpad.opendev.org/p/policy-popup-yoga-ptg > > We have not schedule any separate sessions for this instead thought of attending the > related discussion in project PTG itself. > > Please do the below two steps before PTG: > > 1. Add the common topics (for QA, Horizon etc) you would like to discuss/know. > > 2. Add any related rbac sessions you have planned in your project PTG. > - I have added a few of them but few need the exact schedule/time so that we can plan > to attend it. Please check and add the time for your project sessions. 
> > -gmann > > From qiujunting at inspur.com Wed Oct 20 01:54:52 2021 From: qiujunting at inspur.com (=?utf-8?B?SnVudGluZ3FpdSBRaXVqdW50aW5nICjpgrHlhpvlqbcp?=) Date: Wed, 20 Oct 2021 01:54:52 +0000 Subject: =?utf-8?B?562U5aSNOiBbU2FoYXJhXVNhaGFyYSBwcm9qZWN0IFBURyBtZWV0aW5nIGZy?= =?utf-8?Q?om_3:00_to_5:00_PM_on_October_19,_2021.?= In-Reply-To: References: Message-ID: OK Jermry Freudberg, Could you introduce your point of view about Sahara project? Jermry Freudberg and Tosky: The plan about which backends should be kept and updated and which ones should be dropped? We can make a plan about which plugins to add and which plugins to delete in Yoga. Could you introduce detailed about specific plugin information? In order to confirm which plugin add or delete. Thank you Fossen. ???: Jeremy Freudberg [mailto:jeremyfreudberg at gmail.com] ????: 2021?10?20? 9:35 ???: Juntingqiu Qiujunting (???) ??: Faling Rui (???) ; ltoscano at redhat.com; openstack-discuss at lists.openstack.org ??: Re: [Sahara]Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021. Sorry I could not attend. I have read through the IRC logs. Thanks for hosting and for interesting discussion. On Sun, Oct 17, 2021 at 10:23 PM Juntingqiu Qiujunting (???) > wrote: Hi all I'm very sorry: I missed the scheduled sahara PTG meeting time. We tentatively schedule the Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021. Use IRC channel:#openstack-sahara to conduct PTG conferences. My topics are as follows: 1. Sahara supports the creation of cloud hosts by specifying system volumes. 2. Sahara deploys a dedicated cluster through cloud host VM tools (qemu-guest-agent). ???: Juntingqiu Qiujunting (???) ????: 2021?9?24? 18:05 ???: 'jeremyfreudberg at gmail.com' >; Faling Rui (???) >; 'ltoscano at redhat.com' > ??: 'openstack-discuss at lists.openstack.org' > ??: [Sahara]Currently about the development of the Sahara community there are some points Hi all: Currently about the development of the Sahara community there are some points as following: 1. About the schedule of the regular meeting of the Sahara project? What is your suggestion? How about the regular meeting time every Wednesday afternoon 15:00 to 16:30? 2. Regarding the Sahara project maintenance switch from StoryBoard to launchpad. https://storyboard.openstack.org/ https://blueprints.launchpad.net/openstack/ The reasons are as follows: 1. OpenSatck core projects are maintained on launchpad, such as nova, cinder, neutron, etc. 2. Most OpenStack contributors are used to working on launchpad. 3. Do you have any suggestions? If you think this is feasible, I will post this content in the Sahara community later. Thank you for your help. Thank you Fossen. --------------------------------- Fossen Qiu | ??? CBRD | ?????????? T: 18249256272 E: qiujunting at inspur.com ???? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 3519 bytes Desc: image001.jpg URL: From jeremyfreudberg at gmail.com Wed Oct 20 03:19:19 2021 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Tue, 19 Oct 2021 23:19:19 -0400 Subject: [Sahara]Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021. 
In-Reply-To: References: Message-ID: HDP: - requires paid subscription now - end of life is December 2021 - I recommended to delete this plugin CDH: - requires paid subscription now - end of life is March 2022 - replacement product is "CDP" which is not suitable for Sahara - I recommend to delete this plugin MapR: - it is already discontinued - I recommend to delete this plugin Storm: - our implementation is based on Pyleus, which is not maintained - I recommend to delete this plugin Spark: - our implementation uses some Hadoop resources provided by CDH, which requires paid subscription and will face end of life - This plugin may be deleted, or someone could work on using regular Hadoop resources instead from CDH Vanilla - I think this plugin is okay On Tue, Oct 19, 2021 at 9:54 PM Juntingqiu Qiujunting (???) < qiujunting at inspur.com> wrote: > OK Jermry Freudberg, Could you introduce your point of view about Sahara > project? > > > > Jermry Freudberg and Tosky: > > > > The plan about which backends should be kept and updated and which ones > should be dropped? We can make a plan about which plugins to add and which > plugins to delete in Yoga. > > Could you introduce detailed about specific plugin information? In order > to confirm which plugin add or delete. > > > > Thank you Fossen. > > > > *???:* Jeremy Freudberg [mailto:jeremyfreudberg at gmail.com] > *????:* 2021?10?20? 9:35 > *???:* Juntingqiu Qiujunting (???) > *??:* Faling Rui (???) ; ltoscano at redhat.com; > openstack-discuss at lists.openstack.org > *??:* Re: [Sahara]Sahara project PTG meeting from 3:00 to 5:00 PM on > October 19, 2021. > > > > Sorry I could not attend. > > I have read through the IRC logs. Thanks for hosting and for interesting > discussion. > > > > On Sun, Oct 17, 2021 at 10:23 PM Juntingqiu Qiujunting (???) < > qiujunting at inspur.com> wrote: > > Hi all > > > > I'm very sorry: > > > > I missed the scheduled sahara PTG meeting time. We tentatively schedule > the Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021. > > Use IRC channel:#openstack-sahara to conduct PTG conferences. > > My topics are as follows: > > 1. Sahara supports the creation of cloud hosts by specifying system > volumes. > > 2. Sahara deploys a dedicated cluster through cloud host VM tools > (qemu-guest-agent). > > > > *???:* Juntingqiu Qiujunting (???) > *????:* 2021?9?24? 18:05 > *???:* 'jeremyfreudberg at gmail.com' ; Faling > Rui (???) ; 'ltoscano at redhat.com' < > ltoscano at redhat.com> > *??:* 'openstack-discuss at lists.openstack.org' < > openstack-discuss at lists.openstack.org> > *??:* [Sahara]Currently about the development of the Sahara community > there are some points > > > > Hi all: > > > > Currently about the development of the Sahara community there are some > points as following: > > > > 1. About the schedule of the regular meeting of the Sahara project? What > is your suggestion? > > > > How about the regular meeting time every Wednesday afternoon 15:00 to > 16:30? > > > > 2. Regarding the Sahara project maintenance switch from StoryBoard to > launchpad. > > > > https://storyboard.openstack.org/ > > https://blueprints.launchpad.net/openstack/ > > The reasons are as follows: > > 1. OpenSatck core projects are maintained on launchpad, such as > nova, cinder, neutron, etc. > > 2. Most OpenStack contributors are used to working on launchpad. > > > > 3. Do you have any suggestions? > > > > If you think this is feasible, I will post this content in the Sahara > community later. Thank you for your help. 
> > > > Thank you Fossen. > > > > > > --------------------------------- > > Fossen Qiu *|* ??? > > > > > > CBRD * |* ?????????? > > > > > > *T:* 18249256272 > > > > > > *E:* qiujunting at inspur.com > > > > > > > > [image: signature_1653277958] > > ???? > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 3519 bytes Desc: not available URL: From ltoscano at redhat.com Wed Oct 20 07:44:44 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 20 Oct 2021 09:44:44 +0200 Subject: [Sahara]Sahara project PTG meeting from 3:00 to 5:00 PM on October 19, 2021. In-Reply-To: References: Message-ID: <4318681.tdWV9SEqCh@whitebase.usersys.redhat.com> On Wednesday, 20 October 2021 05:19:19 CEST Jeremy Freudberg wrote: > > Vanilla > - I think this plugin is okay Vanilla is a bit outdated, it may need to support the newer version of Hadoop, and be tested on newer versions of Ubuntu (and CentOS Stream). Moreover, that's one of the few backends whose image generation should be ported from sahara-image-elements to sahara-image-pack. -- Luigi From wodel.youchi at gmail.com Wed Oct 20 08:23:44 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Wed, 20 Oct 2021 09:23:44 +0100 Subject: Trove guest agent and Rabbitmq In-Reply-To: References: Message-ID: Hi Lingxian, and thanks. Could you be more specific? What is this management network? in globals.yml this what I have in terme of networking : kolla_internal_vip_address: "10.10.3.1" kolla_internal_fqdn: "dashinternal.domain.tld" kolla_external_vip_address: "x.x.x.x" kolla_external_fqdn: "dash.domain.tld" *network_interface: "bond0"kolla_external_vip_interface: "bond1"api_interface: "bond1.30"storage_interface: "bond1.10"tunnel_interface: "bond1.40"octavia_network_interface: "{{ api_interface }}"neutron_external_interface: "bond2"* neutron_plugin_agent: "openvswitch" What is this management network? Do I have to create it? If yes how? Regards. Le mer. 20 oct. 2021 ? 00:48, Lingxian Kong a ?crit : > Hi Wodel, > > There is a management network (Neutron network) configured for > communication between controller services and guest agent, you need to > config in the infra router (using the network vlan) layer to make sure they > can talk. > > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove Project Lead (OpenStack) > OpenStack Cloud Provider Project Lead (Kubernetes) > > > On Wed, Oct 20, 2021 at 10:30 AM wodel youchi > wrote: > >> Hi, >> >> I am trying to deploy Trove. I am using the Kolla-ansible and Openstack >> wallaby version. >> From the documentation, the Trove guest agent, which runs inside the Tove >> instance communicates with the trove-taskmanager via rabbitmq, how is this >> done? >> >> The rabbitmq is running in the api network, the instance is running in >> the tunnel (tenant) network, and in my case, those networks are in >> different vlans, how should I configure this? >> >> Regards. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Wed Oct 20 08:44:49 2021 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Wed, 20 Oct 2021 09:44:49 +0100 Subject: [charms] Yoga charms PTG session 1 today Message-ID: Hi All Just a quick reminder that the 1st (of 2) charms PTG session is today at 14.00 UTC in the Icehouse room. 
Out etherpad is: https://etherpad.opendev.org/p/charms-yoga-ptg Full schedule is: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/Uploads/PTG-Oct-18-22-2021-Schedule.pdf Feel free to come along and ask/discuss any of your OpenStack/charms questions! Thanks Alex -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From v at prokofev.me Wed Oct 20 09:43:37 2021 From: v at prokofev.me (Vladimir Prokofev) Date: Wed, 20 Oct 2021 12:43:37 +0300 Subject: [tempest] S3 API tests Message-ID: Hello. Are there any Swift S3 API tests in tempest? I didn't find any standard packages, nor as a plugin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandronic888 at gmail.com Wed Oct 20 12:24:26 2021 From: sandronic888 at gmail.com (S Andronic) Date: Wed, 20 Oct 2021 13:24:26 +0100 Subject: Openstack Glance image signature and validation for upload and boot controls? Message-ID: Hi, I have a question in regards to Openstack Glance and if I got it right this can be a place to ask, if I am wrong please kindly point me in the right direction. When you enable Image Signing and Certificate Validation in nova.conf: [glance] verify_glance_signatures = True enable_certificate_validation = True Will this stop users from uploading unsigned images or using unsigned images to spin up instances? Intuitively I feel that it will enforce checks only if the signature property exists, but what if it doesn't? Does it control in any way unsigned images? Does it stop users from uploading or using anything unsigned? Would an image without the signing properties just be rejected? If this feature doesn't stop the use of unsigned images as a security control what is the logic behind it then? Is this meant not to stop users from using unsigned images but such that people who do use signed images have verification for their code? So if the goal is to stop people from using random images and image signing and validation is not the answer what would be? Kind Regards, S. Andronic -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 20 13:03:40 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 20 Oct 2021 08:03:40 -0500 Subject: [tempest] S3 API tests In-Reply-To: References: Message-ID: <17c9dccdaff.d1bdf3031333248.3125733454723960826@ghanshyammann.com> ---- On Wed, 20 Oct 2021 04:43:37 -0500 Vladimir Prokofev wrote ---- > Hello. > Are there any Swift S3 API tests in tempest? I didn't find any standard packages, nor as a plugin. I do not think we have, all tests tempest have for swift are under https://github.com/openstack/tempest/tree/master/tempest/api/object_storage -gmann From sbauza at redhat.com Wed Oct 20 13:21:03 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 20 Oct 2021 15:21:03 +0200 Subject: [Nova][interop] In-Reply-To: References: <17b8827a008.1105017ee301036.2872782249905206264@ghanshyammann.com> Message-ID: On Fri, Aug 27, 2021 at 5:18 PM Arkady Kanevsky wrote: > Thanks Ghanshyam. > I will add time slot now, and detail of the meeting later when we have a > draft of next interop guidelines. > Thanks,Arkady > > Can we clarify which timeslot we're discussing ? 
In the agenda, the interop session was for today 3pm UTC but someone wrote that it could be 4pm UTC which makes things confusing as we already planned other topics for the end of the day. Since we accepted those other topics for the end of day, it would be difficult to move those as people already planned to attend them. Thanks, -Sylvain On Fri, Aug 27, 2021 at 11:08 AM Ghanshyam Mann > wrote: > >> HI Arkady, >> >> Please add it in https://etherpad.opendev.org/p/nova-yoga-ptg >> >> also it will be good to add detail about specific thing to discuss as >> nova API changes are with microversion so they do not actually >> effect current guidelines until interop start doing the microversion >> capability. >> >> -gmann >> >> >> ---- On Fri, 27 Aug 2021 09:49:15 -0500 Arkady Kanevsky < >> akanevsk at redhat.com> wrote ---- >> > Sean and Nova team,Interop WG would like 20-30 minutes on Yaga PTG >> agenda to discuss latest interop guidelines for Nova.If you can point us to >> etherpad then we can add it to agenda.Thanks,Arkady >> > >> > -- >> > Arkady Kanevsky, Ph.D.Phone: 972 707-6456Corporate Phone: 919 729-5744 >> ext. 8176456 >> > >> >> > > -- > Arkady Kanevsky, Ph.D. > Phone: 972 707-6456 > Corporate Phone: 919 729-5744 ext. 8176456 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akanevsk at redhat.com Wed Oct 20 13:33:36 2021 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Wed, 20 Oct 2021 08:33:36 -0500 Subject: [Nova][interop] In-Reply-To: References: <17b8827a008.1105017ee301036.2872782249905206264@ghanshyammann.com> Message-ID: Sylvain, It was me who put 16:00 UTC for interop. I cannot make it at 15:00 UTC. And I have on my calendar as per the original email agreement at 16:00. Thanks, Arkady On Wed, Oct 20, 2021 at 8:21 AM Sylvain Bauza wrote: > > > On Fri, Aug 27, 2021 at 5:18 PM Arkady Kanevsky > wrote: > >> Thanks Ghanshyam. >> I will add time slot now, and detail of the meeting later when we have a >> draft of next interop guidelines. >> Thanks,Arkady >> >> > Can we clarify which timeslot we're discussing ? > In the agenda, the interop session was for today 3pm UTC but someone wrote > that it could be 4pm UTC which makes things confusing as we already planned > other topics for the end of the day. > Since we accepted those other topics for the end of day, it would be > difficult to move those as people already planned to attend them. > > Thanks, > -Sylvain > > On Fri, Aug 27, 2021 at 11:08 AM Ghanshyam Mann >> wrote: >> >>> HI Arkady, >>> >>> Please add it in https://etherpad.opendev.org/p/nova-yoga-ptg >>> >>> also it will be good to add detail about specific thing to discuss as >>> nova API changes are with microversion so they do not actually >>> effect current guidelines until interop start doing the microversion >>> capability. >>> >>> -gmann >>> >>> >>> ---- On Fri, 27 Aug 2021 09:49:15 -0500 Arkady Kanevsky < >>> akanevsk at redhat.com> wrote ---- >>> > Sean and Nova team,Interop WG would like 20-30 minutes on Yaga PTG >>> agenda to discuss latest interop guidelines for Nova.If you can point us to >>> etherpad then we can add it to agenda.Thanks,Arkady >>> > >>> > -- >>> > Arkady Kanevsky, Ph.D.Phone: 972 707-6456Corporate Phone: 919 >>> 729-5744 ext. 8176456 >>> > >>> >>> >> >> -- >> Arkady Kanevsky, Ph.D. >> Phone: 972 707-6456 >> Corporate Phone: 919 729-5744 ext. 8176456 >> > -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 
8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 20 13:39:43 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 20 Oct 2021 08:39:43 -0500 Subject: [all][tc] Yoga TC-PTG Planning In-Reply-To: <17ace90220e.118d3fd60389088.3657538459751965041@ghanshyammann.com> References: <17aa100bb32.12a69c027966060.679414729277509844@ghanshyammann.com> <17ace90220e.118d3fd60389088.3657538459751965041@ghanshyammann.com> Message-ID: <17c9dedda65.b7d6d1961336825.2353101682575573521@ghanshyammann.com> Hello Everyone, TC PTG agenda and schedule (if any topic conflict please ping me no IRC) is there in etherpad. non-tc members, please add PING and your name under the topic you would like to participate. I have added a few of them. https://etherpad.opendev.org/p/tc-yoga-ptg -gmann ---- On Thu, 22 Jul 2021 09:13:10 -0500 Ghanshyam Mann wrote ---- > Booked the below slots for TC: > > - Monday 15-17 UTC - TC+PTL interaction > - Thursday-Friday 13-17 UTC - TC discussions > > -gmann > > > ---- On Tue, 13 Jul 2021 12:53:37 -0500 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > As you already know that the Yoga cycle virtual PTG will be held between 18th - 22nd October[1]. > > > > To plan the Technical Committee PTG sessions, please do the following: > > > > 1. Fill the below doodle poll as per your availability. Please fill it soon as deadline to book the slot is 21th July. > > > > - https://doodle.com/poll/6dfdmdfi4s8wc7cd > > > > 2. Add the topics you would like to discuss to the below etherpad. > > > > - https://etherpad.opendev.org/p/tc-yoga-ptg > > > > NOTE: this is not limited to TC members only; I would like all community members to > > fill the doodle poll and, add the topics you would like or want TC members to discuss in PTG. > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023540.html > > > > -gmann > > > > > > From sbauza at redhat.com Wed Oct 20 14:00:14 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 20 Oct 2021 16:00:14 +0200 Subject: [Nova][interop] In-Reply-To: References: <17b8827a008.1105017ee301036.2872782249905206264@ghanshyammann.com> Message-ID: On Wed, Oct 20, 2021 at 3:33 PM Arkady Kanevsky wrote: > Sylvain, > It was me who put 16:00 UTC for interop. > I cannot make it at 15:00 UTC. And I have on my calendar as per the > original email agreement at 16:00. > OK, then something got changed in the meantime. I'll ask the other contributors who planned to discuss other topics at this time to be present one hour before. See you then at 4pm UTC. -S Thanks, > Arkady > > On Wed, Oct 20, 2021 at 8:21 AM Sylvain Bauza wrote: > >> >> >> On Fri, Aug 27, 2021 at 5:18 PM Arkady Kanevsky >> wrote: >> >>> Thanks Ghanshyam. >>> I will add time slot now, and detail of the meeting later when we have a >>> draft of next interop guidelines. >>> Thanks,Arkady >>> >>> >> Can we clarify which timeslot we're discussing ? >> In the agenda, the interop session was for today 3pm UTC but someone >> wrote that it could be 4pm UTC which makes things confusing as we already >> planned other topics for the end of the day. >> Since we accepted those other topics for the end of day, it would be >> difficult to move those as people already planned to attend them. 
>> >> Thanks, >> -Sylvain >> >> On Fri, Aug 27, 2021 at 11:08 AM Ghanshyam Mann >>> wrote: >>> >>>> HI Arkady, >>>> >>>> Please add it in https://etherpad.opendev.org/p/nova-yoga-ptg >>>> >>>> also it will be good to add detail about specific thing to discuss as >>>> nova API changes are with microversion so they do not actually >>>> effect current guidelines until interop start doing the microversion >>>> capability. >>>> >>>> -gmann >>>> >>>> >>>> ---- On Fri, 27 Aug 2021 09:49:15 -0500 Arkady Kanevsky < >>>> akanevsk at redhat.com> wrote ---- >>>> > Sean and Nova team,Interop WG would like 20-30 minutes on Yaga PTG >>>> agenda to discuss latest interop guidelines for Nova.If you can point us to >>>> etherpad then we can add it to agenda.Thanks,Arkady >>>> > >>>> > -- >>>> > Arkady Kanevsky, Ph.D.Phone: 972 707-6456Corporate Phone: 919 >>>> 729-5744 ext. 8176456 >>>> > >>>> >>>> >>> >>> -- >>> Arkady Kanevsky, Ph.D. >>> Phone: 972 707-6456 >>> Corporate Phone: 919 729-5744 ext. 8176456 >>> >> > > -- > Arkady Kanevsky, Ph.D. > Phone: 972 707-6456 > Corporate Phone: 919 729-5744 ext. 8176456 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zaitcev at redhat.com Wed Oct 20 14:00:17 2021 From: zaitcev at redhat.com (Pete Zaitcev) Date: Wed, 20 Oct 2021 09:00:17 -0500 Subject: [tempest] S3 API tests In-Reply-To: <17c9dccdaff.d1bdf3031333248.3125733454723960826@ghanshyammann.com> References: <17c9dccdaff.d1bdf3031333248.3125733454723960826@ghanshyammann.com> Message-ID: <20211020090017.665199fe@suzdal.zaitcev.lan> On Wed, 20 Oct 2021 08:03:40 -0500 Ghanshyam Mann wrote: > ---- On Wed, 20 Oct 2021 04:43:37 -0500 Vladimir Prokofev wrote ---- > > Are there any Swift S3 API tests in tempest? I didn't find any standard packages, nor as a plugin. > > I do not think we have, all tests tempest have for swift are under > https://github.com/openstack/tempest/tree/master/tempest/api/object_storage This came up at PTG on Monday during the Interop meeting with Arkady. We cannot baseline S3 support in interop because Tempest does not have any S3 tests, so the interoperable Swift does not need to have S3. I don't know if anything needs to be done here. Ceph has independent S3 compliance tests, FWIW (independent means not published by Amazon). They are even in Python. Naturally they are geared towards testing Ceph RGW: https://github.com/ceph/s3-tests -- Pete From akanevsk at redhat.com Wed Oct 20 14:03:01 2021 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Wed, 20 Oct 2021 09:03:01 -0500 Subject: [Nova][interop] In-Reply-To: References: <17b8827a008.1105017ee301036.2872782249905206264@ghanshyammann.com> Message-ID: many thanks On Wed, Oct 20, 2021 at 9:00 AM Sylvain Bauza wrote: > > > On Wed, Oct 20, 2021 at 3:33 PM Arkady Kanevsky > wrote: > >> Sylvain, >> It was me who put 16:00 UTC for interop. >> I cannot make it at 15:00 UTC. And I have on my calendar as per the >> original email agreement at 16:00. >> > > OK, then something got changed in the meantime. I'll ask the other > contributors who planned to discuss other topics at this time to be present > one hour before. > See you then at 4pm UTC. > > -S > > Thanks, >> Arkady >> >> On Wed, Oct 20, 2021 at 8:21 AM Sylvain Bauza wrote: >> >>> >>> >>> On Fri, Aug 27, 2021 at 5:18 PM Arkady Kanevsky >>> wrote: >>> >>>> Thanks Ghanshyam. >>>> I will add time slot now, and detail of the meeting later when we have >>>> a draft of next interop guidelines. 
>>>> Thanks,Arkady >>>> >>>> >>> Can we clarify which timeslot we're discussing ? >>> In the agenda, the interop session was for today 3pm UTC but someone >>> wrote that it could be 4pm UTC which makes things confusing as we already >>> planned other topics for the end of the day. >>> Since we accepted those other topics for the end of day, it would be >>> difficult to move those as people already planned to attend them. >>> >>> Thanks, >>> -Sylvain >>> >>> On Fri, Aug 27, 2021 at 11:08 AM Ghanshyam Mann >>>> wrote: >>>> >>>>> HI Arkady, >>>>> >>>>> Please add it in https://etherpad.opendev.org/p/nova-yoga-ptg >>>>> >>>>> also it will be good to add detail about specific thing to discuss as >>>>> nova API changes are with microversion so they do not actually >>>>> effect current guidelines until interop start doing the microversion >>>>> capability. >>>>> >>>>> -gmann >>>>> >>>>> >>>>> ---- On Fri, 27 Aug 2021 09:49:15 -0500 Arkady Kanevsky < >>>>> akanevsk at redhat.com> wrote ---- >>>>> > Sean and Nova team,Interop WG would like 20-30 minutes on Yaga PTG >>>>> agenda to discuss latest interop guidelines for Nova.If you can point us to >>>>> etherpad then we can add it to agenda.Thanks,Arkady >>>>> > >>>>> > -- >>>>> > Arkady Kanevsky, Ph.D.Phone: 972 707-6456Corporate Phone: 919 >>>>> 729-5744 ext. 8176456 >>>>> > >>>>> >>>>> >>>> >>>> -- >>>> Arkady Kanevsky, Ph.D. >>>> Phone: 972 707-6456 >>>> Corporate Phone: 919 729-5744 ext. 8176456 >>>> >>> >> >> -- >> Arkady Kanevsky, Ph.D. >> Phone: 972 707-6456 >> Corporate Phone: 919 729-5744 ext. 8176456 >> > -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 20 14:08:36 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 20 Oct 2021 09:08:36 -0500 Subject: [tempest] S3 API tests In-Reply-To: <20211020090017.665199fe@suzdal.zaitcev.lan> References: <17c9dccdaff.d1bdf3031333248.3125733454723960826@ghanshyammann.com> <20211020090017.665199fe@suzdal.zaitcev.lan> Message-ID: <17c9e084af7.b51570371339549.5256246560393668034@ghanshyammann.com> ---- On Wed, 20 Oct 2021 09:00:17 -0500 Pete Zaitcev wrote ---- > On Wed, 20 Oct 2021 08:03:40 -0500 > Ghanshyam Mann wrote: > > ---- On Wed, 20 Oct 2021 04:43:37 -0500 Vladimir Prokofev wrote ---- > > > > Are there any Swift S3 API tests in tempest? I didn't find any standard packages, nor as a plugin. > > > > I do not think we have, all tests tempest have for swift are under > > https://github.com/openstack/tempest/tree/master/tempest/api/object_storage > > This came up at PTG on Monday during the Interop meeting with Arkady. > We cannot baseline S3 support in interop because Tempest does not have > any S3 tests, so the interoperable Swift does not need to have S3. > I don't know if anything needs to be done here. > > Ceph has independent S3 compliance tests, FWIW (independent means > not published by Amazon). They are even in Python. Naturally they are > geared towards testing Ceph RGW: > https://github.com/ceph/s3-tests If interop need to have test for the capability then, we are fine and test can be added. 
-gmann > > -- Pete > > > From ltoscano at redhat.com Wed Oct 20 14:32:03 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 20 Oct 2021 16:32:03 +0200 Subject: [tempest] S3 API tests In-Reply-To: <17c9e084af7.b51570371339549.5256246560393668034@ghanshyammann.com> References: <20211020090017.665199fe@suzdal.zaitcev.lan> <17c9e084af7.b51570371339549.5256246560393668034@ghanshyammann.com> Message-ID: <23754655.EfDdHjke4D@whitebase.usersys.redhat.com> On Wednesday, 20 October 2021 16:08:36 CEST Ghanshyam Mann wrote: > ---- On Wed, 20 Oct 2021 09:00:17 -0500 Pete Zaitcev > wrote ---- > > On Wed, 20 Oct 2021 08:03:40 -0500 > > > > Ghanshyam Mann wrote: > > > ---- On Wed, 20 Oct 2021 04:43:37 -0500 Vladimir Prokofev > > > wrote ---- > > > > > > Are there any Swift S3 API tests in tempest? I didn't find any > > > > standard packages, nor as a plugin. > > > > > I do not think we have, all tests tempest have for swift are under > > > https://github.com/openstack/tempest/tree/master/tempest/api/object_sto > > > rage > > > > This came up at PTG on Monday during the Interop meeting with Arkady. > > We cannot baseline S3 support in interop because Tempest does not have > > any S3 tests, so the interoperable Swift does not need to have S3. > > I don't know if anything needs to be done here. > > > > Ceph has independent S3 compliance tests, FWIW (independent means > > not published by Amazon). They are even in Python. Naturally they are > > > > geared towards testing Ceph RGW: > > https://github.com/ceph/s3-tests > > If interop need to have test for the capability then, we are fine and test > can be added. Wouldn't it make sense to adopt/fix/extend an existing "native" S3 test suite like the one Pete mentioned, instead of rewriting a new set of tests? Or maybe that's what you proposed? -- Luigi From zigo at debian.org Wed Oct 20 15:17:54 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 20 Oct 2021 17:17:54 +0200 Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min] In-Reply-To: <20211018121818.rerqlp7ek7z3rnya@yuggoth.org> References: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com> <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org> <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net> <20211018121818.rerqlp7ek7z3rnya@yuggoth.org> Message-ID: <19a6d9d2-ed39-39cd-1c85-8fee97f93b8b@debian.org> On 10/18/21 2:18 PM, Jeremy Stanley wrote: > On 2021-10-18 11:22:11 +0800 (+0800), ??? wrote: >> Skyline-apiserver is a pure Python code project, following the >> Python wheel packaging standard, using pip for installation, and >> the dependency management of the project using poetry[1] >> >> Skyline-console uses npm for dependency management, development >> and testing. During the packaging and distribution process, >> webpack will be used to process the source code and dependent >> library code first, and output the packaged static resource files. >> These static resource files will be stored in an empty Python >> module[2]. > [...] > > GNU/Linux distributions like Debian are going to want to separately > package the original source code for all of these Web components and > their dependencies, and recreate them at the time the distro's > binary packages are built. I believe the concerns are making it easy > for them to find the source for all of it, and to attempt to use > dependencies which these distributions already package in order to > reduce their workload. 
Further, it helps to make sure the software > is capable of using multiple versions of its dependencies when > possible, because it's going to be installed into shared > environments with other software which may have some of the same > dependencies, so may need to be able to agree on common versions > they all support. Hi, Thanks Jeremy for summing-up things in a better way that I ever would. Also, using pip is *not* an option for distros. I'm not sure what you mean by "following the Python wheel packaging standard", but to me, we're not there yet. I'd like to have a normal setup.py and setup.cfg file in each Python module so it's easy to call "python3 setup.py install --root $(pwd)/debian/skyline-apiserver --install-layout=deb". I would also expect to see a "normal" requirements.txt and test-requirements.txt like in every other OpenStack project, the use of stestr to run unit tests, and so on. Right now, when looking at the skyline-apiserver as the Debian OpenStack package maintainer, I'd need a lot of manual work to use pyproject.toml instead of my standard tooling. As I understand, dependencies are expressed in the pyproject.toml, but then how do I get the Python code installed under debian/skyline-apiserver? BTW, what made you choose something completely different than the rest of the OpenStack project? Cheers, Thomas Goirand (zigo) From arnaud.morin at gmail.com Wed Oct 20 15:28:38 2021 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Wed, 20 Oct 2021 15:28:38 +0000 Subject: neutron l3 agents number Message-ID: Hey team, When using DVR + HA, we endup with all routers beeing deployed on the computes (the DVR part) and 3 routers on dvr_snat nodes (for HA). The question is why 3 for snats? I know that this is the max_l3_agents_per_router config parameter, but 2 seems enough? On the other side, DHCP agent default (dhcp_agents_per_network) is 1, while this is also a precious service provided in the tenant network. So, is there any downside on having only 2 agents for routers? Thanks in advance, Arnaud. From gmann at ghanshyammann.com Wed Oct 20 15:49:06 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 20 Oct 2021 10:49:06 -0500 Subject: [tempest] S3 API tests In-Reply-To: <23754655.EfDdHjke4D@whitebase.usersys.redhat.com> References: <20211020090017.665199fe@suzdal.zaitcev.lan> <17c9e084af7.b51570371339549.5256246560393668034@ghanshyammann.com> <23754655.EfDdHjke4D@whitebase.usersys.redhat.com> Message-ID: <17c9e644e89.bb0d9bb71347784.481792923688873598@ghanshyammann.com> ---- On Wed, 20 Oct 2021 09:32:03 -0500 Luigi Toscano wrote ---- > On Wednesday, 20 October 2021 16:08:36 CEST Ghanshyam Mann wrote: > > ---- On Wed, 20 Oct 2021 09:00:17 -0500 Pete Zaitcev > > wrote ---- > > > On Wed, 20 Oct 2021 08:03:40 -0500 > > > > > > Ghanshyam Mann wrote: > > > > ---- On Wed, 20 Oct 2021 04:43:37 -0500 Vladimir Prokofev > > > > wrote ---- > > > > > > > Are there any Swift S3 API tests in tempest? I didn't find any > > > > > standard packages, nor as a plugin. > > > > > > I do not think we have, all tests tempest have for swift are under > > > > https://github.com/openstack/tempest/tree/master/tempest/api/object_sto > > > > rage > > > > > > This came up at PTG on Monday during the Interop meeting with Arkady. > > > We cannot baseline S3 support in interop because Tempest does not have > > > any S3 tests, so the interoperable Swift does not need to have S3. > > > I don't know if anything needs to be done here. 
> > > > > > Ceph has independent S3 compliance tests, FWIW (independent means > > > not published by Amazon). They are even in Python. Naturally they are > > > > > > geared towards testing Ceph RGW: > > > https://github.com/ceph/s3-tests > > > > If interop need to have test for the capability then, we are fine and test > > can be added. > > Wouldn't it make sense to adopt/fix/extend an existing "native" S3 test suite > like the one Pete mentioned, instead of rewriting a new set of tests? Or maybe > that's what you proposed? For interop, we need tests to exist as one of the official OpenStack project under TC. Tests can be in tempest or tempest plugin. So either write/copy the tests in tempest or new plugins under swift. -gmann > > -- > Luigi > > > From wodel.youchi at gmail.com Wed Oct 20 09:15:31 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Wed, 20 Oct 2021 10:15:31 +0100 Subject: kolla-ansible wallaby manila ceph pacific In-Reply-To: References: Message-ID: Hi, and thanks I tried to apply the patch, but it didn't work, this is the manila-share.log. By the way, I did change to caps for the manila client to what is said in wallaby documentation, that is : [client.manila] key = keyyyyyyyy..... * caps mgr = "allow rw" caps mon = "allow r"* [root at ControllerA manila]# cat manila-share.log 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] Skipping periodic task update_share_usage_size because it is disabled 2021-10-20 10:03:22.310 7 INFO oslo_service.service [req-5b253656-4fe2-4087-b4ab-9ba2a8a0443f - - - - -] Starting 1 workers 2021-10-20 10:03:22.315 30 INFO manila.service [-] Starting manila-share node (version 12.0.1) 2021-10-20 10:03:22.320 30 INFO manila.share.drivers.cephfs.driver [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep h client found, connecting... 2021-10-20 10:03:22.368 30 INFO manila.share.drivers.cephfs.driver [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep h client connection complete. 2021-10-20 10:03:22.372 30 ERROR manila.share.manager [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered during i n*itialization* * of driver CephFSDriver at ControllerA@cephfsnative1: manila.exception.ShareBackendException: json_command failed - prefix=fs volume ls, argdict={'format': 'json'} - exception message: Bad target type 'mon-mgr'. 
2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback (most recent call last): * 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce phfs/driver.py", line 191, in rados_command 2021-10-20 10:03:22.372 30 ERROR manila.share.manager timeout=RADOS_TIMEOUT) 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ command 2021-10-20 10:03:22.372 30 ERROR manila.share.manager inbuf, timeout, verbose) 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ command_retry 2021-10-20 10:03:22.372 30 ERROR manila.share.manager return send_command(*args, **kwargs) 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ command 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise ArgumentValid("Bad target type '{0}'".format(target[0])) 2021-10-20 10:03:22.372 30 ERROR manila.share.manager ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' 2021-10-20 10:03:22.372 30 ERROR manila.share.manager 2021-10-20 10:03:22.372 30 ERROR manila.share.manager During handling of the above exception, another exception occurred: 2021-10-20 10:03:22.372 30 ERROR manila.share.manager 2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback (most recent call last): 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py ", line 346, in _driver_setup 2021-10-20 10:03:22.372 30 ERROR manila.share.manager self.driver.do_setup(ctxt) 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce phfs/driver.py", line 251, in do_setup 2021-10-20 10:03:22.372 30 ERROR manila.share.manager volname=self.volname) 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce phfs/driver.py", line 401, in volname 2021-10-20 10:03:22.372 30 ERROR manila.share.manager self.rados_client, "fs volume ls", json_obj=True) 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce phfs/driver.py", line 205, in rados_command 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise exception.ShareBackendException(msg) 2021-10-20 10:03:22.372 30 ERROR manila.share.manager manila.exception.ShareBackendException: json_command failed - prefix=fs volume ls, argdict={'format': 'json'} - exception message: Bad target type 'mon-mgr'. 2021-10-20 10:03:22.372 30 ERROR manila.share.manager 2021-10-20 10:03:26.379 30 ERROR manila.share.manager [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered during i nitialization of driver CephFSDriver at ControllerA@cephfsnative1: manila.exception.ShareBackendException: json_command failed - prefix= fs volume ls, argdict={'format': 'json'} - exception message: Bad target type 'mon-mgr'. 
[The identical traceback is then repeated for every retry, at 10:03:26, 10:03:34, 10:03:50, 10:04:22, 10:05:26, 10:07:34 and 10:11:50, each attempt ending with the same ShareBackendException: json_command failed - prefix=fs volume ls, argdict={'format': 'json'} - exception message: Bad target type 'mon-mgr'.]

Regards

Le mer. 20 oct. 2021 à 00:14, Goutham Pacha Ravi a écrit : > > On Tue, Oct 19, 2021 at 2:35 PM wodel youchi > wrote: > >> Hi, >> Has anyone been successful in deploying Manila wallaby using >> kolla-ansible with ceph pacific as a backend? >> >> I have created the manila client in ceph pacific like this : >> >> ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, allow >> rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw' >> >> When I deploy, I get this error in manila's log file : >> Bad target type 'mon-mgr' >> Any ideas? >> > > Could you share the full log from the manila-share service? 
> There's an open bug related to manila/cephfs deployment: > https://bugs.launchpad.net/kolla-ansible/+bug/1935784 > Proposed fix: > https://review.opendev.org/c/openstack/kolla-ansible/+/802743 > > > > >> >> Regards. >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anlin.kong at gmail.com Wed Oct 20 19:43:59 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 21 Oct 2021 08:43:59 +1300 Subject: Trove guest agent and Rabbitmq In-Reply-To: References: Message-ID: I don't use Kolla, but the management network config in Trove is: [DEFAULT] management_networks = --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove Project Lead (OpenStack) OpenStack Cloud Provider Project Lead (Kubernetes) On Wed, Oct 20, 2021 at 9:23 PM wodel youchi wrote: > Hi Lingxian, and thanks. > > Could you be more specific? What is this management network? in > globals.yml this is what I have in terms of networking : > > kolla_internal_vip_address: "10.10.3.1" > kolla_internal_fqdn: "dashinternal.domain.tld" > kolla_external_vip_address: "x.x.x.x" > kolla_external_fqdn: "dash.domain.tld" > > network_interface: "bond0" > kolla_external_vip_interface: "bond1" > api_interface: "bond1.30" > storage_interface: "bond1.10" > tunnel_interface: "bond1.40" > octavia_network_interface: "{{ api_interface }}" > neutron_external_interface: "bond2" > neutron_plugin_agent: "openvswitch" > > What is this management network? Do I have to create it? If yes how? > > > Regards. > > Le mer. 20 oct. 2021 à 00:48, Lingxian Kong a > écrit : > >> Hi Wodel, >> >> There is a management network (Neutron network) configured for >> communication between controller services and guest agent, you need to >> config in the infra router (using the network vlan) layer to make sure they >> can talk. >> >> --- >> Lingxian Kong >> Senior Cloud Engineer (Catalyst Cloud) >> Trove Project Lead (OpenStack) >> OpenStack Cloud Provider Project Lead (Kubernetes) >> >> >> On Wed, Oct 20, 2021 at 10:30 AM wodel youchi >> wrote: >> >>> Hi, >>> >>> I am trying to deploy Trove. I am using the Kolla-ansible and Openstack >>> wallaby version. >>> From the documentation, the Trove guest agent, which runs inside the >>> Trove instance, communicates with the trove-taskmanager via rabbitmq, how is >>> this done? >>> >>> The rabbitmq is running in the api network, the instance is running in >>> the tunnel (tenant) network, and in my case, those networks are in >>> different vlans, how should I configure this? >>> >>> Regards. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tburke at nvidia.com Wed Oct 20 23:35:17 2021 From: tburke at nvidia.com (Timothy Burke) Date: Wed, 20 Oct 2021 23:35:17 +0000 Subject: [swift][ptg] Ops feedback session soon - Oct 21 at 13:00 UTC Message-ID: As in PTGs past, we're getting devs and ops together to talk about Swift: what's working, what isn't, and what would be most helpful to improve. We're meeting in Havana (https://www.openstack.org/ptg/rooms/havana) at 13:00UTC -- if you run a Swift cluster, we hope to see you there! Even if you can't make it, I'd appreciate if you can offer some feedback on this PTG's etherpad (https://etherpad.opendev.org/p/swift-yoga-ops-feedback). Tim -------------- next part -------------- An HTML attachment was scrubbed... 
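A minimal sketch of the management network being described in the Trove thread, with purely illustrative names, VLAN ID, addressing and UUID placeholder (only the management_networks option itself comes from the thread; the rest depends on the deployment):

  openstack network create trove-mgmt --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 73
  openstack subnet create trove-mgmt-subnet --network trove-mgmt --subnet-range 192.168.73.0/24

and then, in the trove.conf used by the controller services:

  [DEFAULT]
  management_networks = <UUID of the trove-mgmt network>

Guest instances get a port on that network, so it has to be able to reach the RabbitMQ endpoint the guest agent is given; a routed network that can reach RabbitMQ should also work instead of a VLAN provider network.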
URL: From arnaud.morin at gmail.com Thu Oct 21 05:34:43 2021 From: arnaud.morin at gmail.com (Arnaud) Date: Thu, 21 Oct 2021 07:34:43 +0200 Subject: [neutron] neutron l3 agents number In-Reply-To: References: Message-ID: <4FF7BC3A-626E-4DD9-A8D6-2C87730BAF96@gmail.com> Sorry, forgot to add neutron header Kind regards, Arnaud Le 20 octobre 2021 17:28:39 GMT+02:00, Arnaud Morin a écrit : >Hey team, > >When using DVR + HA, we end up with all routers being deployed on the >computes (the DVR part) and 3 routers on dvr_snat nodes (for HA). > >The question is why 3 for snats? >I know that this is the max_l3_agents_per_router config parameter, but >2 seems enough? > >On the other side, DHCP agent default (dhcp_agents_per_network) is 1, >while this is also a precious service provided in the tenant network. > >So, is there any downside to having only 2 agents for routers? > >Thanks in advance, > >Arnaud. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Thu Oct 21 06:49:50 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 21 Oct 2021 08:49:50 +0200 Subject: [neutron] neutron l3 agents number In-Reply-To: <4FF7BC3A-626E-4DD9-A8D6-2C87730BAF96@gmail.com> References: <4FF7BC3A-626E-4DD9-A8D6-2C87730BAF96@gmail.com> Message-ID: <20211021064950.75zsotsd67olw43n@p1.localdomain> Hi, On Thu, Oct 21, 2021 at 07:34:43AM +0200, Arnaud wrote: > Sorry, forgot to add neutron header > > Kind regards, > Arnaud > > Le 20 octobre 2021 17:28:39 GMT+02:00, Arnaud Morin a écrit : > >Hey team, > > > >When using DVR + HA, we end up with all routers being deployed on the > >computes (the DVR part) and 3 routers on dvr_snat nodes (for HA). > > > >The question is why 3 for snats? > >I know that this is the max_l3_agents_per_router config parameter, but > >2 seems enough? > > > >On the other side, DHCP agent default (dhcp_agents_per_network) is 1, > >while this is also a precious service provided in the tenant network. > > > >So, is there any downside to having only 2 agents for routers? TBH I don't think there is any downside to having only 2 agents hosting each router (SNAT in case of DVR). Actually this is what we are e.g. testing in the neutron-ovs-tempest-dvr-ha-multinode-full job as we have only 2 nodes there. > > > > >Thanks in advance, > > > >Arnaud. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available 
URL: From y.furukawa8 at gmail.com Thu Oct 21 08:25:32 2021 From: y.furukawa8 at gmail.com (F Yushiro) Date: Thu, 21 Oct 2021 17:25:32 +0900 Subject: Cannot login to gerrit Message-ID: Hi, I tried to log in to Gerrit w/ OpenID but was redirected to the following page and saw "Not Found". https://review.opendev.org/SignInFailure,SIGN_IN,Contact+site+administrator Could you please help me to log in? My login email is y.furukawa8 at gmail.com. I used to use my ex-company address. When I left the company, I've modified my email address to the above gmail one. I wonder if that might have caused this situation. Best regards, -------------- next part -------------- An HTML attachment was scrubbed... 
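The two settings discussed in the L3-agent thread above live in neutron.conf on the hosts running neutron-server; a minimal sketch of the 2-agent variant being debated (the values are only illustrative, and per the thread the defaults are 3 and 1 respectively):

  [DEFAULT]
  max_l3_agents_per_router = 2
  dhcp_agents_per_network = 2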
URL: From wodel.youchi at gmail.com Thu Oct 21 08:59:02 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 21 Oct 2021 09:59:02 +0100 Subject: Trove guest agent and Rabbitmq In-Reply-To: References: Message-ID: Hi again, Reading, I did find that, do I have to create another provider network (internal one), or does a simple tenant routed network can do the job? I tried to create a tenant network and route it using static routes, but it didn't work for me. Regards. Le mer. 20 oct. 2021 ? 20:44, Lingxian Kong a ?crit : > I don't use Kolla, but the management network config in Trove is: > > [DEFAULT] > management_networks = > > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove Project Lead (OpenStack) > OpenStack Cloud Provider Project Lead (Kubernetes) > > > On Wed, Oct 20, 2021 at 9:23 PM wodel youchi > wrote: > >> Hi Lingxian, and thanks. >> >> Could you be more specific? What is this management network? in >> globals.yml this what I have in terme of networking : >> >> kolla_internal_vip_address: "10.10.3.1" >> kolla_internal_fqdn: "dashinternal.domain.tld" >> kolla_external_vip_address: "x.x.x.x" >> kolla_external_fqdn: "dash.domain.tld" >> >> >> >> >> >> >> *network_interface: "bond0"kolla_external_vip_interface: >> "bond1"api_interface: "bond1.30"storage_interface: >> "bond1.10"tunnel_interface: "bond1.40"octavia_network_interface: "{{ >> api_interface }}"neutron_external_interface: "bond2"* >> neutron_plugin_agent: "openvswitch" >> >> What is this management network? Do I have to create it? If yes how? >> >> >> Regards. >> >> Le mer. 20 oct. 2021 ? 00:48, Lingxian Kong a >> ?crit : >> >>> Hi Wodel, >>> >>> There is a management network (Neutron network) configured for >>> communication between controller services and guest agent, you need to >>> config in the infra router (using the network vlan) layer to make sure they >>> can talk. >>> >>> --- >>> Lingxian Kong >>> Senior Cloud Engineer (Catalyst Cloud) >>> Trove Project Lead (OpenStack) >>> OpenStack Cloud Provider Project Lead (Kubernetes) >>> >>> >>> On Wed, Oct 20, 2021 at 10:30 AM wodel youchi >>> wrote: >>> >>>> Hi, >>>> >>>> I am trying to deploy Trove. I am using the Kolla-ansible and Openstack >>>> wallaby version. >>>> From the documentation, the Trove guest agent, which runs inside the >>>> Tove instance communicates with the trove-taskmanager via rabbitmq, how is >>>> this done? >>>> >>>> The rabbitmq is running in the api network, the instance is running in >>>> the tunnel (tenant) network, and in my case, those networks are in >>>> different vlans, how should I configure this? >>>> >>>> Regards. >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Thu Oct 21 09:37:09 2021 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Thu, 21 Oct 2021 09:37:09 +0000 Subject: [neutron] neutron l3 agents number In-Reply-To: <20211021064950.75zsotsd67olw43n@p1.localdomain> References: <4FF7BC3A-626E-4DD9-A8D6-2C87730BAF96@gmail.com> <20211021064950.75zsotsd67olw43n@p1.localdomain> Message-ID: Thanks slawek, Actually, we dig a little bit into code: https://review.opendev.org/c/openstack/neutron/+/64553 In patchet 59, a discussion between Carl and Assaf seems to be the decision of having 3: - "An additional standby could help certain topologies though I'm sure." 
- "Fair enough" Cheers, On 21.10.21 - 08:49, Slawek Kaplonski wrote: > Hi, > > On Thu, Oct 21, 2021 at 07:34:43AM +0200, Arnaud wrote: > > Sorry, forgot to add neutron header > > > > Kind regards, > > Arnaud > > > > Le 20 octobre 2021 17:28:39 GMT+02:00, Arnaud Morin a ?crit?: > > >Hey team, > > > > > >When using DVR + HA, we endup with all routers beeing deployed on the > > >computes (the DVR part) and 3 routers on dvr_snat nodes (for HA). > > > > > >The question is why 3 for snats? > > >I know that this is the max_l3_agents_per_router config parameter, but > > >2 seems enough? > > > > > >On the other side, DHCP agent default (dhcp_agents_per_network) is 1, > > >while this is also a precious service provided in the tenant network. > > > > > >So, is there any downside on having only 2 agents for routers? > > TBH I don't think there is any downside to have only 2 agents hosting each > router (SNAT in case of DVR). Actually this is what we are e.g. testing in the > neutron-ovs-tempest-dvr-ha-multinode-full job as we have only 2 nodes there. > > > > > > >Thanks in advance, > > > > > >Arnaud. > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From syedammad83 at gmail.com Thu Oct 21 10:15:34 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Thu, 21 Oct 2021 15:15:34 +0500 Subject: Trove guest agent and Rabbitmq In-Reply-To: References: Message-ID: Hi Wodel, What I did was created a provider vlan network and created a security group for it. The network should be able to reach RMQ of your openstack infra. Ammad On Thu, Oct 21, 2021 at 2:07 PM wodel youchi wrote: > Hi again, > > Reading, I did find that, do I have to create another provider network > (internal one), or does a simple tenant routed network can do the job? > I tried to create a tenant network and route it using static routes, but > it didn't work for me. > > Regards. > > Le mer. 20 oct. 2021 ? 20:44, Lingxian Kong a > ?crit : > >> I don't use Kolla, but the management network config in Trove is: >> >> [DEFAULT] >> management_networks = >> >> --- >> Lingxian Kong >> Senior Cloud Engineer (Catalyst Cloud) >> Trove Project Lead (OpenStack) >> OpenStack Cloud Provider Project Lead (Kubernetes) >> >> >> On Wed, Oct 20, 2021 at 9:23 PM wodel youchi >> wrote: >> >>> Hi Lingxian, and thanks. >>> >>> Could you be more specific? What is this management network? in >>> globals.yml this what I have in terme of networking : >>> >>> kolla_internal_vip_address: "10.10.3.1" >>> kolla_internal_fqdn: "dashinternal.domain.tld" >>> kolla_external_vip_address: "x.x.x.x" >>> kolla_external_fqdn: "dash.domain.tld" >>> >>> >>> >>> >>> >>> >>> *network_interface: "bond0"kolla_external_vip_interface: >>> "bond1"api_interface: "bond1.30"storage_interface: >>> "bond1.10"tunnel_interface: "bond1.40"octavia_network_interface: "{{ >>> api_interface }}"neutron_external_interface: "bond2"* >>> neutron_plugin_agent: "openvswitch" >>> >>> What is this management network? Do I have to create it? If yes how? >>> >>> >>> Regards. >>> >>> Le mer. 20 oct. 2021 ? 00:48, Lingxian Kong a >>> ?crit : >>> >>>> Hi Wodel, >>>> >>>> There is a management network (Neutron network) configured for >>>> communication between controller services and guest agent, you need to >>>> config in the infra router (using the network vlan) layer to make sure they >>>> can talk. 
>>>> >>>> --- >>>> Lingxian Kong >>>> Senior Cloud Engineer (Catalyst Cloud) >>>> Trove Project Lead (OpenStack) >>>> OpenStack Cloud Provider Project Lead (Kubernetes) >>>> >>>> >>>> On Wed, Oct 20, 2021 at 10:30 AM wodel youchi >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> I am trying to deploy Trove. I am using the Kolla-ansible and >>>>> Openstack wallaby version. >>>>> From the documentation, the Trove guest agent, which runs inside the >>>>> Tove instance communicates with the trove-taskmanager via rabbitmq, how is >>>>> this done? >>>>> >>>>> The rabbitmq is running in the api network, the instance is running in >>>>> the tunnel (tenant) network, and in my case, those networks are in >>>>> different vlans, how should I configure this? >>>>> >>>>> Regards. >>>>> >>>> -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Thu Oct 21 11:18:31 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 21 Oct 2021 13:18:31 +0200 Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min] In-Reply-To: <19a6d9d2-ed39-39cd-1c85-8fee97f93b8b@debian.org> References: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com> <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org> <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net> <20211018121818.rerqlp7ek7z3rnya@yuggoth.org> <19a6d9d2-ed39-39cd-1c85-8fee97f93b8b@debian.org> Message-ID: On Wed, Oct 20, 2021 at 5:22 PM Thomas Goirand wrote: > On 10/18/21 2:18 PM, Jeremy Stanley wrote: > > On 2021-10-18 11:22:11 +0800 (+0800), ??? wrote: > >> Skyline-apiserver is a pure Python code project, following the > >> Python wheel packaging standard, using pip for installation, and > >> the dependency management of the project using poetry[1] > >> > >> Skyline-console uses npm for dependency management, development > >> and testing. During the packaging and distribution process, > >> webpack will be used to process the source code and dependent > >> library code first, and output the packaged static resource files. > >> These static resource files will be stored in an empty Python > >> module[2]. > > [...] > > > > GNU/Linux distributions like Debian are going to want to separately > > package the original source code for all of these Web components and > > their dependencies, and recreate them at the time the distro's > > binary packages are built. I believe the concerns are making it easy > > for them to find the source for all of it, and to attempt to use > > dependencies which these distributions already package in order to > > reduce their workload. Further, it helps to make sure the software > > is capable of using multiple versions of its dependencies when > > possible, because it's going to be installed into shared > > environments with other software which may have some of the same > > dependencies, so may need to be able to agree on common versions > > they all support. > > Hi, > > Thanks Jeremy for summing-up things in a better way that I ever would. > > Also, using pip is *not* an option for distros. I'm not sure what you > mean by "following the Python wheel packaging standard", but to me, > we're not there yet. I'd like to have a normal setup.py and setup.cfg > file in each Python module so it's easy to call "python3 setup.py > install --root $(pwd)/debian/skyline-apiserver --install-layout=deb". 
> Side note: calling setup.py is essentially deprecated: https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html > > I would also expect to see a "normal" requirements.txt and > test-requirements.txt like in every other OpenStack project, the use of > stestr to run unit tests, and so on. > > Right now, when looking at the skyline-apiserver as the Debian OpenStack > package maintainer, I'd need a lot of manual work to use pyproject.toml > instead of my standard tooling. > PyProject is the universal way forward, you'll (and we'll) need to adopt sooner or later. Dmitry > > As I understand, dependencies are expressed in the pyproject.toml, but > then how do I get the Python code installed under debian/skyline-apiserver? > > BTW, what made you choose something completely different than the rest > of the OpenStack project? > > Cheers, > > Thomas Goirand (zigo) > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsanjeewa at kln.ac.lk Thu Oct 21 04:29:01 2021 From: bsanjeewa at kln.ac.lk (Buddhika Godakuru) Date: Thu, 21 Oct 2021 09:59:01 +0530 Subject: kolla-ansible wallaby manila ceph pacific In-Reply-To: References: Message-ID: Dear Wodel, I think this is because manila has changed the way how to set/create auth ID in Wallaby for native CephFS driver. For the patch to work, you should change the command *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* to something like, ceph auth get-or-create client.manila -o manila.keyring mgr 'allow rw' mon 'allow r' Please see Manila Wallaby CephFS Driver document [1] Hope this helps. Thank you [1] https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html#authorizing-the-driver-to-communicate-with-ceph On Wed, 20 Oct 2021 at 23:19, wodel youchi wrote: > Hi, and thanks > > I tried to apply the patch, but it didn't work, this is the > manila-share.log. > By the way, I did change to caps for the manila client to what is said in > wallaby documentation, that is : > [client.manila] > key = keyyyyyyyy..... > > * caps mgr = "allow rw" caps mon = "allow r"* > > [root at ControllerA manila]# cat manila-share.log > 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] Skipping > periodic task update_share_usage_size because it is disabled > 2021-10-20 10:03:22.310 7 INFO oslo_service.service > [req-5b253656-4fe2-4087-b4ab-9ba2a8a0443f - - - - -] Starting 1 workers > 2021-10-20 10:03:22.315 30 INFO manila.service [-] Starting manila-share > node (version 12.0.1) > 2021-10-20 10:03:22.320 30 INFO manila.share.drivers.cephfs.driver > [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep > h client found, connecting... > 2021-10-20 10:03:22.368 30 INFO manila.share.drivers.cephfs.driver > [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep > h client connection complete. > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered > during i > n*itialization* > > * of driver CephFSDriver at ControllerA@cephfsnative1: > manila.exception.ShareBackendException: json_command failed - prefix=fs > volume ls, argdict={'format': 'json'} - exception message: Bad target type > 'mon-mgr'. 
2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback > (most recent call last): * > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File > "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce > phfs/driver.py", line 191, in rados_command > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > timeout=RADOS_TIMEOUT) > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File > "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ > command > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager inbuf, timeout, > verbose) > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File > "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ > command_retry > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager return > send_command(*args, **kwargs) > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File > "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ > command > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise > ArgumentValid("Bad target type '{0}'".format(target[0])) > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager During handling of > the above exception, another exception occurred: > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback (most > recent call last): > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File > "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py > ", line 346, in _driver_setup > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > self.driver.do_setup(ctxt) > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File > "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce > phfs/driver.py", line 251, in do_setup > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > volname=self.volname) > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File > "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce > phfs/driver.py", line 401, in volname > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > self.rados_client, "fs volume ls", json_obj=True) > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File > "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce > phfs/driver.py", line 205, in rados_command > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise > exception.ShareBackendException(msg) > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > manila.exception.ShareBackendException: json_command failed - prefix=fs > volume > ls, argdict={'format': 'json'} - exception message: Bad target type > 'mon-mgr'. > 2021-10-20 10:03:22.372 30 ERROR manila.share.manager > 2021-10-20 10:03:26.379 30 ERROR manila.share.manager > [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered > during i > nitialization of driver CephFSDriver at ControllerA@cephfsnative1: > manila.exception.ShareBackendException: json_command failed - prefix= > fs volume ls, argdict={'format': 'json'} - exception message: Bad target > type 'mon-mgr'. 
> [The identical traceback is then repeated for every retry, at 10:03:26, 10:03:34, 10:03:50, 10:04:22, 10:05:26, 10:07:34 and 10:11:50, each attempt ending with the same ShareBackendException: json_command failed - prefix=fs volume ls, argdict={'format': 'json'} - exception message: Bad target type 'mon-mgr'.]
> > Regards > > Le mer. 20 oct. 2021 à 00:14, Goutham Pacha Ravi > a écrit : > >> >> On Tue, Oct 19, 2021 at 2:35 PM wodel youchi >> wrote: >> >>> Hi, >>> Has anyone been successful in deploying Manila wallaby using >>> kolla-ansible with ceph pacific as a backend? >>> >>> I have created the manila client in ceph pacific like this : >>> >>> ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, allow >>> rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw' >>> >>> When I deploy, I get this error in manila's log file : >>> Bad target type 'mon-mgr' >>> Any ideas? >>> >> >> Could you share the full log from the manila-share service? 
>> There's an open bug related to manila/cephfs deployment: >> https://bugs.launchpad.net/kolla-ansible/+bug/1935784 >> Proposed fix: >> https://review.opendev.org/c/openstack/kolla-ansible/+/802743 >> >> >> >> >>> >>> Regards. >>> >> -- ??????? ????? ???????? Buddhika Sanjeewa Godakuru Systems Analyst/Programmer Deputy Webmaster / University of Kelaniya Information and Communication Technology Centre (ICTC) University of Kelaniya, Sri Lanka, Kelaniya, Sri Lanka. Mobile : (+94) 071 5696981 Office : (+94) 011 2903420 / 2903424 -- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++? University of Kelaniya Sri Lanka, accepts no liability for the content of this email, or for the consequences of any actions taken on the basis of the information provided, unless that information is subsequently confirmed in writing. If you are not the intended recipient, this email and/or any information it contains should not be copied, disclosed, retained or used by you or any other party and the email and all its contents should be promptly deleted fully from our system and the sender informed. E-mail transmission cannot be guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Thu Oct 21 08:56:13 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 21 Oct 2021 09:56:13 +0100 Subject: kolla-ansible wallaby manila ceph pacific In-Reply-To: References: Message-ID: Hi, I did that already, I changed the keyring to "*ceph auth get-or-create client.manila -o manila.keyring mgr 'allow rw' mon 'allow r'*" it didn't work, then I tried with ceph octopus, same error. I applied the patch, then I recreated the keyring for manila as wallaby documentation, I get the error "*Bad target type 'mon-mgr'*" Regards. Le jeu. 21 oct. 2021 ? 05:29, Buddhika Godakuru a ?crit : > Dear Wodel, > I think this is because manila has changed the way how to set/create auth > ID in Wallaby for native CephFS driver. > For the patch to work, you should change the command > *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, allow > rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* > to something like, > ceph auth get-or-create client.manila -o manila.keyring mgr 'allow rw' mon > 'allow r' > > Please see Manila Wallaby CephFS Driver document [1] > > Hope this helps. > > Thank you > [1] > https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html#authorizing-the-driver-to-communicate-with-ceph > > On Wed, 20 Oct 2021 at 23:19, wodel youchi wrote: > >> Hi, and thanks >> >> I tried to apply the patch, but it didn't work, this is the >> manila-share.log. >> By the way, I did change to caps for the manila client to what is said in >> wallaby documentation, that is : >> [client.manila] >> key = keyyyyyyyy..... 
>> >> * caps mgr = "allow rw" caps mon = "allow r"* >> >> [root at ControllerA manila]# cat manila-share.log >> 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] Skipping >> periodic task update_share_usage_size because it is disabled >> 2021-10-20 10:03:22.310 7 INFO oslo_service.service >> [req-5b253656-4fe2-4087-b4ab-9ba2a8a0443f - - - - -] Starting 1 workers >> 2021-10-20 10:03:22.315 30 INFO manila.service [-] Starting manila-share >> node (version 12.0.1) >> 2021-10-20 10:03:22.320 30 INFO manila.share.drivers.cephfs.driver >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >> h client found, connecting... >> 2021-10-20 10:03:22.368 30 INFO manila.share.drivers.cephfs.driver >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >> h client connection complete. >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >> during i >> n*itialization* >> >> * of driver CephFSDriver at ControllerA@cephfsnative1: >> manila.exception.ShareBackendException: json_command failed - prefix=fs >> volume ls, argdict={'format': 'json'} - exception message: Bad target type >> 'mon-mgr'. 2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback >> (most recent call last): * >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 191, in rados_command >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> timeout=RADOS_TIMEOUT) >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >> command >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager inbuf, timeout, >> verbose) >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >> command_retry >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager return >> send_command(*args, **kwargs) >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >> command >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >> ArgumentValid("Bad target type '{0}'".format(target[0])) >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager During handling of >> the above exception, another exception occurred: >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >> ", line 346, in _driver_setup >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> self.driver.do_setup(ctxt) >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 251, in do_setup >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> volname=self.volname) >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 401, in volname >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> 
self.rados_client, "fs volume ls", json_obj=True) >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 205, in rados_command >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >> exception.ShareBackendException(msg) >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> manila.exception.ShareBackendException: json_command failed - prefix=fs >> volume >> ls, argdict={'format': 'json'} - exception message: Bad target type >> 'mon-mgr'. >> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >> during i >> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >> manila.exception.ShareBackendException: json_command failed - prefix= >> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >> type 'mon-mgr'. >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 191, in rados_command >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> timeout=RADOS_TIMEOUT) >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >> command >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager inbuf, timeout, >> verbose) >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >> command_retry >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager return >> send_command(*args, **kwargs) >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >> command >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >> ArgumentValid("Bad target type '{0}'".format(target[0])) >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager During handling of >> the above exception, another exception occurred: >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >> ", line 346, in _driver_setup >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> self.driver.do_setup(ctxt) >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 251, in do_setup >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> volname=self.volname) >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 401, in volname >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> self.rados_client, "fs volume ls", json_obj=True) >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 205, 
in rados_command >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >> exception.ShareBackendException(msg) >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> manila.exception.ShareBackendException: json_command failed - prefix=fs >> volume >> ls, argdict={'format': 'json'} - exception message: Bad target type >> 'mon-mgr'. >> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >> during i >> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >> manila.exception.ShareBackendException: json_command failed - prefix= >> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >> type 'mon-mgr'. >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 191, in rados_command >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> timeout=RADOS_TIMEOUT) >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >> command >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager inbuf, timeout, >> verbose) >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >> command_retry >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager return >> send_command(*args, **kwargs) >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >> command >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >> ArgumentValid("Bad target type '{0}'".format(target[0])) >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager During handling of >> the above exception, another exception occurred: >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >> ", line 346, in _driver_setup >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> self.driver.do_setup(ctxt) >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 251, in do_setup >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> volname=self.volname) >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 401, in volname >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> self.rados_client, "fs volume ls", json_obj=True) >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 205, in rados_command >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >> exception.ShareBackendException(msg) >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> manila.exception.ShareBackendException: 
json_command failed - prefix=fs >> volume >> ls, argdict={'format': 'json'} - exception message: Bad target type >> 'mon-mgr'. >> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >> during i >> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >> manila.exception.ShareBackendException: json_command failed - prefix= >> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >> type 'mon-mgr'. >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 191, in rados_command >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> timeout=RADOS_TIMEOUT) >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >> command >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager inbuf, timeout, >> verbose) >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >> command_retry >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager return >> send_command(*args, **kwargs) >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >> command >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >> ArgumentValid("Bad target type '{0}'".format(target[0])) >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager During handling of >> the above exception, another exception occurred: >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >> ", line 346, in _driver_setup >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> self.driver.do_setup(ctxt) >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 251, in do_setup >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> volname=self.volname) >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 401, in volname >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> self.rados_client, "fs volume ls", json_obj=True) >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 205, in rados_command >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >> exception.ShareBackendException(msg) >> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> manila.exception.ShareBackendException: json_command failed - prefix=fs >> volume >> ls, argdict={'format': 'json'} - exception message: Bad target type >> 'mon-mgr'. 
>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >> during i >> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >> manila.exception.ShareBackendException: json_command failed - prefix= >> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >> type 'mon-mgr'. >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 191, in rados_command >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> timeout=RADOS_TIMEOUT) >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >> command >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager inbuf, timeout, >> verbose) >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >> command_retry >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager return >> send_command(*args, **kwargs) >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >> command >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >> ArgumentValid("Bad target type '{0}'".format(target[0])) >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager During handling of >> the above exception, another exception occurred: >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >> ", line 346, in _driver_setup >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> self.driver.do_setup(ctxt) >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 251, in do_setup >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> volname=self.volname) >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 401, in volname >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> self.rados_client, "fs volume ls", json_obj=True) >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 205, in rados_command >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >> exception.ShareBackendException(msg) >> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> manila.exception.ShareBackendException: json_command failed - prefix=fs >> volume >> ls, argdict={'format': 'json'} - exception message: Bad target type >> 'mon-mgr'. 
>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >> during i >> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >> manila.exception.ShareBackendException: json_command failed - prefix= >> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >> type 'mon-mgr'. >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 191, in rados_command >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> timeout=RADOS_TIMEOUT) >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >> command >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager inbuf, timeout, >> verbose) >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >> command_retry >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager return >> send_command(*args, **kwargs) >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >> command >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >> ArgumentValid("Bad target type '{0}'".format(target[0])) >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager During handling of >> the above exception, another exception occurred: >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >> ", line 346, in _driver_setup >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> self.driver.do_setup(ctxt) >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 251, in do_setup >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> volname=self.volname) >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 401, in volname >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> self.rados_client, "fs volume ls", json_obj=True) >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 205, in rados_command >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >> exception.ShareBackendException(msg) >> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> manila.exception.ShareBackendException: json_command failed - prefix=fs >> volume >> ls, argdict={'format': 'json'} - exception message: Bad target type >> 'mon-mgr'. 
>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >> during i >> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >> manila.exception.ShareBackendException: json_command failed - prefix= >> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >> type 'mon-mgr'. >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 191, in rados_command >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> timeout=RADOS_TIMEOUT) >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >> command >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager inbuf, timeout, >> verbose) >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >> command_retry >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager return >> send_command(*args, **kwargs) >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >> command >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >> ArgumentValid("Bad target type '{0}'".format(target[0])) >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager During handling of >> the above exception, another exception occurred: >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >> ", line 346, in _driver_setup >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> self.driver.do_setup(ctxt) >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 251, in do_setup >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> volname=self.volname) >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 401, in volname >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> self.rados_client, "fs volume ls", json_obj=True) >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 205, in rados_command >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >> exception.ShareBackendException(msg) >> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> manila.exception.ShareBackendException: json_command failed - prefix=fs >> volume >> ls, argdict={'format': 'json'} - exception message: Bad target type >> 'mon-mgr'. 
>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >> during i >> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >> manila.exception.ShareBackendException: json_command failed - prefix= >> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >> type 'mon-mgr'. >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 191, in rados_command >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> timeout=RADOS_TIMEOUT) >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >> command >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager inbuf, timeout, >> verbose) >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >> command_retry >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager return >> send_command(*args, **kwargs) >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >> command >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >> ArgumentValid("Bad target type '{0}'".format(target[0])) >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager During handling of >> the above exception, another exception occurred: >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback (most >> recent call last): >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >> ", line 346, in _driver_setup >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> self.driver.do_setup(ctxt) >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 251, in do_setup >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> volname=self.volname) >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 401, in volname >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> self.rados_client, "fs volume ls", json_obj=True) >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >> phfs/driver.py", line 205, in rados_command >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >> exception.ShareBackendException(msg) >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> manila.exception.ShareBackendException: json_command failed - prefix=fs >> volume >> ls, argdict={'format': 'json'} - exception message: Bad target type >> 'mon-mgr'. >> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >> >> Regards >> >> Le mer. 20 oct. 2021 ? 
00:14, Goutham Pacha Ravi >> a ?crit : >> >>> >>> On Tue, Oct 19, 2021 at 2:35 PM wodel youchi >>> wrote: >>> >>>> Hi, >>>> Has anyone been successful in deploying Manila wallaby using >>>> kolla-ansible with ceph pacific as a backend? >>>> >>>> I have created the manila client in ceph pacific like this : >>>> >>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>> >>>> When I deploy, I get this error in manila's log file : >>>> Bad target type 'mon-mgr' >>>> Any ideas? >>>> >>> >>> Could you share the full log from the manila-share service? >>> There's an open bug related to manila/cephfs deployment: >>> https://bugs.launchpad.net/kolla-ansible/+bug/1935784 >>> Proposed fix: >>> https://review.opendev.org/c/openstack/kolla-ansible/+/802743 >>> >>> >>> >>> >>>> >>>> Regards. >>>> >>> > > -- > > ??????? ????? ???????? > Buddhika Sanjeewa Godakuru > > Systems Analyst/Programmer > Deputy Webmaster / University of Kelaniya > > Information and Communication Technology Centre (ICTC) > University of Kelaniya, Sri Lanka, > Kelaniya, > Sri Lanka. > > Mobile : (+94) 071 5696981 > Office : (+94) 011 2903420 / 2903424 > > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > University of Kelaniya Sri Lanka, accepts no liability for the content of > this email, or for the consequences of any actions taken on the basis of > the information provided, unless that information is subsequently confirmed > in writing. If you are not the intended recipient, this email and/or any > information it contains should not be copied, disclosed, retained or used > by you or any other party and the email and all its contents should be > promptly deleted fully from our system and the sender informed. > > E-mail transmission cannot be guaranteed to be secure or error-free as > information could be intercepted, corrupted, lost, destroyed, arrive late > or incomplete. > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Thu Oct 21 17:42:57 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 21 Oct 2021 13:42:57 -0400 Subject: [cinder][PTG] Friday schedule update Message-ID: Thanks for everyone who's participated so far. We're having a productive PTG! It turns out that it's going to be important for us to participate in the TC discussion of the secure RBAC community goal tomorrow, so I've rearranged our schedule a bit. What we have now is: 1300 UTC: announcements 1302 UTC: Quotas!!! (geguileo) 1330-1500 UTC: community goal: secure RBAC 1500 UTC: Clarify the Volume Driver API, Part 2 1600 UTC: Yoga priorities and responsibilities (rosmaita) The etherpad has been updated and has links to relevant info: https://etherpad.opendev.org/p/yoga-ptg-cinder cheers, brian From cboylan at sapwetik.org Thu Oct 21 17:47:35 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 21 Oct 2021 10:47:35 -0700 Subject: Cannot login to gerrit In-Reply-To: References: Message-ID: <9c029d7b-57aa-4027-ba29-3208e21eb89d@www.fastmail.com> On Thu, Oct 21, 2021, at 1:25 AM, F Yushiro wrote: > Hi, I tried to login ty Gerrit w/ OpenID but redirected the following > page and saw "Not Found". > > https://review.opendev.org/SignInFailure,SIGN_IN,Contact+site+administrator > > Could you please help me to login? My login email is > y.furukawa8 at gmail.com. I used to use ex-company address. 
When I left
> the company, I modified my email address to the above gmail one. I
> wonder if that might have caused this situation.

I responded to your previous email to the list,
http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025463.html,
with the reason this is happening and a couple of potential solutions.
Can you read that over and follow up on that thread?

>
> Best regards,

From kchamart at redhat.com  Thu Oct 21 17:49:34 2021
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Thu, 21 Oct 2021 19:49:34 +0200
Subject: CentOS-9 guests & 'qemu64' CPU model are incompatible; and reasons to avoid 'qemu64' in general
Message-ID: 

Summary
-------

RHEL-9 / CentOS-9 (but not Fedora) has switched[1] to a new baseline
microarchitecture called "x86-64-v2".  This is to bring in support for
additional low-level CPU instructions, among other reasons.  Now, if
you've explicitly configured "cpu_mode=none" in `nova.conf` on your
compute nodes -- which results in the guest getting the extremely
undesirable "qemu64" CPU model -- it will refuse to boot RHEL-9 or
CentOS-9 guests.

To fix this, please update the CPU model to "Nehalem".  It is the oldest
CPU model that is compatible with CentOS-9/RHEL-9 "x86-64-v2".  Further,
Nehalem also works with `virt_type=kvm|qemu`, _and_ on both Intel and
AMD hardware.  So this is a good alternative.

Details
-------

Nova has three config attributes to set up various aspects of a guest
CPU: `cpu_mode`, `cpu_model[s]`, and `cpu_model_extra_flags`.  Examples
of how to use these are in the documentation[2].  If you're using
`cpu_mode = none` (e.g. upstream DevStack defaults to it for
understandable reasons, mainly live-migration compatibility):

    [libvirt]
    cpu_mode = none

... and want to boot CentOS-9, replace the above with the custom model,
"Nehalem", which is the oldest CPU model that's compatible with the new
x86-64-v2 baseline:

    [libvirt]
    cpu_mode = custom
    cpu_model = Nehalem

The same applies if you're using "qemu64" or "kvm64" with, or without,
any custom CPU flags -- i.e. use Nehalem.  (Also, please refer to [3]
for more fine-grained recommendations for guest CPU configuration.
It's a long document, but a patient reader will be rewarded.)


Why is "qemu64" model undesirable for production?
-------------------------------------------------

For those wondering about it, a few reasons why the `qemu64` CPU model
is not at all desirable:

(1) It is vulnerable to many of the Spectre and other side-channel
    security flaws.  To see this in "action", you can launch a guest
    with the 'qemu64' CPU model, and then run the below:

        $ cd /sys/devices/system/cpu/vulnerabilities/
        $ grep . *
        l1tf:Mitigation: PTE Inversion
        mds:Vulnerable: ... no microcode; SMT Host state unknown
        meltdown:Mitigation: PTI
        spec_store_bypass:Vulnerable
        spectre_v1:Mitigation: usercopy/swapgs barriers ...
        spectre_v2:Mitigation: Full generic retpoline ...

    Notice the "Vulnerable" entries.

(2) "qemu64" does not support several critical CPU features:

    (a) AES (Advanced Encryption Standard) instruction, which is
        important for improved TLS performance and encryption.

    (b) RDRAND instruction: without this, guests can get starved for
        entropy.

    (c) PCID flag: an obscure-but-important flag that'll lower the
        performance degradation that you incur from the "Meltdown"
        security fixes.

Probably there are more reasons that I don't know of.


An understandable reason why CI systems running in a cloud environment
go with 'qemu64' is convenience: with 'qemu64', you can live-migrate a
guest regardless of its underlying hardware (whether it's Intel or AMD).
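If you do switch compute nodes over to "Nehalem", a quick sanity check
is to ask libvirt on each node whether that named model is actually
usable there.  This is only a rough sketch -- the exact XML output
varies with the libvirt version:

    # On each compute node; usable='yes' means libvirt/QEMU can provide it
    $ virsh domcapabilities | grep -i nehalem
      <model usable='yes'>Nehalem</model>
      <model usable='yes'>Nehalem-IBRS</model>

And inside a CentOS-9 guest you can confirm the x86-64-v2 baseline is
met via glibc's loader (assuming glibc >= 2.33, which CentOS-9 has):

    $ /lib64/ld-linux-x86-64.so.2 --help | grep x86-64-v2
      x86-64-v2 (supported, searched)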
That's one main reason why upstream DevStack defaults to it.

* * *

Overall, the rule of thumb here is to either always explicitly specify
a "sane" CPU model, based on the recommendations here[3], or to use
Nova/libvirt's default ("host-model").

[1] https://developers.redhat.com/blog/2021/01/05/building-red-hat-enterprise-linux-9-for-the-x86-64-v2-microarchitecture-level
[2] https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.cpu_mode
[3] https://www.qemu.org/docs/master/system/i386/cpu.html#recommendations-for-kvm-cpu-model-configuration-on-x86-hosts
[4] https://opendev.org/openstack/whitebox-tempest-plugin/src/branch/master/.zuul.yaml#L54

-- 
/kashyap

From cboylan at sapwetik.org  Thu Oct 21 17:56:42 2021
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 21 Oct 2021 10:56:42 -0700
Subject: CentOS-9 guests & 'qemu64' CPU model are incompatible; and reasons to avoid 'qemu64' in general
In-Reply-To: 
References: 
Message-ID: 

On Thu, Oct 21, 2021, at 10:49 AM, Kashyap Chamarthy wrote:
> Summary
> -------
>
> RHEL-9 / CentOS-9 (but not Fedora) has switched[1] to a new baseline
> microarchitecture called "x86-64-v2".  This is to bring in support for
> additional low-level CPU instructions, among other reasons.  Now, if
> you've explicitly configured "cpu_mode=none" in `nova.conf` on your
> compute nodes -- which results in the guest getting the extremely
> undesirable "qemu64" CPU model -- it will refuse to boot RHEL-9 or
> CentOS-9 guests.
>
> To fix this, please update the CPU model to "Nehalem".  It is the oldest
> CPU model that is compatible with CentOS-9/RHEL-9 "x86-64-v2".  Further,
> Nehalem also works with `virt_type=kvm|qemu`, _and_ on both Intel and
> AMD hardware.  So this is a good alternative.

Thank you for looking into this and providing such detailed
information. It has been really helpful.

>
> Details
> -------
>
> Nova has three config attributes to set up various aspects of a guest
> CPU: `cpu_mode`, `cpu_model[s]`, and `cpu_model_extra_flags`.  Examples
> of how to use these are in the documentation[2].  If you're using
> `cpu_mode = none` (e.g. upstream DevStack defaults to it for
> understandable reasons, mainly live-migration compatibility):
>
>     [libvirt]
>     cpu_mode = none
>
> ... and want to boot CentOS-9, replace the above with the custom model,
> "Nehalem", which is the oldest CPU model that's compatible with the new
> x86-64-v2 baseline:
>
>     [libvirt]
>     cpu_mode = custom
>     cpu_model = Nehalem
>
> The same applies if you're using "qemu64" or "kvm64" with, or without,
> any custom CPU flags -- i.e. use Nehalem.  (Also, please refer to [3]
> for more fine-grained recommendations for guest CPU configuration.
> It's a long document, but a patient reader will be rewarded.)
>
>
> Why is "qemu64" model undesirable for production?
> -------------------------------------------------
>
> For those wondering about it, a few reasons why the `qemu64` CPU model
> is not at all desirable:
>
> (1) It is vulnerable to many of the Spectre and other side-channel
>     security flaws.  To see this in "action", you can launch a guest
>     with the 'qemu64' CPU model, and then run the below:
>
>         $ cd /sys/devices/system/cpu/vulnerabilities/
>         $ grep . *
>         l1tf:Mitigation: PTE Inversion
>         mds:Vulnerable: ... no microcode; SMT Host state unknown
>         meltdown:Mitigation: PTI
>         spec_store_bypass:Vulnerable
>         spectre_v1:Mitigation: usercopy/swapgs barriers ...
>         spectre_v2:Mitigation: Full generic retpoline ...
>
>     Notice the "Vulnerable" entries.
> > (2) "qemu64" does not support several critical CPU features: > > (a) AES (Advanced Encryption Standard) instruction, > which is important for imporved TLS performance and encryption. > > (b) RDRAND instruction: without this, guests can get starved for > entropy. > > (c) PCID flag: an obscure-but-important flag that'll lower the > performance degradation that you incur from the "Meltdown" > security fixes. > > Probably there are more reasons that I don't know of. > > > An understandable reason why CI systems running in a cloud environment > go with 'qemu64' is convenience: with 'qemu64', you can live-migrate a > guest regardless of its underlying hardware (whether it's Intel or AMD). > That's one main reason why upstream DevStack defaults to it. I've got a change up to Devstack to convert it over to Nehalem by default [5]. So far it looks good, but we will want to recheck it a few times and make sure we have good test coverage across the clouds we run testing on just to be sure that the CPUs we get from those clouds are able to support this CPU type. Good news is that we successfully built a centos-9-stream image and booted it with the Nehalem change in place [6]. > > * * * > > Overall, the thumb-rule here is to either always explicitly specify a > "sane" CPU model, based on the recommendations here[3]. Or to use > Nova/libvirt's default ("host-model"). Devstack is currently setting cpu_mode to none. Should Nova be updated to make this result in a better behavior? Is this literally not passing a cpu mode to libvirt/qemu and allowing them to choose a default? If so maybe libvirt/qemu need to update their defaults? > > > [1] > https://developers.redhat.com/blog/2021/01/05/building-red-hat-enterprise-linux-9-for-the-x86-64-v2-microarchitecture-level > [2] > https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.cpu_mode > [3] > https://www.qemu.org/docs/master/system/i386/cpu.html#recommendations-for-kvm-cpu-model-configuration-on-x86-hosts > [4] > https://opendev.org/openstack/whitebox-tempest-plugin/src/branch/master/.zuul.yaml#L54 [5] https://review.opendev.org/c/openstack/devstack/+/815020 [6] https://zuul.opendev.org/t/openstack/build/b5841d4d264c4c8f93d2368500d6221d > > -- > /kashyap From rosmaita.fossdev at gmail.com Thu Oct 21 23:48:48 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 21 Oct 2021 19:48:48 -0400 Subject: [glance][nova][cinder] Openstack Glance image signature and validation for upload and boot controls? In-Reply-To: References: Message-ID: On 10/20/21 8:24 AM, S Andronic wrote: > Hi, > > I have a question in regards to Openstack Glance and if I got it right > this can be a place to ask, if I am wrong please kindly point me in the > ?right direction. > > ?When you enable Image Signing and Certificate Validation in nova.conf: > ?[glance] > ?verify_glance_signatures = True > ?enable_certificate_validation = True Note: Since Rocky, if you have enable_certificate_validation = True but have default_trusted_certificate_ids at its default value of empty list, then a user must supply a list of trusted_image_certificates in the create-server request, or the request will fail. > > ?Will this stop users from uploading unsigned images No, glance doesn't have a setting that requires uploaded images to be signed. However: - If the image record contains *all* the appropriate image signature properties, the PUT /v2/images/{image_id}/file call will fail if the data can't be validated. 
- You could write an image import plugin that would disallow import of image data for which the image record doesn't have the image signature properties set. > or using unsigned > ? images to spin up instances? Yes, if verify_glance_signatures is True, nova won't boot unsigned images: https://docs.openstack.org/nova/latest/configuration/config.html#glance.verify_glance_signatures > ?Intuitively I feel that it will enforce checks only if the signature > ?property exists, but what if it doesn't? See above. > ?Does it control in any way unsigned images? Yes, if verify_glance_signatures is True, unsigned images can't be used to boot an instance. > ?Does it stop users from uploading or using anything unsigned? No, glance doesn't require it. > ?Would an image without the signing properties just be rejected? It depends on what service you are talking about: Glance: no, glance won't reject an unsigned image. Nova: yes, if verify_glance_signatures is set. Cinder: it depends ... if verify_glance_signatures is enabled: - if you create a volume from an image AND the image has *any* of the image signature properties set, cinder will try to validate the image data and the volume will go to error if validation fails. If the validation succeeds, you get signature_verified: true in the volume-image-metadata. - if you create a volume from an image AND the image has NONE of the image signature properties, the volume creation will succeed (assuming nothing else goes wrong) and you get signature_verified: false in the volume-image-metadata. But ... Nova won't do certificate validation for a boot-from-volume request (as described in [0]). But I'm not clear on what happens if verify_glance_signatures is true and enable_certificate_validation is false. I believe that nova will boot the volume on the theory that cinder has already handled the signature validation part (which it has, if the option is enabled and at least one image signature property is set on the image), and it's the certificate validation part that isn't being handled? Hopefully someone else will explain this. [0] https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/nova-validate-certificates.html > ?If this feature doesn't stop the use of unsigned images as a security > ?control what is the logic behind it then? I guess you can look at the spec to see what threat models the feature was proposed to address: https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/image-verification.html > ?Is this meant not to stop users from using unsigned images but such > ?that people who do use signed images have verification for their code? This is a good question, and the asymmetry between how nova and cinder treat requests to create a resource from an unsigned image when verify_glance_images is enabled makes this difficult to answer (at least for me). > ?So if the goal is to stop people from using random images and image > ?signing and validation is not the answer what would be? It really depends on what your cloud users want/need, and what you mean by a "random image". For example, you could only allow public images provided by you the operator to be used to boot servers by blocking image uploads and server snapshots, or allowing snapshots but not allowing image sharing (which could get you "random" images, but they'd be restricted to a single project, which would probably be OK). 
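(As an aside, since most of the answers above hinge on whether the
image signature properties are present: here is a rough sketch of how a
user typically generates and attaches them.  The property names are the
standard image-signature ones; the key file, image file, and $CERT_UUID
-- the UUID of the signing certificate previously stored in the key
manager (barbican) -- are placeholders you'd substitute with your own.)

    # Sign the image data with an RSA-PSS key, then upload with the
    # signature metadata attached as image properties
    $ openssl dgst -sha256 -sigopt rsa_padding_mode:pss \
        -sign my_signing_key.pem -out myimage.signature myimage.qcow2
    $ openstack image create my-signed-image \
        --file myimage.qcow2 \
        --container-format bare --disk-format qcow2 \
        --property img_signature="$(base64 -w 0 myimage.signature)" \
        --property img_signature_certificate_uuid="$CERT_UUID" \
        --property img_signature_hash_method='SHA-256' \
        --property img_signature_key_type='RSA-PSS'

Nova (and cinder, when it validates) fetches the certificate from the
key manager and checks the uploaded data against that signature.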
Like I said, it depends on your goals and what your users will put up
with (I think users would absolutely hate not being able to create
server snapshots, but there are probably some users for whom that
wouldn't be a problem).

While we're talking about server snapshots, however, note that with
verify_glance_signatures enabled in nova, you can boot a server from a
signed image and then use the server createImage action to create an
image in Glance.  This image won't have the image signature properties
on it, however, and hence won't be bootable.  Your users will have to
download the image so they can generate a signature for it and then set
all the image signature metadata on the image (essentially the signing
steps sketched above) before nova will boot it.  (I'm pretty sure this
is true.)

You may want to send another email with '[ops]' in the subject line to
ask other operators who use this feature what their configuration and
experiences are like.

>
> Kind Regards,
> S. Andronic

Good luck!
brian

From skaplons at redhat.com  Fri Oct 22 07:00:35 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 22 Oct 2021 09:00:35 +0200
Subject: [neutron] neutron l3 agents number
In-Reply-To: 
References: <20211021064950.75zsotsd67olw43n@p1.localdomain>
Message-ID: <2483175.Lt9SDvczpP@p1>

Hi,

On czwartek, 21 października 2021 11:37:09 CEST Arnaud Morin wrote:
> Thanks Slawek,
>
> Actually, we dug a little bit into the code:
> https://review.opendev.org/c/openstack/neutron/+/64553
>
> In patchset 59, a discussion between Carl and Assaf seems to be the
> reason for the decision of having 3:
> - "An additional standby could help certain topologies though I'm sure."
> - "Fair enough"

WOW, you dug deep to find that :) Thx a lot.

>
> Cheers,
>
> On 21.10.21 - 08:49, Slawek Kaplonski wrote:
> > Hi,
> >
> > On Thu, Oct 21, 2021 at 07:34:43AM +0200, Arnaud wrote:
> > > Sorry, forgot to add neutron header
> > >
> > > Kind regards,
> > > Arnaud
> > >
> > > Le 20 octobre 2021 17:28:39 GMT+02:00, Arnaud Morin a écrit :
> > > >Hey team,
> > > >
> > > >When using DVR + HA, we end up with all routers being deployed on the
> > > >computes (the DVR part) and 3 routers on dvr_snat nodes (for HA).
> > > >
> > > >The question is why 3 for snats?
> > > >I know that this is the max_l3_agents_per_router config parameter, but
> > > >2 seems enough?
> > > >
> > > >On the other side, the DHCP agent default (dhcp_agents_per_network) is 1,
> > > >while this is also a precious service provided in the tenant network.
> > > >
> > > >So, is there any downside on having only 2 agents for routers?
> >
> > TBH I don't think there is any downside to having only 2 agents hosting
> > each router (SNAT in case of DVR). Actually this is what we are e.g.
> > testing in the neutron-ovs-tempest-dvr-ha-multinode-full job as we have
> > only 2 nodes there.
> > > >Thanks in advance,
> > > >
> > > >
> > > >Arnaud.

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: From gouthampravi at gmail.com Thu Oct 21 23:34:17 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 21 Oct 2021 16:34:17 -0700 Subject: kolla-ansible wallaby manila ceph pacific In-Reply-To: References: Message-ID: On Thu, Oct 21, 2021 at 1:56 AM wodel youchi wrote: > Hi, > > I did that already, I changed the keyring to "*ceph auth get-or-create > client.manila -o manila.keyring mgr 'allow rw' mon 'allow r'*" it didn't > work, then I tried with ceph octopus, same error. > I applied the patch, then I recreated the keyring for manila as wallaby > documentation, I get the error "*Bad target type 'mon-mgr'*" > Thanks, the error seems similar to this issue: https://tracker.ceph.com/issues/51039 Can you confirm the ceph version installed? On the ceph side, some changes land after GA and get back ported; > > Regards. > > Le jeu. 21 oct. 2021 ? 05:29, Buddhika Godakuru a > ?crit : > >> Dear Wodel, >> I think this is because manila has changed the way how to set/create auth >> ID in Wallaby for native CephFS driver. >> For the patch to work, you should change the command >> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, allow >> rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >> to something like, >> ceph auth get-or-create client.manila -o manila.keyring mgr 'allow >> rw' mon 'allow r' >> >> Please see Manila Wallaby CephFS Driver document [1] >> >> Hope this helps. >> >> Thank you >> [1] >> https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html#authorizing-the-driver-to-communicate-with-ceph >> >> On Wed, 20 Oct 2021 at 23:19, wodel youchi >> wrote: >> >>> Hi, and thanks >>> >>> I tried to apply the patch, but it didn't work, this is the >>> manila-share.log. >>> By the way, I did change to caps for the manila client to what is said >>> in wallaby documentation, that is : >>> [client.manila] >>> key = keyyyyyyyy..... >>> >>> * caps mgr = "allow rw" caps mon = "allow r"* >>> >>> [root at ControllerA manila]# cat manila-share.log >>> 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] Skipping >>> periodic task update_share_usage_size because it is disabled >>> 2021-10-20 10:03:22.310 7 INFO oslo_service.service >>> [req-5b253656-4fe2-4087-b4ab-9ba2a8a0443f - - - - -] Starting 1 workers >>> 2021-10-20 10:03:22.315 30 INFO manila.service [-] Starting manila-share >>> node (version 12.0.1) >>> 2021-10-20 10:03:22.320 30 INFO manila.share.drivers.cephfs.driver >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >>> h client found, connecting... >>> 2021-10-20 10:03:22.368 30 INFO manila.share.drivers.cephfs.driver >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >>> h client connection complete. >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>> during i >>> n*itialization* >>> >>> * of driver CephFSDriver at ControllerA@cephfsnative1: >>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>> volume ls, argdict={'format': 'json'} - exception message: Bad target type >>> 'mon-mgr'. 
2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback >>> (most recent call last): * >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 191, in rados_command >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> timeout=RADOS_TIMEOUT) >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>> command >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager inbuf, >>> timeout, verbose) >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>> command_retry >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager return >>> send_command(*args, **kwargs) >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>> command >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager During handling of >>> the above exception, another exception occurred: >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>> ", line 346, in _driver_setup >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> self.driver.do_setup(ctxt) >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 251, in do_setup >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> volname=self.volname) >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 401, in volname >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> self.rados_client, "fs volume ls", json_obj=True) >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 205, in rados_command >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >>> exception.ShareBackendException(msg) >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>> volume >>> ls, argdict={'format': 'json'} - exception message: Bad target type >>> 'mon-mgr'. >>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>> during i >>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>> manila.exception.ShareBackendException: json_command failed - prefix= >>> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >>> type 'mon-mgr'. 
>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 191, in rados_command >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> timeout=RADOS_TIMEOUT) >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>> command >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager inbuf, >>> timeout, verbose) >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>> command_retry >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager return >>> send_command(*args, **kwargs) >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>> command >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager During handling of >>> the above exception, another exception occurred: >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>> ", line 346, in _driver_setup >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> self.driver.do_setup(ctxt) >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 251, in do_setup >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> volname=self.volname) >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 401, in volname >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> self.rados_client, "fs volume ls", json_obj=True) >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 205, in rados_command >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >>> exception.ShareBackendException(msg) >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>> volume >>> ls, argdict={'format': 'json'} - exception message: Bad target type >>> 'mon-mgr'. >>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>> during i >>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>> manila.exception.ShareBackendException: json_command failed - prefix= >>> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >>> type 'mon-mgr'. 
>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 191, in rados_command >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> timeout=RADOS_TIMEOUT) >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>> command >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager inbuf, >>> timeout, verbose) >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>> command_retry >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager return >>> send_command(*args, **kwargs) >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>> command >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager During handling of >>> the above exception, another exception occurred: >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>> ", line 346, in _driver_setup >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> self.driver.do_setup(ctxt) >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 251, in do_setup >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> volname=self.volname) >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 401, in volname >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> self.rados_client, "fs volume ls", json_obj=True) >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 205, in rados_command >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >>> exception.ShareBackendException(msg) >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>> volume >>> ls, argdict={'format': 'json'} - exception message: Bad target type >>> 'mon-mgr'. >>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>> during i >>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>> manila.exception.ShareBackendException: json_command failed - prefix= >>> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >>> type 'mon-mgr'. 
>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 191, in rados_command >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> timeout=RADOS_TIMEOUT) >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>> command >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager inbuf, >>> timeout, verbose) >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>> command_retry >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager return >>> send_command(*args, **kwargs) >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>> command >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager During handling of >>> the above exception, another exception occurred: >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>> ", line 346, in _driver_setup >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> self.driver.do_setup(ctxt) >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 251, in do_setup >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> volname=self.volname) >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 401, in volname >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> self.rados_client, "fs volume ls", json_obj=True) >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 205, in rados_command >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >>> exception.ShareBackendException(msg) >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>> volume >>> ls, argdict={'format': 'json'} - exception message: Bad target type >>> 'mon-mgr'. >>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>> during i >>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>> manila.exception.ShareBackendException: json_command failed - prefix= >>> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >>> type 'mon-mgr'. 
>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 191, in rados_command >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> timeout=RADOS_TIMEOUT) >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>> command >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager inbuf, >>> timeout, verbose) >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>> command_retry >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager return >>> send_command(*args, **kwargs) >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>> command >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager During handling of >>> the above exception, another exception occurred: >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>> ", line 346, in _driver_setup >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> self.driver.do_setup(ctxt) >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 251, in do_setup >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> volname=self.volname) >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 401, in volname >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> self.rados_client, "fs volume ls", json_obj=True) >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 205, in rados_command >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >>> exception.ShareBackendException(msg) >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>> volume >>> ls, argdict={'format': 'json'} - exception message: Bad target type >>> 'mon-mgr'. >>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>> during i >>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>> manila.exception.ShareBackendException: json_command failed - prefix= >>> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >>> type 'mon-mgr'. 
>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 191, in rados_command >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> timeout=RADOS_TIMEOUT) >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>> command >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager inbuf, >>> timeout, verbose) >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>> command_retry >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager return >>> send_command(*args, **kwargs) >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>> command >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager During handling of >>> the above exception, another exception occurred: >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>> ", line 346, in _driver_setup >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> self.driver.do_setup(ctxt) >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 251, in do_setup >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> volname=self.volname) >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 401, in volname >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> self.rados_client, "fs volume ls", json_obj=True) >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 205, in rados_command >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >>> exception.ShareBackendException(msg) >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>> volume >>> ls, argdict={'format': 'json'} - exception message: Bad target type >>> 'mon-mgr'. >>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>> during i >>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>> manila.exception.ShareBackendException: json_command failed - prefix= >>> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >>> type 'mon-mgr'. 
>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 191, in rados_command >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> timeout=RADOS_TIMEOUT) >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>> command >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager inbuf, >>> timeout, verbose) >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>> command_retry >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager return >>> send_command(*args, **kwargs) >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>> command >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager During handling of >>> the above exception, another exception occurred: >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>> ", line 346, in _driver_setup >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> self.driver.do_setup(ctxt) >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 251, in do_setup >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> volname=self.volname) >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 401, in volname >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> self.rados_client, "fs volume ls", json_obj=True) >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 205, in rados_command >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >>> exception.ShareBackendException(msg) >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>> volume >>> ls, argdict={'format': 'json'} - exception message: Bad target type >>> 'mon-mgr'. >>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>> during i >>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>> manila.exception.ShareBackendException: json_command failed - prefix= >>> fs volume ls, argdict={'format': 'json'} - exception message: Bad target >>> type 'mon-mgr'. 
>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 191, in rados_command >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> timeout=RADOS_TIMEOUT) >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>> command >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager inbuf, >>> timeout, verbose) >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>> command_retry >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager return >>> send_command(*args, **kwargs) >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>> command >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager During handling of >>> the above exception, another exception occurred: >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback (most >>> recent call last): >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>> ", line 346, in _driver_setup >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> self.driver.do_setup(ctxt) >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 251, in do_setup >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> volname=self.volname) >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 401, in volname >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> self.rados_client, "fs volume ls", json_obj=True) >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>> phfs/driver.py", line 205, in rados_command >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>> exception.ShareBackendException(msg) >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>> volume >>> ls, argdict={'format': 'json'} - exception message: Bad target type >>> 'mon-mgr'. >>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>> >>> Regards >>> >>> Le mer. 20 oct. 2021 ? 00:14, Goutham Pacha Ravi >>> a ?crit : >>> >>>> >>>> On Tue, Oct 19, 2021 at 2:35 PM wodel youchi >>>> wrote: >>>> >>>>> Hi, >>>>> Has anyone been successful in deploying Manila wallaby using >>>>> kolla-ansible with ceph pacific as a backend? 
>>>>> >>>>> I have created the manila client in ceph pacific like this : >>>>> >>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>> >>>>> When I deploy, I get this error in manila's log file : >>>>> Bad target type 'mon-mgr' >>>>> Any ideas? >>>>> >>>> >>>> Could you share the full log from the manila-share service? >>>> There's an open bug related to manila/cephfs deployment: >>>> https://bugs.launchpad.net/kolla-ansible/+bug/1935784 >>>> Proposed fix: >>>> https://review.opendev.org/c/openstack/kolla-ansible/+/802743 >>>> >>>> >>>> >>>> >>>>> >>>>> Regards. >>>>> >>>> >> >> -- >> >> ??????? ????? ???????? >> Buddhika Sanjeewa Godakuru >> >> Systems Analyst/Programmer >> Deputy Webmaster / University of Kelaniya >> >> Information and Communication Technology Centre (ICTC) >> University of Kelaniya, Sri Lanka, >> Kelaniya, >> Sri Lanka. >> >> Mobile : (+94) 071 5696981 >> Office : (+94) 011 2903420 / 2903424 >> >> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >> University of Kelaniya Sri Lanka, accepts no liability for the content of >> this email, or for the consequences of any actions taken on the basis of >> the information provided, unless that information is subsequently confirmed >> in writing. If you are not the intended recipient, this email and/or any >> information it contains should not be copied, disclosed, retained or used >> by you or any other party and the email and all its contents should be >> promptly deleted fully from our system and the sender informed. >> >> E-mail transmission cannot be guaranteed to be secure or error-free as >> information could be intercepted, corrupted, lost, destroyed, arrive late >> or incomplete. >> >> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Fri Oct 22 06:46:44 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Fri, 22 Oct 2021 12:16:44 +0530 Subject: [tripleo] Issue finding /etc/docker directory in Centos 8 Overcloud Deployment Message-ID: Hi Team, I am trying to install Tripleo Train Release on Centos 8 I have successfully installed Undercloud. For overcloud images, I have downloaded and uploaded images given on rdo_trunk https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ For deployment of overcloud, I executed the command openstack overcloud deploy --templates This command successfully creates a stack, executed the SSH on the machines successfully and then started some ansible tasks. During execution of ansible tasks, it gave a error at below task TASK | manage /etc/docker/daemon.json *The error message clearly states that directory /etc/docker is not found.* This task is present at path " */usr/share/ansible/roles/container-registry/tasks/docker.yml"* Since, in Centos 8 docker has been replaced with Podman, this error is bound to happen Commenting this particular task can be a workaround, but I have no idea what would be the impact of this. There are multiple tasks which makes use of /etc/docker directory Is there any way to resolve this? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From akekane at redhat.com Fri Oct 22 14:51:23 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Fri, 22 Oct 2021 20:21:23 +0530 Subject: [Glance] Yoga PTG Summary Message-ID: Hi All, We had our fourth virtual PTG between 18th October to 22nd October 2020. Thanks to everyone who joined the virtual PTG sessions. Using bluejeans app we had lots of discussion around different topics for glance, glance + cinder and Secure RBAC. I have created etherpad [1] with Notes from the session and which also includes the recordings of each discussion. Here is a short summary of the discussions. Tuesday, October 19th # Xena Retrospective On the positive note, we merged a number of useful features this cycle. We managed to implement a project scope of secure RBAC for metadef APIs, Implemented quotas using unified limits and moved policy enforcing closer to API layer. We also manage to wipe out many bugs from our bug backlog. On the other side we need to improve our documentation and API reference guide. Recording: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 1 # Cache API During Xena cycle we managed to start this work and implement the core functionality but failed to merge it due to lack of negative tests and tempest coverage. In the Yoga cycle we are going to focus on adding tempest coverage for the same along with a new API to cache the given image immediately rather than waiting for a periodic job to pre-cache it for us. Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 2 # Native Image Encryption Unfortunately this topic is sitting in our agenda for the last couple of PTGs. Current update is the core feature is depending on Barbican microversion work and once it is complete then the Consumer API can be functional again. At the moment in Glance we have decided to go ahead and implement the glance side part and instead of having placeholder (barbican consumer API secret register and deletion) code as commented we can have a WIP patch for the same with depending on glance side work. Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 3 # Default Glance to configure multiple stores Glance has deprecated single stores configuration since Stein cycle and now will start putting efforts to deploy glance using multistore by-default and then remove single store support from glance. This might be going to take a couple of cycles, so in yoga we are going to migrate internal unit and functional tests to use multistore config and also going to modify devstack to deploy glance using multistore configuration for swift and Ceph (for file and cinder it's already supported). Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 4 # Quotas Usage API In Xena we have implemented quotas for images API using unified limits. This cycle we will add new APIs which will help enduser to get the clear picture of quotas like What is total quota, used quota and remaining quota. So first we are coming up with the spec for the design and then implementation for the same. Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 5 Wednesday, October 20th 2021 # Policy Refactoring - Part 2 In Xena we have managed to move all policy checks to the API layer. This cycle we need to work on removing dead code of policy and authorization layer. Before removing both the layers we need to make sure that property protection is working as expected and for the same we need to add one job with a POST script to verify that removing auth and policy layer will not break property protection. 
Recordings: https://bluejeans.com/s/AMZzGObPhK4 - Chapter 1 # Database read only checks We have added new RBAC policy checks which are equivalent to readonly checks in our db layer, e.g. image ownership check, visibility check etc. To kick start work for this dansmith (thanks for volunteering) will work on PoC and abhishekk will work on specs about how we will modify/improve our db layer. # Secure RBAC - System Scope/Project Admin scope In Xena we have managed to move all policy checks to API layer and implemented project scope of metadef APIs. So as of now we have project scope for all glance APIs. During this discussion Security Team has updated us that discussions are still going about how the system scope should be used/implemented and they are planning to introduce a new role 'manager' which will act between 'admin' and 'member' roles. We need to keep an eye on this new development. https://etherpad.opendev.org/p/tc-yoga-ptg - line #446 https://etherpad.opendev.org/p/policy-popup-yoga-ptg - line #122 Recordings: https://bluejeans.com/s/AMZzGObPhK4 - Chapter 3 Thursday, October 21st 2021 # Glance- Interop interlock Due to confusion between timings this discussion didn't happen as planned. The InterOP team has added some questions later to PTG etherpad (refer line no #270). @Glance team please respond to those questions as I will be out for the next couple of weeks. # Upload volume to image in RBD backend In case when we upload a volume as an image to glance's rbd backend, it starts with a 0 size rbd image and performs resize in chunks of 8 MB which makes the operation very slow. Current idea is to pass volume size as image size to avoid these resize operations. As this change will lead to use locations API which are not advisable to use in glance due to security concerns and also checksum and multi-hash will not be available, cinder side will have new config option (default False) to use this optimization if set to True. Also this change needs a glance side to expose ceph pool information, so Rajat (whoami-rajat) will coordinate with the glance team to set up a spec and implement the same. # Upload volume to image when glance backend is cinder Similar to above topic this will have same security concerns and same config option can be used to have this optimization available. Rajat will coordinate with the glance team to implement glance side changes. Note, as we have joined the cinder team, Brian Rosmaita/Rajat will update/share the recording links for above two sessions in glance PTG etherpad once it is available. # Adding multiple tags overrides existing tags We are going to modify the create multiple tags API which will have one boolean parameter (Default to False) in the header to maintain backward compatibility. If it is True then we are going to add new tags to the existing list of tags rather than replacing those. Recordings: https://bluejeans.com/s/@ttsNs8vIFq - Chapter 1 # Delete newly created metadef resource types from DB after deassociating To maintain the consistency with other metadef APIs, we are going to add two new APIs. 1. Create resource types 2. Delete given resource type Recordings: https://bluejeans.com/s/@ttsNs8vIFq - Chapter 2 You will find the detailed information about the same in the PTG etherpad [1] along with the recordings of the sessions. Kindly let me know if you have any questions about the same. [1] https://etherpad.opendev.org/p/yoga-glance-ptg Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aschultz at redhat.com Fri Oct 22 15:03:50 2021 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 22 Oct 2021 09:03:50 -0600 Subject: [tripleo] Issue finding /etc/docker directory in Centos 8 Overcloud Deployment In-Reply-To: References: Message-ID: You need to specify the podman environment file for the overcloud deploy. You are missing some environment files that are necessary to deploy. You can't just run `openstack overcloud deploy --templates`. You should be passing addition files for configuration and networking related parameters. On Fri, Oct 22, 2021, 8:53 AM Anirudh Gupta wrote: > Hi Team, > > I am trying to install Tripleo Train Release on Centos 8 > I have successfully installed Undercloud. > > For overcloud images, I have downloaded and uploaded images given on > rdo_trunk > https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ > > For deployment of overcloud, I executed the command > > openstack overcloud deploy --templates > > This command successfully creates a stack, executed the SSH on the > machines successfully and then started some ansible tasks. > During execution of ansible tasks, it gave a error at below task > > TASK | manage /etc/docker/daemon.json > *The error message clearly states that directory /etc/docker is not found.* > > This task is present at path " > */usr/share/ansible/roles/container-registry/tasks/docker.yml"* > > Since, in Centos 8 docker has been replaced with Podman, this error is > bound to happen > Commenting this particular task can be a workaround, but I have no idea > what would be the impact of this. > There are multiple tasks which makes use of /etc/docker directory > > Is there any way to resolve this? > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.matulis at canonical.com Fri Oct 22 19:46:05 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Fri, 22 Oct 2021 15:46:05 -0400 Subject: [charms] OpenStack Charms 21.10 release is now available Message-ID: The 21.10 release of the OpenStack Charms is now available. This release brings several new features to the existing OpenStack Charms deployments for Queens, Stein, Ussuri, Victoria, Wallaby, Xena and many stable combinations of Ubuntu + OpenStack. Please see the Release notes for full details: https://docs.openstack.org/charm-guide/latest/release-notes/2110.html == Highlights == * OpenStack Xena OpenStack Xena is now supported on Ubuntu 20.04 LTS (via UCA) and Ubuntu 21.10 natively. * Cinder storage backend charms Two stable charms are now available that provide LVM and NetApp storage backends for Cinder. The new charms are cinder-lvm and cinder-netapp respectively. The cinder-lvm charm deprecates the LVM functionality of the cinder charm. A migration path is available. * Cloud operational improvements Improvements have been implemented at the operational level through the addition of many actions and configuration options to the current set of stable charms. * Tech-preview charms Two tech-preview charms are now available. The ceph-dashboard charm deploys the Ceph Dashboard and the openstack-loadbalancer charm deploys a load balancer for OpenStack applications that support the charm. * Documentation updates Ongoing improvements to the OpenStack Charms Deployment Guide, the OpenStack Charm Guide, and the charm READMEs. 
== OpenStack Charms team == The OpenStack Charms team can be contacted on the #openstack-charms IRC channel (on OFTC) or in the Ubuntu user forum: https://discourse.ubuntu.com/c/openstack/ == Thank you == Lots of thanks to the 57 contributors below who squashed 131 bugs, enabled new features, and improved the documentation! Alex Kavanagh Arif Ali Aurelien Lourot Bartosz Woronicz Billy Olsen Brett Milford Chris MacNaughton Corey Bryant Cornellius Metto Cory Johns David Ames David Negreira Diko Parvanov Dmitrii Shcherbakov Edin Sarajlic Edward Hope-Morley Eric Chen Erlon R. Felipe Reyes Frode Nordahl Gabriel Angelo Garrett Thompson Ghanshyam Mann Gustavo Sanchez Hemanth Nakkina Hernan Garcia James Page James Troup Jarred Wilson John P Jose Phillips Julien Thieffry Liam Young Linda Guo Luciano Lo Martin Kalcok Nicholas Njihia Nicolas Bock Nikhil Kshirsagar Nobuto Murata Peter Matulis Robert Gildein Rodrigo Barbieri Sean McGinnis Simon Dodsley Stephan Pampel Trent Lloyd Vladimir Grevtsev Xav Paice Yoshi Kadokawa YuehuiLei Zhang Hua eric-chen jiangzhilin likui tushargite96 yangyawei -- OpenStack Charms Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri Oct 22 22:06:02 2021 From: zigo at debian.org (Thomas Goirand) Date: Sat, 23 Oct 2021 00:06:02 +0200 Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min] In-Reply-To: References: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com> <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org> <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net> <20211018121818.rerqlp7ek7z3rnya@yuggoth.org> <19a6d9d2-ed39-39cd-1c85-8fee97f93b8b@debian.org> Message-ID: <4b4a3c1f-67eb-9e77-ec54-b5d2a698c4a2@debian.org> On 10/21/21 1:18 PM, Dmitry Tantsur wrote: > Side note: calling setup.py is essentially deprecated: > https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html > This is essentially bullshit: the replacement method proposed in this page is supposed to be "pip install", which does dependency resolving and download from the internet, which is not useful (and even forbidden at package build time) for downstream distributions. If using pip is the only thing that upstream Python people are proposing to distributions, without any distro-specific options (like the --install-layout=deb option of setuptools, for example), then something really wrong is going on. Unless I missed something and upstream Python likes us from now on?!? > Right now, when looking at the skyline-apiserver as the Debian OpenStack > package maintainer, I'd need a lot of manual work to use pyproject.toml > instead of my standard tooling. > > > PyProject is the universal way forward, you'll (and we'll) need to adopt > sooner or later. There's currently an effort going on to get poetry packaged in Debian. However, we're not there yet, but I have hope for it. Anyway, this is orthogonal to what I wrote: IMO skyline needs to conform with everything that's made elsewhere in OpenStack, and adopt the same standards. If OpenStack is ever to move to Poetry (which means: all projects), then maybe I will revise this thinking. 
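For anyone not following the packaging details, the difference in practice looks roughly like this (simplified sketch, exact flags vary per package and per distro tooling):

  # what a Debian package build effectively runs today: offline, into a
  # staging directory, using the distro-specific install layout
  python3 setup.py install --install-layout=deb --root=debian/tmp

  # what the deprecation notice points people to instead, which by default
  # wants to resolve and download dependencies from the network
  pip install .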
Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Fri Oct 22 22:40:29 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 22 Oct 2021 22:40:29 +0000 Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min] In-Reply-To: <4b4a3c1f-67eb-9e77-ec54-b5d2a698c4a2@debian.org> References: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com> <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org> <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net> <20211018121818.rerqlp7ek7z3rnya@yuggoth.org> <19a6d9d2-ed39-39cd-1c85-8fee97f93b8b@debian.org> <4b4a3c1f-67eb-9e77-ec54-b5d2a698c4a2@debian.org> Message-ID: <20211022224028.blm3djon2rrgbguk@yuggoth.org> On 2021-10-23 00:06:02 +0200 (+0200), Thomas Goirand wrote: [...] > This is essentially bullshit: the replacement method proposed in this > page is supposed to be "pip install", which does dependency resolving > and download from the internet, which is not useful (and even forbidden > at package build time) for downstream distributions. > > If using pip is the only thing that upstream Python people are proposing > to distributions, without any distro-specific options (like the > --install-layout=deb option of setuptools, for example), then something > really wrong is going on. > > Unless I missed something and upstream Python likes us from now on?!? [...] Their new answer to "how should we be building binary packages from Python source distributions" is this (already packaged in Debian): https://packages.debian.org/python3-build The idea is that a source distribution may have a variety of different possible build backends per PEP 517, but as long as you run `python3 -m build` (and the relevant build dependencies are made available somehow) then it shouldn't matter. Calling directly into setup.py is only going to work for projects which use Setuptools as their build backend and provide a setup.py, of course, probably not for any other build backend. > IMO skyline needs to conform with everything that's made elsewhere > in OpenStack, and adopt the same standards. [...] I agree, and I attempted to leave some detailed notes on https://review.opendev.org/814037 highlighting the differences I spotted, which I think they're going to want to address for consistency with the rest of OpenStack. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Oct 22 22:09:47 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 22 Oct 2021 15:09:47 -0700 Subject: [all][tc] Skyline as a new official project [was: What's happening in Technical Committee: summary 15th Oct, 21: Reading: 5 min] In-Reply-To: <4b4a3c1f-67eb-9e77-ec54-b5d2a698c4a2@debian.org> References: <17c84b557a6.12930f0e21106266.7388077538937209855@ghanshyammann.com> <446265bd-eb57-a5a6-2f5d-937c6cdad372@debian.org> <93C08133-972B-44BE-9F2A-661A1B86651F@99cloud.net> <20211018121818.rerqlp7ek7z3rnya@yuggoth.org> <19a6d9d2-ed39-39cd-1c85-8fee97f93b8b@debian.org> <4b4a3c1f-67eb-9e77-ec54-b5d2a698c4a2@debian.org> Message-ID: On Fri, Oct 22, 2021, at 3:06 PM, Thomas Goirand wrote: > On 10/21/21 1:18 PM, Dmitry Tantsur wrote: >> Side note: calling setup.py is essentially deprecated: >> https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html >> > > This is essentially bullshit: the replacement method proposed in this > page is supposed to be "pip install", which does dependency resolving > and download from the internet, which is not useful (and even forbidden > at package build time) for downstream distributions. > > If using pip is the only thing that upstream Python people are proposing > to distributions, without any distro-specific options (like the > --install-layout=deb option of setuptools, for example), then something > really wrong is going on. > > Unless I missed something and upstream Python likes us from now on?!? I think https://pypa-build.readthedocs.io/en/latest/ may be the new tool that is a bit more usable for distros. But I've not used it at all so couldn't comment on its applicability. From aschultz at redhat.com Sat Oct 23 04:47:36 2021 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 22 Oct 2021 22:47:36 -0600 Subject: [tripleo] Issue finding /etc/docker directory in Centos 8 Overcloud Deployment In-Reply-To: References: Message-ID: Depends on your desired feature configuration. https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/index.html In train on CentOS 8 you need to specify podman.yaml for containerized deployment due to the lack of docker. https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/overcloud.html#deploying-the-containerized-overcloud It's -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml This was switched in Ussuri but because train could be installed on 7 or 8 it had this issue. On Fri, Oct 22, 2021, 10:39 PM Anirudh Gupta wrote: > Hi Alex > Thanks for your reply. > I am relatively new to TripleO, Can you please suggest what environment > files are necessary to be passed? > > Regards > Anirudh Gupta > > On Fri, 22 Oct, 2021, 8:34 pm Alex Schultz, wrote: > >> You need to specify the podman environment file for the overcloud >> deploy. You are missing some environment files that are necessary to >> deploy. You can't just run `openstack overcloud deploy --templates`. You >> should be passing addition files for configuration and networking related >> parameters. >> >> On Fri, Oct 22, 2021, 8:53 AM Anirudh Gupta wrote: >> >>> Hi Team, >>> >>> I am trying to install Tripleo Train Release on Centos 8 >>> I have successfully installed Undercloud. 
>>> >>> For overcloud images, I have downloaded and uploaded images given on >>> rdo_trunk >>> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >>> >>> For deployment of overcloud, I executed the command >>> >>> openstack overcloud deploy --templates >>> >>> This command successfully creates a stack, executed the SSH on the >>> machines successfully and then started some ansible tasks. >>> During execution of ansible tasks, it gave a error at below task >>> >>> TASK | manage /etc/docker/daemon.json >>> *The error message clearly states that directory /etc/docker is not >>> found.* >>> >>> This task is present at path " >>> */usr/share/ansible/roles/container-registry/tasks/docker.yml"* >>> >>> Since, in Centos 8 docker has been replaced with Podman, this error is >>> bound to happen >>> Commenting this particular task can be a workaround, but I have no idea >>> what would be the impact of this. >>> There are multiple tasks which makes use of /etc/docker directory >>> >>> Is there any way to resolve this? >>> >>> >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandronic888 at gmail.com Sat Oct 23 15:31:01 2021 From: sandronic888 at gmail.com (S Andronic) Date: Sat, 23 Oct 2021 16:31:01 +0100 Subject: [glance][nova][cinder] Openstack Glance image signature and validation for upload and boot controls? In-Reply-To: References: Message-ID: Hi Brian, Thank you very much for your reply and the references, it has been most helpful. Kind regards S. Andronic On Fri, 22 Oct 2021, 00:52 Brian Rosmaita, wrote: > On 10/20/21 8:24 AM, S Andronic wrote: > > Hi, > > > > I have a question in regards to Openstack Glance and if I got it right > > this can be a place to ask, if I am wrong please kindly point me in the > > right direction. > > > > When you enable Image Signing and Certificate Validation in nova.conf: > > [glance] > > verify_glance_signatures = True > > enable_certificate_validation = True > > Note: Since Rocky, if you have enable_certificate_validation = True but > have default_trusted_certificate_ids at its default value of empty list, > then a user must supply a list of trusted_image_certificates in the > create-server request, or the request will fail. > > > > Will this stop users from uploading unsigned images > > No, glance doesn't have a setting that requires uploaded images to be > signed. However: > - If the image record contains *all* the appropriate image signature > properties, the PUT /v2/images/{image_id}/file call will fail if the > data can't be validated. > - You could write an image import plugin that would disallow import of > image data for which the image record doesn't have the image signature > properties set. > > > or using unsigned > > images to spin up instances? > > Yes, if verify_glance_signatures is True, nova won't boot unsigned images: > > https://docs.openstack.org/nova/latest/configuration/config.html#glance.verify_glance_signatures > > > Intuitively I feel that it will enforce checks only if the signature > > property exists, but what if it doesn't? > > See above. > > > Does it control in any way unsigned images? > > Yes, if verify_glance_signatures is True, unsigned images can't be used > to boot an instance. > > > Does it stop users from uploading or using anything unsigned? > > No, glance doesn't require it. > > > Would an image without the signing properties just be rejected? > > It depends on what service you are talking about: > > Glance: no, glance won't reject an unsigned image. 
> > Nova: yes, if verify_glance_signatures is set. > > Cinder: it depends ... if verify_glance_signatures is enabled: > - if you create a volume from an image AND the image has *any* of the > image signature properties set, cinder will try to validate the image > data and the volume will go to error if validation fails. If the > validation succeeds, you get signature_verified: true in the > volume-image-metadata. > - if you create a volume from an image AND the image has NONE of the > image signature properties, the volume creation will succeed (assuming > nothing else goes wrong) and you get signature_verified: false in the > volume-image-metadata. > > But ... Nova won't do certificate validation for a boot-from-volume > request (as described in [0]). But I'm not clear on what happens if > verify_glance_signatures is true and enable_certificate_validation is > false. I believe that nova will boot the volume on the theory that > cinder has already handled the signature validation part (which it has, > if the option is enabled and at least one image signature property is > set on the image), and it's the certificate validation part that isn't > being handled? Hopefully someone else will explain this. > > [0] > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/nova-validate-certificates.html > > > If this feature doesn't stop the use of unsigned images as a security > > control what is the logic behind it then? > > I guess you can look at the spec to see what threat models the feature > was proposed to address: > > https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/image-verification.html > > > Is this meant not to stop users from using unsigned images but such > > that people who do use signed images have verification for their code? > > This is a good question, and the asymmetry between how nova and cinder > treat requests to create a resource from an unsigned image when > verify_glance_images is enabled makes this difficult to answer (at least > for me). > > > So if the goal is to stop people from using random images and image > > signing and validation is not the answer what would be? > > It really depends on what your cloud users want/need, and what you mean > by a "random image". For example, you could only allow public images > provided by you the operator to be used to boot servers by blocking > image uploads and server snapshots, or allowing snapshots but not > allowing image sharing (which could get you "random" images, but they'd > be restricted to a single project, which would probably be OK). Like I > said, it depends on your goals and what your users will put up with (I > think users would absolutely hate not being able to create server > snapshots, but there are probably some users for whom that wouldn't be a > problem). > > While we're talking about server snapshots, however, note that with > verify_glance_images enabled in nova, you can boot a server from a > signed image and then use the server createImage action to create an > image in Glance. This image won't have the image signature properties > on it, however, and hence won't be bootable. Your users will have to > download the image so they can generate a signature for it and then set > all the image signature metadata on the image before it nova will boot > it. (I'm pretty sure this is true.) > > You may want to send another email with '[ops]' in the subject line to > ask other operators who use this feature what their configuration and > experiences are like. 
> > > > > Kind Regards, > > S. Andronic > > Good luck! > brian > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Sat Oct 23 04:39:22 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Sat, 23 Oct 2021 10:09:22 +0530 Subject: [tripleo] Issue finding /etc/docker directory in Centos 8 Overcloud Deployment In-Reply-To: References: Message-ID: Hi Alex Thanks for your reply. I am relatively new to TripleO, Can you please suggest what environment files are necessary to be passed? Regards Anirudh Gupta On Fri, 22 Oct, 2021, 8:34 pm Alex Schultz, wrote: > You need to specify the podman environment file for the overcloud deploy. > You are missing some environment files that are necessary to deploy. You > can't just run `openstack overcloud deploy --templates`. You should be > passing addition files for configuration and networking related parameters. > > On Fri, Oct 22, 2021, 8:53 AM Anirudh Gupta wrote: > >> Hi Team, >> >> I am trying to install Tripleo Train Release on Centos 8 >> I have successfully installed Undercloud. >> >> For overcloud images, I have downloaded and uploaded images given on >> rdo_trunk >> https://images.rdoproject.org/centos8/train/rdo_trunk/current-tripleo/ >> >> For deployment of overcloud, I executed the command >> >> openstack overcloud deploy --templates >> >> This command successfully creates a stack, executed the SSH on the >> machines successfully and then started some ansible tasks. >> During execution of ansible tasks, it gave a error at below task >> >> TASK | manage /etc/docker/daemon.json >> *The error message clearly states that directory /etc/docker is not >> found.* >> >> This task is present at path " >> */usr/share/ansible/roles/container-registry/tasks/docker.yml"* >> >> Since, in Centos 8 docker has been replaced with Podman, this error is >> bound to happen >> Commenting this particular task can be a workaround, but I have no idea >> what would be the impact of this. >> There are multiple tasks which makes use of /etc/docker directory >> >> Is there any way to resolve this? >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasemin.demiral at tubitak.gov.tr Sat Oct 23 19:37:42 2021 From: yasemin.demiral at tubitak.gov.tr (Yasemin =?utf-8?Q?DEM=C4=B0RAL_=28BILGEM_BTE=29?=) Date: Sat, 23 Oct 2021 22:37:42 +0300 (EET) Subject: [magnum] [victoria] [fedora-coreos] [ssl] Message-ID: <340489778.121168292.1635017862327.JavaMail.zimbra@tubitak.gov.tr> Hi I work on OpenStack stable/victoria version with OSA. I use CoreOS 31.20200323.3.2 to create kubernetes cluster. When the cluster creating, I can't connect with SSH on linux or windows, but I can connect on MacOS. Is there any idea about that. I need to connect SSH to master node while creating section at Windows machine. Than is it possible to be insecure keystone service in OpenStack magnum environment? Is SSL certificate mandatory? Thank you Yasemin DEM?RAL Senior Researcher at TUBITAK BILGEM B3LAB Safir Cloud Scrum Master [ http://www.tubitak.gov.tr/tr/icerik-sorumluluk-reddi ] -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlibosva at redhat.com Sat Oct 23 21:53:16 2021 From: jlibosva at redhat.com (Jakub Libosvar) Date: Sat, 23 Oct 2021 17:53:16 -0400 Subject: [Neutron] Bug Deputy Report October 19 - 23 Message-ID: <3542704c-b691-ddab-61c5-0cfc4d524edc@redhat.com> Hi all, I was the bug deputy for the week starting Oct 19. 
Here is the report, no critical bugs, it was a quiet week, maybe because of the PTG :)

Medium
------
* "network_namespace_exists" can timeout in loaded systems
  Link: https://bugs.launchpad.net/neutron/+bug/1947974
  Assigned to Rodolfo
  Fix proposed: https://review.opendev.org/c/openstack/neutron/+/814868

* Non HA router - missing iptables rule for redirect metadata queries to haproxy
  Link: https://bugs.launchpad.net/neutron/+bug/1947993
  Assigned to Slawek
  Fix proposed: https://review.opendev.org/c/openstack/neutron/+/814892

* [OVN] Logical router port must have at least one network
  Link: https://bugs.launchpad.net/neutron/+bug/1948457
  Needs an assignee

Low
---
* qvo ports are not removed correctly when an instance is deleted immediately after creation
  Link: https://bugs.launchpad.net/neutron/+bug/1948452
  Needs an assignee

* [OVN] Mech driver fails to delete DHCP options during subnet deletion
  Link: https://bugs.launchpad.net/neutron/+bug/1948466
  Needs an assignee

Kuba

From wodel.youchi at gmail.com Sat Oct 23 21:56:59 2021
From: wodel.youchi at gmail.com (wodel youchi)
Date: Sat, 23 Oct 2021 22:56:59 +0100
Subject: Freezer agent on Wallaby
Message-ID:

Hi,

Has anyone been able to make Freezer work with Openstack Wallaby?

Regards.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wodel.youchi at gmail.com Sun Oct 24 11:00:32 2021
From: wodel.youchi at gmail.com (wodel youchi)
Date: Sun, 24 Oct 2021 12:00:32 +0100
Subject: Freezer agent on Wallaby
In-Reply-To:
References:
Message-ID:

Hi,

Every time I try to back up I get this error message:

freezer.main [-] can only concatenate str (not "bytes") to str: TypeError: can only concatenate str (not "bytes") to str
2021-10-24 11:58:06.636 4371 ERROR freezer.main Traceback (most recent call last):
2021-10-24 11:58:06.636 4371 ERROR freezer.main   File "/usr/local/lib/python3.8/site-packages/freezer/main.py", line 272, in main
2021-10-24 11:58:06.636 4371 ERROR freezer.main     freezer_main(backup_args)
2021-10-24 11:58:06.636 4371 ERROR freezer.main   File "/usr/local/lib/python3.8/site-packages/freezer/main.py", line 137, in freezer_main
2021-10-24 11:58:06.636 4371 ERROR freezer.main     return run_job(backup_args, storage)
2021-10-24 11:58:06.636 4371 ERROR freezer.main   File "/usr/local/lib/python3.8/site-packages/freezer/main.py", line 150, in run_job
2021-10-24 11:58:06.636 4371 ERROR freezer.main     response = freezer_job.execute()
2021-10-24 11:58:06.636 4371 ERROR freezer.main   File "/usr/local/lib/python3.8/site-packages/freezer/job.py", line 200, in execute
2021-10-24 11:58:06.636 4371 ERROR freezer.main     backup_level = self.backup(app_mode)
2021-10-24 11:58:06.636 4371 ERROR freezer.main   File "/usr/local/lib/python3.8/site-packages/freezer/job.py", line 401, in backup
2021-10-24 11:58:06.636 4371 ERROR freezer.main     backup_os.backup_cinder_by_glance(self.conf.cinder_vol_id)
2021-10-24 11:58:06.636 4371 ERROR freezer.main   File "/usr/local/lib/python3.8/site-packages/freezer/openstack/backup.py", line 79, in backup_cinder_by_glance
2021-10-24 11:58:06.636 4371 ERROR freezer.main     self.storage.add_stream(stream, package, headers=headers)
2021-10-24 11:58:06.636 4371 ERROR freezer.main   File "/usr/local/lib/python3.8/site-packages/freezer/storage/fslike.py", line 97, in add_stream
2021-10-24 11:58:06.636 4371 ERROR freezer.main     for el in stream:
2021-10-24 11:58:06.636 4371 ERROR freezer.main   File "/usr/local/lib/python3.8/site-packages/freezer/utils/utils.py", line 281, in __next__
2021-10-24 11:58:06.636 4371 ERROR
freezer.main self.reminder += self.stream.next() 2021-10-24 11:58:06.636 4371 ERROR freezer.main TypeError: can only concatenate str (not "bytes") to str 2021-10-24 11:58:06.636 4371 ERROR freezer.main 2021-10-24 11:58:06.637 4371 CRITICAL freezer.main [-] Run freezer agent process unsuccessfully 2021-10-24 11:58:06.637 4371 CRITICAL freezer.main [-] Critical Error: can only concatenate str (not "bytes") to str* Regards. Le sam. 23 oct. 2021 ? 22:56, wodel youchi a ?crit : > Hi, > > Has anyone been able to make Freezer work with Openstack Wallaby? > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasemin.demiral at tubitak.gov.tr Sun Oct 24 12:01:36 2021 From: yasemin.demiral at tubitak.gov.tr (Yasemin =?utf-8?Q?DEM=C4=B0RAL_=28BILGEM_BTE=29?=) Date: Sun, 24 Oct 2021 15:01:36 +0300 (EET) Subject: [magnum] [fcos33] In-Reply-To: <1135213573.121319386.1635076726264.JavaMail.zimbra@tubitak.gov.tr> References: <1135213573.121319386.1635076726264.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <1658669092.121319863.1635076896727.JavaMail.zimbra@tubitak.gov.tr> Hi, How can I dowloand fcos 33? I can't find any link for dowloanding it. Yasemin DEM?RAL [ http://www.tubitak.gov.tr/tr/icerik-sorumluluk-reddi ] Senior Researcher at TUBITAK BILGEM B3LAB Safir Cloud Scrum Master Kimden: "Vikarna Tathe" Kime: "Ammad Syed" Kk: "openstack-discuss" G?nderilenler: 19 Ekim Sal? 2021 16:23:20 Konu: Re: Openstack magnum Hi Ammad, Thanks!!! It worked. On Tue, 19 Oct 2021 at 15:00, Vikarna Tathe < [ mailto:vikarnatathe at gmail.com | vikarnatathe at gmail.com ] > wrote: Hi Ammad, Yes, fcos34. Let me try with fcos33. Thanks On Tue, 19 Oct 2021 at 14:52, Ammad Syed < [ mailto:syedammad83 at gmail.com | syedammad83 at gmail.com ] > wrote: BQ_BEGIN Hi, Which fcos image you are using ? It looks like you are using fcos 34. Which is currently not supported. Use fcos 33. On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe < [ mailto:vikarnatathe at gmail.com | vikarnatathe at gmail.com ] > wrote: BQ_BEGIN Hi All, I was able to login to the instance. I see that kubelet service is in activating state. When I checked the journalctl, found the below. Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Started Kubelet via Hyperkube (System Container). Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs /sys/fs/cgroup/systemd: no such file or directory Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=125/n/a Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via Hyperkube (System Container). Executed the below command to fix this issue. mkdir -p /sys/fs/cgroup/systemd Now I am getiing the below error. Has anybody seen this issue. failed to get the kubelet's cgroup: mountpoint for cpu not found. Kubelet system container metrics may be missing. failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. failed to run Kubelet: mountpoint for not found On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe < [ mailto:vikarnatathe at gmail.com | vikarnatathe at gmail.com ] > wrote: BQ_BEGIN BQ_BEGIN Hi Ammad, Thanks for responding. 
Yes the instance is getting created, but i am unable to login though i have generated the keypair. There is no default password for this image to login via console. openstack server list +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ | cf955a75-8cd2-4f91-a01f-677159b57cb2 | k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, 10.14.20.181 | fedora-coreos-latest | m1.large | ssh -i id_rsa [ mailto:core at 10.14.20.181 | core at 10.14.20.181 ] The authenticity of host '10.14.20.181 (10.14.20.181)' can't be established. ECDSA key fingerprint is SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known hosts. [ mailto:core at 10.14.20.181 | core at 10.14.20.181 ] : Permission denied (publickey,gssapi-keyex,gssapi-with-mic). On Mon, 18 Oct 2021 at 14:02, Ammad Syed < [ mailto:syedammad83 at gmail.com | syedammad83 at gmail.com ] > wrote: BQ_BEGIN Hi, Can you check if the master server is deployed as a nova instance ? if yes, then login to the instance and check cloud-init and heat agent logs to see the errors. Ammad On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe < [ mailto:vikarnatathe at gmail.com | vikarnatathe at gmail.com ] > wrote: BQ_BEGIN Hello All, I am trying to create a kubernetes cluster using magnum. Image: fedora-coreos. The stack gets stucked in CREATE_IN_PROGRESS. See the output below. 
openstack coe cluster list +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ | uuid | name | keypair | node_count | master_count | status | health_status | +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | 2 | 1 | CREATE_IN_PROGRESS | None | +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | attributes | {'refs_map': None, 'removed_rsrc_list': [], 'attributes': None, 'refs': None} | | creation_time | 2021-10-18T06:44:02Z | | description | | | links | [{'href': ' [ http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters | http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters ] ', 'rel': 'self'}, {'href': ' [ http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17 | http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17 ] ', 'rel': 'stack'}, {'href': ' [ http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028 | http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028 ] ', 'rel': 'nested'}] | | logical_resource_id | kube_masters | | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 | | required_by | ['kube_cluster_deploy', 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] | | resource_name | kube_masters | | resource_status | CREATE_IN_PROGRESS | | resource_status_reason | state changed | | resource_type | OS::Heat::ResourceGroup | | updated_time | 2021-10-18T06:44:02Z | 
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Vikarna -- Regards, Syed Ammad Ali BQ_END BQ_END BQ_END BQ_END -- Regards, Syed Ammad Ali BQ_END BQ_END -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Mon Oct 25 01:09:52 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Mon, 25 Oct 2021 01:09:52 +0000 Subject: [cyborg][ptg] Meeting time changes Message-ID: <15ada75314504fcbbfc8b35429436718@inspur.com> Hi all, At PTG, base the contributors of Cyborg, we discussed the meeting time of Cyborg. By voting on the meeting time on PTG, everyone agreed to adjust the meeting time to Friday 6:00UTC-7:00UTC (Beijing time at China, 2:00-3:00pm). We look forward to your participation in our conference, and you can submit any requirements to us. You can register for the feature on Launchpad [1] at first. Later I will send a summary of our discussion on Cyborg PTG. [1] https://blueprints.launchpad.net/openstack-cyborg brinzhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Mon Oct 25 04:59:10 2021 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 25 Oct 2021 10:29:10 +0530 Subject: [Glance] Yoga PTG Summary In-Reply-To: References: Message-ID: On Fri, Oct 22, 2021 at 8:27 PM Abhishek Kekane wrote: > Hi All, > > We had our fourth virtual PTG between 18th October to 22nd October 2020. > Thanks to everyone who joined the virtual PTG sessions. Using bluejeans app > we had lots of discussion around different topics for glance, glance + > cinder and Secure RBAC. > > I have created etherpad [1] with Notes from the session and which also > includes the recordings of each discussion. Here is a short summary of the > discussions. > > Tuesday, October 19th > # Xena Retrospective > > On the positive note, we merged a number of useful features this cycle. We > managed to implement a project scope of secure RBAC for metadef APIs, > Implemented quotas using unified limits and moved policy enforcing closer > to API layer. > We also manage to wipe out many bugs from our bug backlog. On the other > side we need to improve our documentation and API reference guide. > > Recording: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 1 > > # Cache API > During Xena cycle we managed to start this work and implement the core > functionality but failed to merge it due to lack of negative tests and > tempest coverage. In the Yoga cycle we are going to focus on adding tempest > coverage for the same along with a new API to cache the given image > immediately rather than waiting for a periodic job to pre-cache it for us. > > Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 2 > > # Native Image Encryption > Unfortunately this topic is sitting in our agenda for the last couple of > PTGs. Current update is the core feature is depending on Barbican > microversion work and once it is complete then the Consumer API can be > functional again. 
At the moment in Glance we have decided to go ahead and > implement the glance side part and instead of having placeholder (barbican > consumer API secret register and deletion) code as commented we can have a > WIP patch for the same with depending on glance side work. > > Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 3 > > # Default Glance to configure multiple stores > Glance has deprecated single stores configuration since Stein cycle and > now will start putting efforts to deploy glance using multistore by-default > and then remove single store support from glance. > This might be going to take a couple of cycles, so in yoga we are going to > migrate internal unit and functional tests to use multistore config and > also going to modify devstack to deploy glance using multistore > configuration for swift and Ceph (for file and cinder it's already > supported). > > Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 4 > > # Quotas Usage API > In Xena we have implemented quotas for images API using unified limits. > This cycle we will add new APIs which will help enduser to get the clear > picture of quotas like What is total quota, used quota and remaining quota. > So first we are coming up with the spec for the design and then > implementation for the same. > > Recordings: https://bluejeans.com/s/QuCBmIrMuVv - Chapter 5 > > > Wednesday, October 20th 2021 > > # Policy Refactoring - Part 2 > In Xena we have managed to move all policy checks to the API layer. This > cycle we need to work on removing dead code of policy and authorization > layer. > Before removing both the layers we need to make sure that property > protection is working as expected and for the same we need to add one job > with a POST script to verify that removing auth and policy layer will not > break property protection. > > Recordings: https://bluejeans.com/s/AMZzGObPhK4 - Chapter 1 > > # Database read only checks > We have added new RBAC policy checks which are equivalent to readonly > checks in our db layer, e.g. image ownership check, visibility check etc. > To kick start work for this dansmith (thanks for volunteering) will work > on PoC and abhishekk will work on specs about how we will modify/improve > our db layer. > > # Secure RBAC - System Scope/Project Admin scope > In Xena we have managed to move all policy checks to API layer and > implemented project scope of metadef APIs. So as of now we have project > scope for all glance APIs. During this discussion Security Team has updated > us that discussions are still going about how the system scope should be > used/implemented and they are planning to introduce a new role 'manager' > which will act between 'admin' and 'member' roles. We need to keep an eye > on this new development. > > https://etherpad.opendev.org/p/tc-yoga-ptg - line #446 > https://etherpad.opendev.org/p/policy-popup-yoga-ptg - line #122 > > Recordings: https://bluejeans.com/s/AMZzGObPhK4 - Chapter 3 > > > Thursday, October 21st 2021 > > # Glance- Interop interlock > Due to confusion between timings this discussion didn't happen as planned. > The InterOP team has added some questions later to PTG etherpad (refer line > no #270). @Glance team please respond to those questions as I will be out > for the next couple of weeks. > > # Upload volume to image in RBD backend > In case when we upload a volume as an image to glance's rbd backend, it > starts with a 0 size rbd image and performs resize in chunks of 8 MB which > makes the operation very slow. 
Current idea is to pass volume size as image > size to avoid these resize operations. As this change will lead to use > locations API which are not advisable to use in glance due to security > concerns and also checksum and multi-hash will not be available, cinder > side will have new config option (default False) to use this optimization > if set to True. Also this change needs a glance side to expose ceph pool > information, so Rajat (whoami-rajat) will coordinate with the glance team > to set up a spec and implement the same. > Just wanted to update the topic summary with a few details. Currently the RBD store uses a faster mechanism by increasing the resize chunk two times i.e. initially it will be 8 MB then 16M, 32M, 64M, 128M... and will take 7 resizes to reach 1GB size which is an improvement but still the whole volume is copied chunk by chunk. The current discussion around this is for cinder to use RBD COW cloning which is significantly faster than the generic approach and can be seen in my performance testing here[1]. Further details can be found in the spec. Thanks for the summary Abhshek! [1] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_131/810363/6/check/openstack-tox-docs/131e18d/docs/specs/yoga/optimize-upload-volume-to-rbd-store.html#performance-impact > > # Upload volume to image when glance backend is cinder > Similar to above topic this will have same security concerns and same > config option can be used to have this optimization available. Rajat will > coordinate with the glance team to implement glance side changes. > > Note, as we have joined the cinder team, Brian Rosmaita/Rajat will > update/share the recording links for above two sessions in glance PTG > etherpad once it is available. > > # Adding multiple tags overrides existing tags > We are going to modify the create multiple tags API which will have one > boolean parameter (Default to False) in the header to maintain backward > compatibility. If it is True then we are going to add new tags to the > existing list of tags rather than replacing those. > > Recordings: https://bluejeans.com/s/@ttsNs8vIFq - Chapter 1 > > # Delete newly created metadef resource types from DB after deassociating > To maintain the consistency with other metadef APIs, we are going to add > two new APIs. > 1. Create resource types > 2. Delete given resource type > > Recordings: https://bluejeans.com/s/@ttsNs8vIFq - Chapter 2 > > You will find the detailed information about the same in the PTG etherpad > [1] along with the recordings of the sessions. Kindly let me know if you > have any questions about the same. > > [1] https://etherpad.opendev.org/p/yoga-glance-ptg > > Thank you, > > Abhishek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Oct 25 07:55:08 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 25 Oct 2021 09:55:08 +0200 Subject: [Neutron] Bug Deputy Report October 19 - 23 In-Reply-To: <3542704c-b691-ddab-61c5-0cfc4d524edc@redhat.com> References: <3542704c-b691-ddab-61c5-0cfc4d524edc@redhat.com> Message-ID: <1875460.PYKUYFuaPT@p1> Hi, On sobota, 23 pa?dziernika 2021 23:53:16 CEST Jakub Libosvar wrote: > Hi all, > > I was the bug deputy for the week starting Oct 19. 
Here is the report, > no critical bugs, it was a quiet week, maybe because of the PTG :) > > > Medium > ------ > * "network_namespace_exists"can timeout in loaded systems > Link: https://bugs.launchpad.net/neutron/+bug/1947974 > Assigned to Rodolfo > Fix proposed: https://review.opendev.org/c/openstack/neutron/+/814868 > > * Non HA router - missing iptables rule for redirect metadata > queries to haproxy > Link: https://bugs.launchpad.net/neutron/+bug/1947993 > Assigned to Slawek > Fix proposed: https://review.opendev.org/c/openstack/neutron/+/814892 > > > * [OVN] Logical router port must have at least one network Edit > Link: https://bugs.launchpad.net/neutron/+bug/1948457 > Needs an assignee > > Low > --- > * qvo ports are not removed correctly when an instance is deleted > immediately after creation > Link: https://bugs.launchpad.net/neutron/+bug/1948452 > Needs an assignee I moved that one to os-vif as Neutron don't create/remove qvo ports in the br- int. It's os-vif who is doing that AFAIR. > > * [OVN] Mech driver fails to delete DHCP options during subnet deletion > Link: https://bugs.launchpad.net/neutron/+bug/1948466 > Needs an assignee > > > Kuba -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From thierry at openstack.org Mon Oct 25 12:17:38 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 25 Oct 2021 14:17:38 +0200 Subject: [largescale-sig] Next meeting: Oct 27th, 15utc Message-ID: <923c8336-01ee-83f9-d6ad-997c11bb78bb@openstack.org> Hi everyone, The Large Scale SIG meeting is back this Wednesday in #openstack-operators on OFTC IRC, at 15UTC. You can doublecheck how that time translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20211027T15 A number of topics have already been added to the agenda, including discussing the topic of our next OpenInfra.Live show! Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From munnaeebd at gmail.com Mon Oct 25 12:22:26 2021 From: munnaeebd at gmail.com (Md. Hejbul Tawhid MUNNA) Date: Mon, 25 Oct 2021 18:22:26 +0600 Subject: Octavia loadbalancer status offline Message-ID: Hi, We have installed openstack ussuri version from ubuntu universe repository. We have installed octavia 6.2.0 version. after creating loadbalancer , listener and pool all are offline. but the LB operation is working as expected. changing the pool member is also working. octavia is installed in compute node. 5555 is listening and allowed in iptables (iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT). amphora to octavia-worker(172.16.0.2) is reachable. Any idea to troubleshoot this issue Please find the log from octavia-worker ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// 2021-10-25 18:15:13.482 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. 
Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:15:18.490 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:15:23.507 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:15:28.511 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:15:33.521 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:15:38.529 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:15:43.540 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:15:48.549 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:15:53.554 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. 
Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:15:58.561 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2021-10-25 18:16:04.707 1192307 INFO octavia.controller.worker.v1.tasks.database_tasks [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ALLOCATED in DB for amphora: c661b828-1690-4866-8152-f745c43e0977 with compute id c9133819-b8e0-42d6-9544-bf83e3ad4b3f for load balancer: c0bd3e21-6983-40c9-8713-859194496b37 2021-10-25 18:16:40.660 1192307 INFO octavia.controller.worker.v1.tasks.database_tasks [req-78e78f29-3bdb-4b12-ae76-b8daa4926c09 - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ACTIVE in DB for load balancer id: c0bd3e21-6983-40c9-8713-859194496b37 2021-10-25 18:16:44.317 1192307 INFO octavia.controller.queue.v1.endpoints [-] Creating listener 'cc45192d-de70-4d59-857b-ac23c4fc8d07'... 2021-10-25 18:16:44.325 1192307 WARNING octavia.controller.worker.v1.controller_worker [-] Failed to fetch listener cc45192d-de70-4d59-857b-ac23c4fc8d07 from DB. Retrying for up to 60 seconds. 2021-10-25 18:17:35.375 1192307 INFO octavia.controller.queue.v1.endpoints [-] Creating pool '9d23855f-d849-4ad9-9de1-66ab5cd268eb'... 2021-10-25 18:17:35.382 1192307 WARNING octavia.controller.worker.v1.controller_worker [-] Failed to fetch pool 9d23855f-d849-4ad9-9de1-66ab5cd268eb from DB. Retrying for up to 60 seconds. 2021-10-25 18:18:02.814 1192307 INFO octavia.controller.queue.v1.endpoints [-] Creating member '29bb41e5-457c-43ba-9149-5af55e73fe38'... 2021-10-25 18:18:02.825 1192307 WARNING octavia.controller.worker.v1.controller_worker [-] Failed to fetch member 29bb41e5-457c-43ba-9149-5af55e73fe38 from DB. Retrying for up to 60 seconds. Regards, Munna -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Oct 25 13:14:25 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 25 Oct 2021 09:14:25 -0400 Subject: [cinder] reminder: this week's meeting in video+IRC Message-ID: <14efed32-a39b-2806-6354-955f6902e863@gmail.com> Quick reminder that this week's Cinder team meeting on Wednesday 27 October, being the final meeting of the month, will be held in both videoconference and IRC at the regularly scheduled time of 1400 UTC. These are the video meeting rules we've agreed to: * Everyone will keep IRC open during the meeting. * We'll take notes in IRC to leave a record similar to what we have for our regular IRC meetings. * Some people are more comfortable communicating in written English. So at any point, any attendee may request that the discussion of the current topic be conducted entirely in IRC. * The meeting will be recorded. 
connection info: https://bluejeans.com/3228528973 meeting agenda: https://etherpad.opendev.org/p/cinder-yoga-meetings cheers, brian From peljasz at yahoo.co.uk Mon Oct 25 14:20:41 2021 From: peljasz at yahoo.co.uk (lejeczek) Date: Mon, 25 Oct 2021 15:20:41 +0100 Subject: Guest's secondary/virtual IP References: Message-ID: Hi guys. What I expected turns out not to be enough, must be something trivial - what am I missing? I set a port with --allowed-address and on the instance/guest using the port I did: -> $ ip add add 10.0.1.99/24 dev eth1 yet that IP other guest cannot reach. many thanks, L. From kchamart at redhat.com Mon Oct 25 14:49:36 2021 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 25 Oct 2021 16:49:36 +0200 Subject: CentOS-9 guests & 'qemu64' CPU model are incompatible; and reasons to avoid 'qemu64' in general In-Reply-To: References: Message-ID: On Thu, Oct 21, 2021 at 10:56:42AM -0700, Clark Boylan wrote: > On Thu, Oct 21, 2021, at 10:49 AM, Kashyap Chamarthy wrote: [...] > > To fix this, please update the CPU model to "Nehalem". It is the oldest > > CPU model that is compatible with CentOS-9/RHEL-9 "x86-64-v2". Further, > > Nehalem also works with `virt_type=kvm|qemu`, _and_ on both Intel and > > AMD hardware. So this is a good alternative. > > Thank you for looking into this and providing such detailed > information. It has been really helpful. No problem at all. I should've wrote this a bit sooner. [...] > > Why is "qemu64" model undesirable for production? > > ------------------------------------------------- [...] > > An understandable reason why CI systems running in a cloud environment > > go with 'qemu64' is convenience: with 'qemu64', you can live-migrate a > > guest regardless of its underlying hardware (whether it's Intel or AMD). > > That's one main reason why upstream DevStack defaults to it. > > I've got a change up to Devstack to convert it over to Nehalem by > default [5]. So far it looks good, but we will want to recheck it a > few times and make sure we have good test coverage across the clouds > we run testing on just to be sure that the CPUs we get from those > clouds are able to support this CPU type. Good news is that we > successfully built a centos-9-stream image and booted it with the > Nehalem change in place [6]. I see that the DevStack default has now merged. Very cool. If anyone is wondering: "How come the 'Nehalem' QEMU CPU model works on both Intel and AMD hardware?". The answer is the CPU feature flags in Nehalem happened to supported by both Intel and AMD. Joy to us! > > Overall, the thumb-rule here is to either always explicitly specify a > > "sane" CPU model, based on the recommendations here[3]. Or to use > > Nova/libvirt's default ("host-model"). > > Devstack is currently setting cpu_mode to none. Should Nova be updated > to make this result in a better behavior? Is this literally not > passing a cpu mode to libvirt/qemu and allowing them to choose a > default? If so maybe libvirt/qemu need to update their defaults? Yes, if you explicitly set `cpu_mode=none`, that does mean "use whatever is the default of QEMU". And no, we cannot update Nova to "result in better behaviour" for `cpu_mode=none` -- it essentially means changing the hypervisor-reported default from "qemu64" to something else in Nova. Changing the default in libvirt/QEMU is also very difficult -- "hysterical raisins" :-(. The reason is the following (thanks, Daniel Berrang?): Historically, QEMU never reported[1] what the default CPU model was. 
So libvirt assumed it was "qemu64". But unfortunately, until very recently[2][3] libvirt didn't expand this into the XML configs. So if upstream QEMU ever changes its default it would impact guests without an explicit XML config for CPU. * * * In any case, FWIW, Daniel also echoes what I noted in my previous email: in practise, both the upstream QEMU and libvirt defaults are "reasonably irrelevant" -- essentially any serious management tool will be setting an CPU model explicitly (Nova sets it to `host-model` for the KVM/QEMU driver). [1] [QEMU] https://gitlab.com/qemu-project/qemu/-/commit/04109957d4 -- qapi: report the default CPU type for each machine [2] https://bugzilla.redhat.com/show_bug.cgi?id=1598151 -- [RFE] Add 'qemu64' as the CPU model if user doesn't supply a element [3] [libvirt] https://gitlab.com/libvirt/libvirt/-/commit/5e939cea89 -- qemu: Store default CPU in domain XML -- /kashyap From katonalala at gmail.com Mon Oct 25 15:50:09 2021 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 25 Oct 2021 17:50:09 +0200 Subject: [neutron] October 2021 - PTG Summary Message-ID: Hi, Below is my summary of the Neutron sessions during last week's PTG. Here is the etherpad with the agenda: https://etherpad.opendev.org/p/neutron-yoga-ptg # Day 1 (Monday Oct 18.) ## Xena retrospective (I just list here the topics, not every discussion under them): * Good things: ** Less meetings ** New features ** CI jobs have again meaningful names ** Housekeeping - new payload style callbacks switch finished ** Healthy team - we have people from various companies, doing reviews and proposing patches * Bad / not so good / change it ** CI stability could be better ** lack of people involved in the CI meetings and CI issues ** is there a chance to decrease the number of jobs/backends to be * Actions coming out from the above: ** Having video based team meetings every few weeks. ** Video meeting with screen sharing for CI meeting every few weeks. ** ping people before the meetings ## Should we move some parts of neutron devstack plugin to devstack repo? Slaweq brought this topic to rationalize devstack plugin code between neutron and devstack. As a rule of thumb: "what is used/tested by jobs from other repositories (like tempest) should be in devstack repo and not in neutron's devstack plugin". The bad part is that we should loose review velocity if there's nobody to review in devstack. As a consequence some code should be moved back from devstack to neutron's devstack plugin (for drivers that are not that common now like linuxbridge) ## ML2/OVS -> ML2/OVN migration - presentation (continued on Friday) We got an overview from Jakub (jlibosva) of how the migration tool works now, what is finished and which are the possible directions to improve it. https://docs.google.com/presentation/d/1dCdvoi-Cdbl27AL0wjFJ_51NatOxepNOplm4lcehOeI/edit?usp=sharing * Problematic parts: ** steps that require the tool to touch the db (no API currently) ** no rollback ** no CI (at least upstream) ** no devstack based tools (related to the previous one) * Actions ** New job for ovs-to-ovn migration *** Now start with tripleo based job and go for a devstack based one ** Don't create new APIs for job type change and other operations during the migration as it seems there is no other usecase for them. ** Add developer documentation for the migration (preparation, steps, rollback....) ** add automatic rollback # Day 2 (Tuesday Oct. 19.) 
## tempest_neutron_plugin - a RedHat downstream tests We got an overview of these tests from Eduardo Olivares. * tests requiring advance images ** move them to neutron-tempest-plugin with special tag and run them periodically ** use ubuntu cloud minimal image * tests requiring multinode topology + capturing traffic on hosts/computes ** These tests use a node to run tempest (tripleo based deployment) ** these tests should be changed to run in upstream CI * Tests that restart nodes / services / containers ** better not to run them in upstream CI ** check if it is possible to migrate some to tobiko ( https://opendev.org/x/tobiko ) * sriov tests (macvtap, direct and direct-physical ports) ** No possibility today to run these upstream. ## Edge session (https://etherpad.opendev.org/p/ecg-ptg-october-2021 ) * Thursday we had a session together with the Designate team where we discussed possible action items related to edge use cases. ## RBAC together with Nova * https://etherpad.opendev.org/p/nova-yoga-ptg l108. - l137. # Day 3 (Wednesday Oct. 20.) ## Job reorganization recap * We need fresh data to see where we still have pain points. * Ongoing CI rationalization items: ** Use Ubuntu minimal image where possible. ** Serialize the run of tests with advanced images (reduce memory usage): *** https://review.opendev.org/c/openstack/tempest/+/807790 ## ryu / os-ken Recently we realized that ryu has some activity to fix bugs that can hit Neutron as well (as we forked ryu as os-ken). * regularly checking ryu bugs and fixes (sync on team meeting) * Do not return to ryu as os-ken is adopted for Openstack workflow with CI, etc... ## Nova-Neutron cross-project session * OVN live-migration (https://etherpad.opendev.org/p/ovn_live_migration ) * RBAC continued (https://etherpad.opendev.org/p/nova-yoga-ptg l144 - l165) * libvirt virt driver does not wait for network-vif-plugged event during hard reboot ** document which backend sends what event and when ** workaround in nova for telling whether we should wait for an event ( https://review.opendev.org/c/openstack/nova/+/813419 ) ** long-term solution has to be Neutron providing event information thru the port binding information ** Improved testing of move operations *** Nova will keep one ML2/OVS multinode job running ** Supporting ports with QoS min bw and min pps in multi segment networks with multiple physnets *** finish the ongoing QoS min pps support *** extend placement to support any-trait queries ## Interop with Martin Kopec * Check which tests should be included to refstack * https://etherpad.opendev.org/p/refstack-test-analysis ## Quick introduction of the Tobiko test tool We got an overview of Tobiko from Frederico Ressi, and how it can be used to test scenarios which are destructive. * https://tobiko.readthedocs.io/en/master/ * http://kaplonski.pl/files/Tobiko_-_OpenStack_upgrade_and_disruptive_testing.pdf * Neutron already has a periodic job with tobiko. # Day 4 (Thursday Oct. 21.) ## OVN and BGP * https://opendev.org/x/ovn-bgp-agent ** not just for neutron-dynamic-routing but for networking-bgpvpn as well. ** ovn-bgp-agent is another backend which could be behind neutron-dynamic-routing, and uses FRR. ** In the future it will be a Neutron stadium project ** need more tests, integration with Neutron and better documentation. * Make neutron-dynamic-routing work with OVN ** Frickler and bcafarel checked if the current jobs and tests can work with OVN. ** more investigation is necessary if it works as is now or some change is needed. 
** Redhat and Ericsson help in the maintenance of neutron-dynamic-routing. ## Multi segments and provider networks on a host, with 1 subnet per segment * different from the usual routed network use case where we have one segment available in each rack * WIP patch which was never finished: ** https://review.opendev.org/c/openstack/neutron/+/623115 ## Use openstacksdk as client for Heat * Now Heat uses python-neutronclient for python bindings / client (not the CLI part which is deprecated but the python bindings) ** We collected a buch of projects over than heat which uses python-neutronclient: Nova, Horizon, Rally.... ** Actions: *** Write mail about it to have the attention of the community *** Have a migration plan and document it. *** Have a slot during the team meeting to track progress. ## Designate session The topics coming from Designate team were really helpful to translate the requirements from edge session to action items for Neutron. * https://etherpad.opendev.org/p/octavia-designate-neutron-ptg ** dns resolver is not specified for a subnet, (RFE for OVN as it works only for OVS currently) ** initiate a cross-project documentation effort for setting up an edge site ** initiate cross-project job for testing an edge site # Day 5 (Friday Oct. 22.) ## Continued discussion of neutron-dynamic-routing's future * Who will review new features (Both Redhat and Ericsson will increase review attention for this project) * Fixing the scheduler: ** New scheduler whith which the user can select which host/dragent to schedule ** https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/780675 * OVN and neutron-dynamic-routing ** add the missing features to the OVN gap list ( https://docs.openstack.org/neutron/latest/ovn/gaps.html ) ## RBAC discussion continued with TC * https://etherpad.opendev.org/p/tc-yoga-ptg ## ML2/OVS -> ML2/OVN migration - continue We continued the discussion from Monday, on how to integrate the ovs-ovn migration tool to Neutron CI. ## Team photo: https://photos.app.goo.gl/8ERv59XYoFVQz5oT7 I would like to thank you for the great discussions during last week, it was really useful and productive week. Based on these we have some plan what we will work on in the Yoga cycle. Lajos Katona (lajoskatona) Ericsson Software Technologies -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Mon Oct 25 16:22:55 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 25 Oct 2021 09:22:55 -0700 Subject: Octavia loadbalancer status offline In-Reply-To: References: Message-ID: Hi Munna, I am guessing you are seeing the operating status offline? This is commonly caused by the amphora being unable to reach the health manager process. Another symptom of this is the statistics for the load balancer will not increase. Some things to check: 1. Is your controller IP and port list correct? https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list 2. Are you seeing the heartbeat packets arrive on the network interface on your health manager instance? 3. Is the health manager log reporting any issues, such as an incorrect heartbeat key? 4. If you enable debug logging on the health manager, do you see log messages indicating the health manager has received heartbeat packets from the amphora? "Received packet from" Michael On Mon, Oct 25, 2021 at 5:30 AM Md. Hejbul Tawhid MUNNA wrote: > > Hi, > > We have installed openstack ussuri version from ubuntu universe repository. 
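To make items 1 and 2 of the checklist above concrete, here is a minimal sketch of the health-manager settings the amphora heartbeats depend on. It is not taken from this thread; the heartbeat_key value is a placeholder, and the 172.16.0.2 address is only borrowed from the report quoted here for illustration.

# /etc/octavia/octavia.conf on the health manager node
[health_manager]
# IP:port pairs the amphorae send their UDP heartbeats to (port 5555 by default);
# every health manager in the deployment should be listed here.
controller_ip_port_list = 172.16.0.2:5555
bind_ip = 172.16.0.2
bind_port = 5555
# Must match the key the running amphorae were configured with, otherwise the
# health manager rejects the heartbeats (see item 3) and the operating status
# stays OFFLINE even though traffic is balanced.
heartbeat_key = insecure-example-key

A quick way to verify item 2 is to watch the interface for heartbeat traffic, e.g. "tcpdump -ni o-hm0 udp port 5555" on the health manager host.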
> > We have installed octavia 6.2.0 version. > > after creating loadbalancer , listener and pool all are offline. but the LB operation is working as expected. changing the pool member is also working. > > > octavia is installed in compute node. 5555 is listening and allowed in iptables (iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT). > > amphora to octavia-worker(172.16.0.2) is reachable. > > Any idea to troubleshoot this issue > > > Please find the log from octavia-worker > > > > ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// > 2021-10-25 18:15:13.482 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:15:18.490 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:15:23.507 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:15:28.511 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:15:33.521 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:15:38.529 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:15:43.540 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. 
Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:15:48.549 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:15:53.554 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:15:58.561 1192307 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) > 2021-10-25 18:16:04.707 1192307 INFO octavia.controller.worker.v1.tasks.database_tasks [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ALLOCATED in DB for amphora: c661b828-1690-4866-8152-f745c43e0977 with compute id c9133819-b8e0-42d6-9544-bf83e3ad4b3f for load balancer: c0bd3e21-6983-40c9-8713-859194496b37 > 2021-10-25 18:16:40.660 1192307 INFO octavia.controller.worker.v1.tasks.database_tasks [req-78e78f29-3bdb-4b12-ae76-b8daa4926c09 - 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ACTIVE in DB for load balancer id: c0bd3e21-6983-40c9-8713-859194496b37 > 2021-10-25 18:16:44.317 1192307 INFO octavia.controller.queue.v1.endpoints [-] Creating listener 'cc45192d-de70-4d59-857b-ac23c4fc8d07'... > 2021-10-25 18:16:44.325 1192307 WARNING octavia.controller.worker.v1.controller_worker [-] Failed to fetch listener cc45192d-de70-4d59-857b-ac23c4fc8d07 from DB. Retrying for up to 60 seconds. > 2021-10-25 18:17:35.375 1192307 INFO octavia.controller.queue.v1.endpoints [-] Creating pool '9d23855f-d849-4ad9-9de1-66ab5cd268eb'... > 2021-10-25 18:17:35.382 1192307 WARNING octavia.controller.worker.v1.controller_worker [-] Failed to fetch pool 9d23855f-d849-4ad9-9de1-66ab5cd268eb from DB. Retrying for up to 60 seconds. > 2021-10-25 18:18:02.814 1192307 INFO octavia.controller.queue.v1.endpoints [-] Creating member '29bb41e5-457c-43ba-9149-5af55e73fe38'... > 2021-10-25 18:18:02.825 1192307 WARNING octavia.controller.worker.v1.controller_worker [-] Failed to fetch member 29bb41e5-457c-43ba-9149-5af55e73fe38 from DB. Retrying for up to 60 seconds. 
> > Regards, > Munna From wodel.youchi at gmail.com Sun Oct 24 21:04:50 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Sun, 24 Oct 2021 22:04:50 +0100 Subject: [magnum] [fcos33] In-Reply-To: <1658669092.121319863.1635076896727.JavaMail.zimbra@tubitak.gov.tr> References: <1135213573.121319386.1635076726264.JavaMail.zimbra@tubitak.gov.tr> <1658669092.121319863.1635076896727.JavaMail.zimbra@tubitak.gov.tr> Message-ID: Hi, Try this link : https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64 then search for 33.20210426.3.0 Then scroll down. Regards. Le dim. 24 oct. 2021 ? 17:28, Yasemin DEM?RAL (BILGEM BTE) < yasemin.demiral at tubitak.gov.tr> a ?crit : > Hi, > > How can I dowloand fcos 33? I can't find any link for dowloanding it. > > *Yasemin DEM?RAL* > > > Senior Researcher at TUBITAK BILGEM B3LAB > > Safir Cloud Scrum Master > > > ------------------------------ > *Kimden: *"Vikarna Tathe" > *Kime: *"Ammad Syed" > *Kk: *"openstack-discuss" > *G?nderilenler: *19 Ekim Sal? 2021 16:23:20 > *Konu: *Re: Openstack magnum > > Hi Ammad, > Thanks!!! It worked. > > On Tue, 19 Oct 2021 at 15:00, Vikarna Tathe > wrote: > >> Hi Ammad, >> Yes, fcos34. Let me try with fcos33. Thanks >> >> On Tue, 19 Oct 2021 at 14:52, Ammad Syed wrote: >> >>> Hi, >>> >>> Which fcos image you are using ? It looks like you are using fcos 34. >>> Which is currently not supported. Use fcos 33. >>> >>> On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe >>> wrote: >>> >>>> Hi All, >>>> I was able to login to the instance. I see that kubelet service is in >>>> activating state. When I checked the journalctl, found the below. >>>> >>>> >>>> >>>> >>>> >>>> >>>> *Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: >>>> Started Kubelet via Hyperkube (System Container).Oct 19 05:18:34 >>>> kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs >>>> /sys/fs/cgroup/systemd: no such file or directoryOct 19 05:18:34 >>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main >>>> process exited, code=exited, status=125/n/aOct 19 05:18:34 >>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >>>> Failed with result 'exit-code'.Oct 19 05:18:44 >>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >>>> Scheduled restart job, restart counter is at 18.Oct 19 05:18:44 >>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via >>>> Hyperkube (System Container).* >>>> >>>> Executed the below command to fix this issue. >>>> *mkdir -p /sys/fs/cgroup/systemd* >>>> >>>> >>>> Now I am getiing the below error. Has anybody seen this issue. >>>> >>>> >>>> >>>> *failed to get the kubelet's cgroup: mountpoint for cpu not found. >>>> Kubelet system container metrics may be missing.failed to get the container >>>> runtime's cgroup: failed to get container name for docker process: >>>> mountpoint for cpu not found. failed to run Kubelet: mountpoint for not >>>> found* >>>> >>>> On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe >>>> wrote: >>>> >>>>> >>>>>> Hi Ammad, >>>>>> Thanks for responding. >>>>>> >>>>>> Yes the instance is getting created, but i am unable to login >>>>>> though i have generated the keypair. There is no default password for this >>>>>> image to login via console. 
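To go with the fcos 33 suggestion above, here is a rough sketch of fetching that exact build and registering it so Magnum can use it. The download URL follows the usual layout of the builds page linked above (double-check it there), and everything except the os_distro property, which is what Magnum keys on, is a placeholder to adapt (image name, keypair, flavors, external network). As far as I can tell, the "statfs /sys/fs/cgroup/systemd" failure quoted earlier in the thread is what kubelet prints when the guest boots with cgroups v2, which Fedora CoreOS switched to in release 34, hence the advice to stay on 33.

    # Fetch and decompress the FCOS 33 OpenStack image (verify the URL on the builds page)
    curl -LO https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/33.20210426.3.0/x86_64/fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2.xz
    unxz fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2.xz

    # Register it in Glance; os_distro='fedora-coreos' is what selects the Magnum driver
    openstack image create fedora-coreos-33 \
      --disk-format qcow2 --container-format bare \
      --property os_distro='fedora-coreos' \
      --file fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2

    # Point the cluster template at that image and at a keypair you own, so the
    # nodes can later be reached with "ssh core@<node-ip>"
    openstack coe cluster template create k8s-fcos33 \
      --image fedora-coreos-33 --keypair mykey \
      --external-network public --flavor m1.large --master-flavor m1.large \
      --network-driver flannel --coe kubernetes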
>>>>>> >>>>>> openstack server list >>>>>> >>>>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>>>> | ID | Name >>>>>> | Status | Networks | Image >>>>>> | Flavor | >>>>>> >>>>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>>>> | cf955a75-8cd2-4f91-a01f-677159b57cb2 | >>>>>> k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, >>>>>> 10.14.20.181 | fedora-coreos-latest | m1.large | >>>>>> >>>>>> >>>>>> ssh -i id_rsa core at 10.14.20.181 >>>>>> The authenticity of host '10.14.20.181 (10.14.20.181)' can't be >>>>>> established. >>>>>> ECDSA key fingerprint is >>>>>> SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. >>>>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? >>>>>> yes >>>>>> Warning: Permanently added '10.14.20.181' (ECDSA) to the list of >>>>>> known hosts. >>>>>> core at 10.14.20.181: Permission denied >>>>>> (publickey,gssapi-keyex,gssapi-with-mic). >>>>>> >>>>>> On Mon, 18 Oct 2021 at 14:02, Ammad Syed >>>>>> wrote: >>>>>> >>>>>>> Hi, >>>>>>> Can you check if the master server is deployed as a nova instance ? >>>>>>> if yes, then login to the instance and check cloud-init and heat agent logs >>>>>>> to see the errors. >>>>>>> >>>>>>> Ammad >>>>>>> >>>>>>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe < >>>>>>> vikarnatathe at gmail.com> wrote: >>>>>>> >>>>>>>> Hello All, >>>>>>>> I am trying to create a kubernetes cluster using magnum. Image: >>>>>>>> fedora-coreos. >>>>>>>> >>>>>>>> >>>>>>>> The stack gets stucked in CREATE_IN_PROGRESS. See the output below. >>>>>>>> openstack coe cluster list >>>>>>>> >>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>>> | uuid | name | keypair | >>>>>>>> node_count | master_count | status | health_status | >>>>>>>> >>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | >>>>>>>> 2 | 1 | CREATE_IN_PROGRESS | None | >>>>>>>> >>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>>> >>>>>>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb >>>>>>>> kube_masters >>>>>>>> >>>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>> | Field | Value >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> >>>>>>>> 
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>> | attributes | {'refs_map': None, 'removed_rsrc_list': >>>>>>>> [], 'attributes': None, 'refs': None} >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | creation_time | 2021-10-18T06:44:02Z >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | description | >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | links | [{'href': ' >>>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', >>>>>>>> 'rel': 'self'}, {'href': ' >>>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', >>>>>>>> 'rel': 'stack'}, {'href': ' >>>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', >>>>>>>> 'rel': 'nested'}] | >>>>>>>> | logical_resource_id | kube_masters >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | required_by | ['kube_cluster_deploy', >>>>>>>> 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | resource_name | kube_masters >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | resource_status | CREATE_IN_PROGRESS >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | resource_status_reason | state changed >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | resource_type | OS::Heat::ResourceGroup >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | updated_time | 2021-10-18T06:44:02Z >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> >>>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>> >>>>>>>> Vikarna >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Regards, >>>>>>> >>>>>>> Syed Ammad Ali >>>>>>> >>>>>> -- >>> Regards, >>> >>> Syed Ammad Ali >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yasemin.demiral at tubitak.gov.tr Mon Oct 25 08:55:21 2021 From: yasemin.demiral at tubitak.gov.tr (Yasemin =?utf-8?Q?DEM=C4=B0RAL_=28BILGEM_BTE=29?=) Date: Mon, 25 Oct 2021 11:55:21 +0300 (EET) Subject: [magnum] [fcos33] In-Reply-To: References: <1135213573.121319386.1635076726264.JavaMail.zimbra@tubitak.gov.tr> <1658669092.121319863.1635076896727.JavaMail.zimbra@tubitak.gov.tr> Message-ID: <103116119.121724269.1635152121703.JavaMail.zimbra@tubitak.gov.tr> Hi, Thank you, i can dowloanded that image, I can build kubernetes cluster this image but I can't connect the master node with SSH. How can I connect kubernetes cluster ? Regards Yasemin DEM?RAL Senior Researcher at TUBITAK BILGEM B3LAB Safir Cloud Scrum Master Kimden: "wodel youchi" Kime: "Yasemin DEM?RAL, B?LGEM BTE" Kk: "openstack-discuss" , "Ammad Syed" , "Vikarna Tathe" G?nderilenler: 25 Ekim Pazartesi 2021 0:04:50 Konu: Re: [magnum] [fcos33] Hi, Try this link : [ https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64 | https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64 ] then search for 33.20210426.3.0 Then scroll down. Regards. Le dim. 24 oct. 2021 ? 17:28, Yasemin DEM?RAL (BILGEM BTE) < [ mailto:yasemin.demiral at tubitak.gov.tr | yasemin.demiral at tubitak.gov.tr ] > a ?crit : Hi, How can I dowloand fcos 33? I can't find any link for dowloanding it. Yasemin DEM?RAL [ http://www.tubitak.gov.tr/tr/icerik-sorumluluk-reddi ] Senior Researcher at TUBITAK BILGEM B3LAB Safir Cloud Scrum Master Kimden: "Vikarna Tathe" < [ mailto:vikarnatathe at gmail.com | vikarnatathe at gmail.com ] > Kime: "Ammad Syed" < [ mailto:syedammad83 at gmail.com | syedammad83 at gmail.com ] > Kk: "openstack-discuss" < [ mailto:openstack-discuss at lists.openstack.org | openstack-discuss at lists.openstack.org ] > G?nderilenler: 19 Ekim Sal? 2021 16:23:20 Konu: Re: Openstack magnum Hi Ammad, Thanks!!! It worked. On Tue, 19 Oct 2021 at 15:00, Vikarna Tathe < [ mailto:vikarnatathe at gmail.com | vikarnatathe at gmail.com ] > wrote: BQ_BEGIN Hi Ammad, Yes, fcos34. Let me try with fcos33. Thanks On Tue, 19 Oct 2021 at 14:52, Ammad Syed < [ mailto:syedammad83 at gmail.com | syedammad83 at gmail.com ] > wrote: BQ_BEGIN Hi, Which fcos image you are using ? It looks like you are using fcos 34. Which is currently not supported. Use fcos 33. On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe < [ mailto:vikarnatathe at gmail.com | vikarnatathe at gmail.com ] > wrote: BQ_BEGIN Hi All, I was able to login to the instance. I see that kubelet service is in activating state. When I checked the journalctl, found the below. Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Started Kubelet via Hyperkube (System Container). Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs /sys/fs/cgroup/systemd: no such file or directory Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=125/n/a Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via Hyperkube (System Container). Executed the below command to fix this issue. mkdir -p /sys/fs/cgroup/systemd Now I am getiing the below error. Has anybody seen this issue. 
failed to get the kubelet's cgroup: mountpoint for cpu not found. Kubelet system container metrics may be missing. failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. failed to run Kubelet: mountpoint for not found On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe < [ mailto:vikarnatathe at gmail.com | vikarnatathe at gmail.com ] > wrote: BQ_BEGIN BQ_BEGIN Hi Ammad, Thanks for responding. Yes the instance is getting created, but i am unable to login though i have generated the keypair. There is no default password for this image to login via console. openstack server list +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ | cf955a75-8cd2-4f91-a01f-677159b57cb2 | k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, 10.14.20.181 | fedora-coreos-latest | m1.large | ssh -i id_rsa [ mailto:core at 10.14.20.181 | core at 10.14.20.181 ] The authenticity of host '10.14.20.181 (10.14.20.181)' can't be established. ECDSA key fingerprint is SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known hosts. [ mailto:core at 10.14.20.181 | core at 10.14.20.181 ] : Permission denied (publickey,gssapi-keyex,gssapi-with-mic). On Mon, 18 Oct 2021 at 14:02, Ammad Syed < [ mailto:syedammad83 at gmail.com | syedammad83 at gmail.com ] > wrote: BQ_BEGIN Hi, Can you check if the master server is deployed as a nova instance ? if yes, then login to the instance and check cloud-init and heat agent logs to see the errors. Ammad On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe < [ mailto:vikarnatathe at gmail.com | vikarnatathe at gmail.com ] > wrote: BQ_BEGIN Hello All, I am trying to create a kubernetes cluster using magnum. Image: fedora-coreos. The stack gets stucked in CREATE_IN_PROGRESS. See the output below. 
openstack coe cluster list +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ | uuid | name | keypair | node_count | master_count | status | health_status | +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | 2 | 1 | CREATE_IN_PROGRESS | None | +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | attributes | {'refs_map': None, 'removed_rsrc_list': [], 'attributes': None, 'refs': None} | | creation_time | 2021-10-18T06:44:02Z | | description | | | links | [{'href': ' [ http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters | http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters ] ', 'rel': 'self'}, {'href': ' [ http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17 | http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17 ] ', 'rel': 'stack'}, {'href': ' [ http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028 | http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028 ] ', 'rel': 'nested'}] | | logical_resource_id | kube_masters | | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 | | required_by | ['kube_cluster_deploy', 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] | | resource_name | kube_masters | | resource_status | CREATE_IN_PROGRESS | | resource_status_reason | state changed | | resource_type | OS::Heat::ResourceGroup | | updated_time | 2021-10-18T06:44:02Z | 
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Vikarna -- Regards, Syed Ammad Ali BQ_END BQ_END BQ_END BQ_END -- Regards, Syed Ammad Ali BQ_END BQ_END BQ_END -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Mon Oct 25 09:31:51 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Mon, 25 Oct 2021 10:31:51 +0100 Subject: [magnum] [fcos33] In-Reply-To: <103116119.121724269.1635152121703.JavaMail.zimbra@tubitak.gov.tr> References: <1135213573.121319386.1635076726264.JavaMail.zimbra@tubitak.gov.tr> <1658669092.121319863.1635076896727.JavaMail.zimbra@tubitak.gov.tr> <103116119.121724269.1635152121703.JavaMail.zimbra@tubitak.gov.tr> Message-ID: Hi, When you create your cluster you can attach an ssh key, so create your own ssh key, push it on openstack and use it with your cluster. you can then ssh to *core@*master-kub-vm-ip with your ssh key. Regards. Le lun. 25 oct. 2021 ? 09:55, Yasemin DEM?RAL (BILGEM BTE) < yasemin.demiral at tubitak.gov.tr> a ?crit : > Hi, > > Thank you, i can dowloanded that image, I can build kubernetes cluster > this image but I can't connect the master node with SSH. How can I connect > kubernetes cluster ? > > Regards > > *Yasemin DEM?RAL* > > Senior Researcher at TUBITAK BILGEM B3LAB > > Safir Cloud Scrum Master > > ------------------------------ > *Kimden: *"wodel youchi" > *Kime: *"Yasemin DEM?RAL, B?LGEM BTE" > *Kk: *"openstack-discuss" , "Ammad > Syed" , "Vikarna Tathe" > *G?nderilenler: *25 Ekim Pazartesi 2021 0:04:50 > *Konu: *Re: [magnum] [fcos33] > > Hi, > > Try this link : > https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64 > then search for 33.20210426.3.0 > > Then scroll down. > > Regards. > > Le dim. 24 oct. 2021 ? 17:28, Yasemin DEM?RAL (BILGEM BTE) < > yasemin.demiral at tubitak.gov.tr> a ?crit : > >> Hi, >> >> How can I dowloand fcos 33? I can't find any link for dowloanding it. >> >> *Yasemin DEM?RAL* >> >> >> Senior Researcher at TUBITAK BILGEM B3LAB >> >> Safir Cloud Scrum Master >> >> >> ------------------------------ >> *Kimden: *"Vikarna Tathe" >> *Kime: *"Ammad Syed" >> *Kk: *"openstack-discuss" >> *G?nderilenler: *19 Ekim Sal? 2021 16:23:20 >> *Konu: *Re: Openstack magnum >> >> Hi Ammad, >> Thanks!!! It worked. >> >> On Tue, 19 Oct 2021 at 15:00, Vikarna Tathe >> wrote: >> >>> Hi Ammad, >>> Yes, fcos34. Let me try with fcos33. Thanks >>> >>> On Tue, 19 Oct 2021 at 14:52, Ammad Syed wrote: >>> >>>> Hi, >>>> >>>> Which fcos image you are using ? It looks like you are using fcos 34. >>>> Which is currently not supported. Use fcos 33. >>>> >>>> On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe >>>> wrote: >>>> >>>>> Hi All, >>>>> I was able to login to the instance. I see that kubelet service is in >>>>> activating state. When I checked the journalctl, found the below. 
>>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> *Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: >>>>> Started Kubelet via Hyperkube (System Container).Oct 19 05:18:34 >>>>> kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs >>>>> /sys/fs/cgroup/systemd: no such file or directoryOct 19 05:18:34 >>>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main >>>>> process exited, code=exited, status=125/n/aOct 19 05:18:34 >>>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >>>>> Failed with result 'exit-code'.Oct 19 05:18:44 >>>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >>>>> Scheduled restart job, restart counter is at 18.Oct 19 05:18:44 >>>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via >>>>> Hyperkube (System Container).* >>>>> >>>>> Executed the below command to fix this issue. >>>>> *mkdir -p /sys/fs/cgroup/systemd* >>>>> >>>>> >>>>> Now I am getiing the below error. Has anybody seen this issue. >>>>> >>>>> >>>>> >>>>> *failed to get the kubelet's cgroup: mountpoint for cpu not found. >>>>> Kubelet system container metrics may be missing.failed to get the container >>>>> runtime's cgroup: failed to get container name for docker process: >>>>> mountpoint for cpu not found. failed to run Kubelet: mountpoint for not >>>>> found* >>>>> >>>>> On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe >>>>> wrote: >>>>> >>>>>> >>>>>>> Hi Ammad, >>>>>>> Thanks for responding. >>>>>>> >>>>>>> Yes the instance is getting created, but i am unable to login >>>>>>> though i have generated the keypair. There is no default password for this >>>>>>> image to login via console. >>>>>>> >>>>>>> openstack server list >>>>>>> >>>>>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>>>>> | ID | Name >>>>>>> | Status | Networks | Image >>>>>>> | Flavor | >>>>>>> >>>>>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>>>>> | cf955a75-8cd2-4f91-a01f-677159b57cb2 | >>>>>>> k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, >>>>>>> 10.14.20.181 | fedora-coreos-latest | m1.large | >>>>>>> >>>>>>> >>>>>>> ssh -i id_rsa core at 10.14.20.181 >>>>>>> The authenticity of host '10.14.20.181 (10.14.20.181)' can't be >>>>>>> established. >>>>>>> ECDSA key fingerprint is >>>>>>> SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. >>>>>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? >>>>>>> yes >>>>>>> Warning: Permanently added '10.14.20.181' (ECDSA) to the list of >>>>>>> known hosts. >>>>>>> core at 10.14.20.181: Permission denied >>>>>>> (publickey,gssapi-keyex,gssapi-with-mic). >>>>>>> >>>>>>> On Mon, 18 Oct 2021 at 14:02, Ammad Syed >>>>>>> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> Can you check if the master server is deployed as a nova instance ? >>>>>>>> if yes, then login to the instance and check cloud-init and heat agent logs >>>>>>>> to see the errors. >>>>>>>> >>>>>>>> Ammad >>>>>>>> >>>>>>>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe < >>>>>>>> vikarnatathe at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hello All, >>>>>>>>> I am trying to create a kubernetes cluster using magnum. Image: >>>>>>>>> fedora-coreos. >>>>>>>>> >>>>>>>>> >>>>>>>>> The stack gets stucked in CREATE_IN_PROGRESS. See the output >>>>>>>>> below. 
>>>>>>>>> openstack coe cluster list >>>>>>>>> >>>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>>>> | uuid | name | keypair >>>>>>>>> | node_count | master_count | status | health_status | >>>>>>>>> >>>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>>>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey >>>>>>>>> | 2 | 1 | CREATE_IN_PROGRESS | None | >>>>>>>>> >>>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>>>> >>>>>>>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb >>>>>>>>> kube_masters >>>>>>>>> >>>>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>>> | Field | Value >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> >>>>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>>> | attributes | {'refs_map': None, 'removed_rsrc_list': >>>>>>>>> [], 'attributes': None, 'refs': None} >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> | creation_time | 2021-10-18T06:44:02Z >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> | description | >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> | links | [{'href': ' >>>>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', >>>>>>>>> 'rel': 'self'}, {'href': ' >>>>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', >>>>>>>>> 'rel': 'stack'}, {'href': ' >>>>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', >>>>>>>>> 'rel': 'nested'}] | >>>>>>>>> | logical_resource_id | kube_masters >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> | required_by | ['kube_cluster_deploy', >>>>>>>>> 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> 
>>>>>>>>> | >>>>>>>>> | resource_name | kube_masters >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> | resource_status | CREATE_IN_PROGRESS >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> | resource_status_reason | state changed >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> | resource_type | OS::Heat::ResourceGroup >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> | updated_time | 2021-10-18T06:44:02Z >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> | >>>>>>>>> >>>>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>>> >>>>>>>>> Vikarna >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Regards, >>>>>>>> >>>>>>>> Syed Ammad Ali >>>>>>>> >>>>>>> -- >>>> Regards, >>>> >>>> Syed Ammad Ali >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Mon Oct 25 14:51:45 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Mon, 25 Oct 2021 15:51:45 +0100 Subject: kolla-ansible wallaby manila ceph pacific In-Reply-To: References: Message-ID: Hi, I tried with pacific then with octopus, the same problem. The patch was applied to kolla-ansible. Regards. Le ven. 22 oct. 2021 00:34, Goutham Pacha Ravi a ?crit : > > > On Thu, Oct 21, 2021 at 1:56 AM wodel youchi > wrote: > >> Hi, >> >> I did that already, I changed the keyring to "*ceph auth get-or-create >> client.manila -o manila.keyring mgr 'allow rw' mon 'allow r'*" it didn't >> work, then I tried with ceph octopus, same error. >> I applied the patch, then I recreated the keyring for manila as wallaby >> documentation, I get the error "*Bad target type 'mon-mgr'*" >> > > Thanks, the error seems similar to this issue: > https://tracker.ceph.com/issues/51039 > > Can you confirm the ceph version installed? On the ceph side, some changes > land after GA and get back ported; > > > >> >> Regards. >> >> Le jeu. 21 oct. 2021 ? 05:29, Buddhika Godakuru a >> ?crit : >> >>> Dear Wodel, >>> I think this is because manila has changed the way how to set/create >>> auth ID in Wallaby for native CephFS driver. >>> For the patch to work, you should change the command >>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, allow >>> rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>> to something like, >>> ceph auth get-or-create client.manila -o manila.keyring mgr 'allow >>> rw' mon 'allow r' >>> >>> Please see Manila Wallaby CephFS Driver document [1] >>> >>> Hope this helps. >>> >>> Thank you >>> [1] >>> https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html#authorizing-the-driver-to-communicate-with-ceph >>> >>> On Wed, 20 Oct 2021 at 23:19, wodel youchi >>> wrote: >>> >>>> Hi, and thanks >>>> >>>> I tried to apply the patch, but it didn't work, this is the >>>> manila-share.log. 
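Following up on the question above about the installed Ceph version: the "Bad target type 'mon-mgr'" string comes from ceph_argparse inside the manila-share container, so it is worth comparing the Ceph client code shipped in the kolla image with the cluster itself. A small sketch, assuming the container is called manila_share and docker is the engine (adjust for podman or Ubuntu-based images):

    # On a Ceph node: release actually running on the cluster
    ceph versions

    # Ceph client packages and python bindings baked into the manila-share image;
    # ceph_argparse is the module raising the error in the traceback below
    docker exec manila_share rpm -qa | grep -i ceph      # or: dpkg -l | grep -i ceph
    docker exec manila_share python3 -c "import ceph_argparse, rados; print(rados.__file__)"

    # Keyring as described in the Wallaby manila CephFS documentation
    ceph auth get-or-create client.manila -o manila.keyring \
        mgr 'allow rw' mon 'allow r'

If the bindings inside the container predate the release that introduced the mon-mgr command target, upgrading or rebuilding the image against a newer Ceph repository is the more likely fix than changing the keyring again.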
>>>> By the way, I did change to caps for the manila client to what is said >>>> in wallaby documentation, that is : >>>> [client.manila] >>>> key = keyyyyyyyy..... >>>> >>>> * caps mgr = "allow rw" caps mon = "allow r"* >>>> >>>> [root at ControllerA manila]# cat manila-share.log >>>> 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] Skipping >>>> periodic task update_share_usage_size because it is disabled >>>> 2021-10-20 10:03:22.310 7 INFO oslo_service.service >>>> [req-5b253656-4fe2-4087-b4ab-9ba2a8a0443f - - - - -] Starting 1 workers >>>> 2021-10-20 10:03:22.315 30 INFO manila.service [-] Starting >>>> manila-share node (version 12.0.1) >>>> 2021-10-20 10:03:22.320 30 INFO manila.share.drivers.cephfs.driver >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >>>> h client found, connecting... >>>> 2021-10-20 10:03:22.368 30 INFO manila.share.drivers.cephfs.driver >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >>>> h client connection complete. >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>> during i >>>> n*itialization* >>>> >>>> * of driver CephFSDriver at ControllerA@cephfsnative1: >>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>> volume ls, argdict={'format': 'json'} - exception message: Bad target type >>>> 'mon-mgr'. 2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback >>>> (most recent call last): * >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 191, in rados_command >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>> timeout=RADOS_TIMEOUT) >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>> command >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager inbuf, >>>> timeout, verbose) >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>> command_retry >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager return >>>> send_command(*args, **kwargs) >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>> command >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager During handling >>>> of the above exception, another exception occurred: >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>> ", line 346, in _driver_setup >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>> self.driver.do_setup(ctxt) >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 251, in do_setup >>>> 2021-10-20 10:03:22.372 30 ERROR 
manila.share.manager >>>> volname=self.volname) >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 401, in volname >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>> self.rados_client, "fs volume ls", json_obj=True) >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 205, in rados_command >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >>>> exception.ShareBackendException(msg) >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>> volume >>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>> 'mon-mgr'. >>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>> during i >>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>> target type 'mon-mgr'. >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 191, in rados_command >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> timeout=RADOS_TIMEOUT) >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>> command >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager inbuf, >>>> timeout, verbose) >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>> command_retry >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager return >>>> send_command(*args, **kwargs) >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>> command >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager During handling >>>> of the above exception, another exception occurred: >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>> ", line 346, in _driver_setup >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> self.driver.do_setup(ctxt) >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 251, in do_setup >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> volname=self.volname) >>>> 2021-10-20 10:03:26.379 30 ERROR 
manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 401, in volname >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> self.rados_client, "fs volume ls", json_obj=True) >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 205, in rados_command >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >>>> exception.ShareBackendException(msg) >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>> volume >>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>> 'mon-mgr'. >>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>> during i >>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>> target type 'mon-mgr'. >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 191, in rados_command >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> timeout=RADOS_TIMEOUT) >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>> command >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager inbuf, >>>> timeout, verbose) >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>> command_retry >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager return >>>> send_command(*args, **kwargs) >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>> command >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager During handling >>>> of the above exception, another exception occurred: >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>> ", line 346, in _driver_setup >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> self.driver.do_setup(ctxt) >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 251, in do_setup >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> volname=self.volname) >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>> 
"/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 401, in volname >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> self.rados_client, "fs volume ls", json_obj=True) >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 205, in rados_command >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >>>> exception.ShareBackendException(msg) >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>> volume >>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>> 'mon-mgr'. >>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>> during i >>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>> target type 'mon-mgr'. >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 191, in rados_command >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> timeout=RADOS_TIMEOUT) >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>> command >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager inbuf, >>>> timeout, verbose) >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>> command_retry >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager return >>>> send_command(*args, **kwargs) >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>> command >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager During handling >>>> of the above exception, another exception occurred: >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>> ", line 346, in _driver_setup >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> self.driver.do_setup(ctxt) >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 251, in do_setup >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> volname=self.volname) >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> 
phfs/driver.py", line 401, in volname >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> self.rados_client, "fs volume ls", json_obj=True) >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 205, in rados_command >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >>>> exception.ShareBackendException(msg) >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>> volume >>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>> 'mon-mgr'. >>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>> during i >>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>> target type 'mon-mgr'. >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 191, in rados_command >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>> timeout=RADOS_TIMEOUT) >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>> command >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager inbuf, >>>> timeout, verbose) >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>> command_retry >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager return >>>> send_command(*args, **kwargs) >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>> command >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager During handling >>>> of the above exception, another exception occurred: >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>> ", line 346, in _driver_setup >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>> self.driver.do_setup(ctxt) >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 251, in do_setup >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>> volname=self.volname) >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 401, in volname >>>> 2021-10-20 10:04:22.436 30 ERROR 
manila.share.manager >>>> self.rados_client, "fs volume ls", json_obj=True) >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 205, in rados_command >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >>>> exception.ShareBackendException(msg) >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>> volume >>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>> 'mon-mgr'. >>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>> during i >>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>> target type 'mon-mgr'. >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 191, in rados_command >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> timeout=RADOS_TIMEOUT) >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>> command >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager inbuf, >>>> timeout, verbose) >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>> command_retry >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager return >>>> send_command(*args, **kwargs) >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>> command >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager During handling >>>> of the above exception, another exception occurred: >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>> ", line 346, in _driver_setup >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> self.driver.do_setup(ctxt) >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 251, in do_setup >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> volname=self.volname) >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 401, in volname >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> self.rados_client, "fs volume ls", json_obj=True) >>>> 2021-10-20 
10:05:26.438 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 205, in rados_command >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >>>> exception.ShareBackendException(msg) >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>> volume >>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>> 'mon-mgr'. >>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>> during i >>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>> target type 'mon-mgr'. >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 191, in rados_command >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> timeout=RADOS_TIMEOUT) >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>> command >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager inbuf, >>>> timeout, verbose) >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>> command_retry >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager return >>>> send_command(*args, **kwargs) >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>> command >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager During handling >>>> of the above exception, another exception occurred: >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>> ", line 346, in _driver_setup >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> self.driver.do_setup(ctxt) >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 251, in do_setup >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> volname=self.volname) >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 401, in volname >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> self.rados_client, "fs volume ls", json_obj=True) >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>> 
"/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 205, in rados_command >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >>>> exception.ShareBackendException(msg) >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>> volume >>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>> 'mon-mgr'. >>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>> during i >>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>> target type 'mon-mgr'. >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 191, in rados_command >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> timeout=RADOS_TIMEOUT) >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>> command >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager inbuf, >>>> timeout, verbose) >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>> command_retry >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager return >>>> send_command(*args, **kwargs) >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>> command >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager During handling >>>> of the above exception, another exception occurred: >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback (most >>>> recent call last): >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>> ", line 346, in _driver_setup >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> self.driver.do_setup(ctxt) >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 251, in do_setup >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> volname=self.volname) >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> phfs/driver.py", line 401, in volname >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> self.rados_client, "fs volume ls", json_obj=True) >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>> 
phfs/driver.py", line 205, in rados_command >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>>> exception.ShareBackendException(msg) >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>> volume >>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>> 'mon-mgr'. >>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>> >>>> Regards >>>> >>>> Le mer. 20 oct. 2021 ? 00:14, Goutham Pacha Ravi < >>>> gouthampravi at gmail.com> a ?crit : >>>> >>>>> >>>>> On Tue, Oct 19, 2021 at 2:35 PM wodel youchi >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> Has anyone been successful in deploying Manila wallaby using >>>>>> kolla-ansible with ceph pacific as a backend? >>>>>> >>>>>> I have created the manila client in ceph pacific like this : >>>>>> >>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>> >>>>>> When I deploy, I get this error in manila's log file : >>>>>> Bad target type 'mon-mgr' >>>>>> Any ideas? >>>>>> >>>>> >>>>> Could you share the full log from the manila-share service? >>>>> There's an open bug related to manila/cephfs deployment: >>>>> https://bugs.launchpad.net/kolla-ansible/+bug/1935784 >>>>> Proposed fix: >>>>> https://review.opendev.org/c/openstack/kolla-ansible/+/802743 >>>>> >>>>> >>>>> >>>>> >>>>>> >>>>>> Regards. >>>>>> >>>>> >>> >>> -- >>> >>> ??????? ????? ???????? >>> Buddhika Sanjeewa Godakuru >>> >>> Systems Analyst/Programmer >>> Deputy Webmaster / University of Kelaniya >>> >>> Information and Communication Technology Centre (ICTC) >>> University of Kelaniya, Sri Lanka, >>> Kelaniya, >>> Sri Lanka. >>> >>> Mobile : (+94) 071 5696981 >>> Office : (+94) 011 2903420 / 2903424 >>> >>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> University of Kelaniya Sri Lanka, accepts no liability for the content >>> of this email, or for the consequences of any actions taken on the basis of >>> the information provided, unless that information is subsequently confirmed >>> in writing. If you are not the intended recipient, this email and/or any >>> information it contains should not be copied, disclosed, retained or used >>> by you or any other party and the email and all its contents should be >>> promptly deleted fully from our system and the sender informed. >>> >>> E-mail transmission cannot be guaranteed to be secure or error-free as >>> information could be intercepted, corrupted, lost, destroyed, arrive late >>> or incomplete. >>> >>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon Oct 25 16:50:12 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 25 Oct 2021 18:50:12 +0200 Subject: [nova][placement] Yoga PTG summary Message-ID: Well, it's a try given we discussed for 4 days and it could be a large summary ;) You can see all the notes in a read-only etherpad here : https://etherpad.opendev.org/p/r.e70aa851abf8644c29c8abe4bce32b81 ### Cross-project discussions # Cyborg cross-project discussion with Nova We agreed on adding a OWNED_BY_NOVA trait for all Resource Providers creating by Nova so Cyborg would provide their own OWNED_BY_CYBORG trait for knowing which inventories are used by either Nova or Cyborg. 
Cyborg contributors need to modify https://review.opendev.org/c/openstack/nova-specs/+/780452
We also agreed on the fact that https://blueprints.launchpad.net/nova/+spec/cyborg-suspend-and-resume is a specless blueprint.

# Oslo cross-project discussion with Nova
gmann agreed on providing a new flag for oslopolicy-sample-generator for adding deprecated rules in the generated policy file.

# RBAC popup team discussion with Nova
Eventually, we found no consensus for this topic as there were some open questions left about system-scope. FWIW, the popup team then discussed this with the TC on Friday, so please look at the TC etherpad if you want to know more.
A side impact is on https://review.opendev.org/c/openstack/nova-specs/+/793011 which is now punted until we figure out a good path.

# Neutron cross-project discussion with Nova
Again, about RBAC for the external events API interaction, we discussed the scopes and eventually punted the discussion.
About specific events related to Neutron backends, ralonso accepted to provide documentation explaining which backends send which events, and we accepted to merge https://review.opendev.org/c/openstack/nova/+/813419 as a short-term solution while we would like to get a long-term solution by having Neutron provide the event information via the port binding information.
About testing move operations, we agreed on continuing to have an ML2/OVS multinode job.
During another Nova session, we also agreed on changing libvirt to directly unplug (not unbind) ports during VM shutdown.

# Interop cross-project discussion with Nova
We agreed on reviewing https://review.opendev.org/c/openinfra/interop/+/811049/3/guidelines/2021.11.json

# Cinder cross-project discussion with Nova
We discussed https://blueprints.launchpad.net/nova/+spec/volume-backed-server-rebuild and we said the workflow should be something like user > nova > cinder > nova. We also discussed some upgrade questions for this one, but we eventually agreed on it.
We also discussed devstack-plugin-nfs gate issues and how to contact the nova events API for resize.

# Manila integration with libvirt driver
The first spec looks promising: https://review.opendev.org/c/openstack/nova-specs/+/813180
There were discussions about cloud-init and the Ironic case, but we agreed on the smaller scope for the Yoga release that's proposed in the spec.

### Nova specific topics ###

# Xena retrospective
We agreed on stopping the Asian-friendly meeting timeslot we held once per month as, unfortunately, no contributors attended those meetings.
We also agreed on modifying the review-priority label to enable contributors to use it too, but we first need to provide documentation explaining it before we change Gerrit.

# Continuing to use Storyboard for Placement or not?
We eventually said we would look at how to create a script for moving Storyboard (SB) stories to Launchpad (as either features or bugs), but we still want to continue verifying whether contributors use Storyboard for Placement bugs or feature requests, and bauzas accepted to provide visibility on Placement SB stories during the weekly meeting.
gmann also agreed on asking contributors to move off the #openstack-placement IRC channel to #openstack-nova, so we would delete this IRC channel eventually during this cycle.

# Yoga release deadlines for Nova
No real change in deadlines compared to the Xena timeframe.
We agreed on having two spec review days, one around mid-Nov (before yoga-1) and one around mid-Dec (before the Christmas period) in order to prioritize implementation reviews after Jan 1st, even if we can continue to review specs until yoga-2.

# python version pinning with tox
We agreed on the fact this is a pain for developers. We then had a consensus on modifying tox to accept multiple python versions automatically with https://review.opendev.org/c/openstack/nova/+/804292 so it would fix the issue.
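(To illustrate the direction, and only as a sketch of one way tox can do this, not necessarily what the change linked above implements: tox 3.1+ can be told to stop failing on a pinned interpreter and let the py3X factors pick their own python:)

  [tox]
  # let py38/py39/... environments use their factor-implied interpreter
  # instead of conflicting with the basepython pin below
  ignore_basepython_conflict = True

  [testenv]
  basepython = python3

With that, "tox -e py39" runs under python3.9 while environments such as pep8 keep using the default python3 found on the system.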
# Nova DB issues with DB collation
We agreed on the fact it's a problem, so we'll document the fact that Nova APIs are case-insensitive at the moment even if Python is case-sensitive, which creates problems.
What we propose in order to fix this is to provide a nova-manage db upgrade command that would modify the DB tables to use COLLATE utf8_bin, but we also agree we can't ask operators to migrate within one cycle, and we accept the fact this command could be there for a while.

# SQLAlchemy 2.0
The wave is coming and we need contributors to help us change what we have in our nova DB modules to no longer use the deprecated calls. This looks like low-hanging fruit and I'll try to find some contributors for this.

# Bumping minimum microversion from v2.1
No, we said no because $users use it.

# Unified limits
We agreed on providing a read-only API for knowing the limits but *not* providing proxy APIs for setting limits *as a first attempt*. We prefer operators to come back with use cases and feedback from their use of Unified Limits 1.0 before we start drafting some proxy API to Keystone. Also, we agreed on *not* having config-driven quotas.

# Central vncproxy in a multiple-cells environment
We understand the use case, which is very specific, and we accept to create a central vncproxy service that would proxy calls to the cell-related vncproxy service, but this is not a pattern we want to follow for every cell-specific nova service.

# Move instances between projects
Well, we totally get the use case but we absolutely lack the resources to work on this very large effort that would span multiple services.

# Nova service healthchecks
We agreed on providing a way for monitoring tools to ping Nova services for health through HTTP or a unix socket, from every service, returning a status based on cached data. A spec has to be written.

# Zombie Resource Providers no longer corresponding to Nova resources
Thanks to the OWNED_BY_NOVA trait we agreed on when discussing with the Cyborg team, we could find a way to know the ResourceProviders owned by Nova that are no longer in use, and we could consequently delete them, or warn the operator if existing allocations are present.

# NUMA balancing
This is definitely a bug we will fix by providing a workaround config option that will let operators define the packing or spreading strategy they want for NUMA cell stacking (or not).

# Deprecation of the novaclient shell command
Yes, now that the SDK is able to negotiate, we can deprecate the novaclient CLI. More investigation work has to be done in order to know whether we can also deprecate the novaclient library itself.

# Integration with Off-Path Network Backends
Lots of technicalities with this very large spec: https://review.opendev.org/c/openstack/nova-specs/+/787458
Long story short, we asked the proposer to add a few more details in the spec about upgrade scenarios, move operations and testing, but we also said that we can't accept this spec until the Neutron one lands, as there are some usages of the extended Neutron APIs proposed in the Nova spec.

# 'pc' and 'q35' machine types
We agreed on changing the default machine type to 'q35' but we also agreed on *not* deprecating the 'pc' type. Some documentation has to be written in order to explain the intent of the migration.
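(For reference, this is about the future default only; opting in to q35 is already possible today. A minimal sketch, either per compute node via nova.conf or per image via an image property, where "x86_64=q35" and <image> are just example values:)

  [libvirt]
  # per-arch default machine type for newly created guests on this host
  hw_machine_type = x86_64=q35

  # or, per image:
  openstack image set --property hw_machine_type=q35 <image>
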
# Nova use of privsep
sean-k-mooney agreed on working on a patch to remove usage of CAP_DAC_OVERRIDE and on another patch to provide the new capabilities for the monolith privsep context.
(a few other topics were discussed but I skipped them from my summary as they're not significant for either operators or other contributors but a very few people - don't get me wrong, I like them but I don't want to add more verbosity to an already large summary)

### Nova pain points (taken from https://etherpad.opendev.org/p/pain-point-elimination)

# Ironic hashring failure recovery
That's a latent bug but we don't want the Ironic virt driver to modify the host value of every instance. We'd rather prefer some operator action in order to tell Nova it has to shuffle a few things. More brainstorming honestly has to be done on this as we haven't clearly drafted a solution design yet.

# Problems with shelve, unshelve and then shelve back
Well, this is a nasty bug and we need to fix it, agreed. We also have a testing gap we need to close.

# Naming cinder volumes after the nova instance name
We said yes, why not. We also considered that 'delete on terminate' has to change so it does delete the volume when you delete the instance (for a boot-from-volume case).

# Orphaned instances due to underlying network issues
We agreed on the fact it would be nice to provide a tool for finding such orphans, and we also think it's important for the instance force-delete API call to complete successfully in such a case.

# Leftover guests on a recovered compute node while instance records were purged
Well, we should avoid purging instance records from the Nova DB if we are still unable to correctly delete the compute bits, unless the operator explicitly wants to (in the case of a non-recoverable compute, for example). A potential solution can be to add a config flag to *not* archive deleted rows for instances that are running on down computes.

# Nova APIs leaking hypervisor hardware details
No, we don't want this, but we can try to add more traits for giving more visibility on a case-by-case basis.

# RabbitMQ replacement
Well, we're unfortunately lacking resources on this. We recommend the community investigate the use of a NATS backend for oslo.messaging.

# Placement strictness with move operations
We agree this is a bug that needs to be fixed.

Okay, if you reach this point, you're very brave. Kudos to you.
I don't really want to repeat this exercise often, but I just hope it helps you summarize a thousand-line etherpad.

-Sylvain
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsanjeewa at kln.ac.lk Mon Oct 25 18:23:03 2021
From: bsanjeewa at kln.ac.lk (Buddhika Godakuru)
Date: Mon, 25 Oct 2021 23:53:03 +0530
Subject: kolla-ansible wallaby manila ceph pacific
In-Reply-To: 
References: 
Message-ID: 

Is your deployment type source or binary?
If it is binary, I wonder if this patch [1] is built into the repos.
If source, could you try rebuilding the manila docker images?
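In case it helps, a minimal sketch of rebuilding and redeploying only the manila images could look like the following; the source build type, the wallaby tag, and the ./multinode inventory name are assumptions, and the resulting image names/registry still need to match what your globals.yml points at:

  # rebuild every image whose name matches "manila" from source
  kolla-build --type source --tag wallaby manila
  # rerun only the manila-related deployment tasks
  kolla-ansible -i ./multinode deploy --tags manila
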
[1] https://review.opendev.org/c/openstack/manila/+/797955 On Mon, 25 Oct 2021 at 20:21, wodel youchi wrote: > Hi, > > I tried with pacific then with octopus, the same problem. > The patch was applied to kolla-ansible. > > Regards. > > Le ven. 22 oct. 2021 00:34, Goutham Pacha Ravi a > ?crit : > >> >> >> On Thu, Oct 21, 2021 at 1:56 AM wodel youchi >> wrote: >> >>> Hi, >>> >>> I did that already, I changed the keyring to "*ceph auth get-or-create >>> client.manila -o manila.keyring mgr 'allow rw' mon 'allow r'*" it >>> didn't work, then I tried with ceph octopus, same error. >>> I applied the patch, then I recreated the keyring for manila as wallaby >>> documentation, I get the error "*Bad target type 'mon-mgr'*" >>> >> >> Thanks, the error seems similar to this issue: >> https://tracker.ceph.com/issues/51039 >> >> Can you confirm the ceph version installed? On the ceph side, some >> changes land after GA and get back ported; >> >> >> >>> >>> Regards. >>> >>> Le jeu. 21 oct. 2021 ? 05:29, Buddhika Godakuru a >>> ?crit : >>> >>>> Dear Wodel, >>>> I think this is because manila has changed the way how to set/create >>>> auth ID in Wallaby for native CephFS driver. >>>> For the patch to work, you should change the command >>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>> to something like, >>>> ceph auth get-or-create client.manila -o manila.keyring mgr 'allow >>>> rw' mon 'allow r' >>>> >>>> Please see Manila Wallaby CephFS Driver document [1] >>>> >>>> Hope this helps. >>>> >>>> Thank you >>>> [1] >>>> https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html#authorizing-the-driver-to-communicate-with-ceph >>>> >>>> On Wed, 20 Oct 2021 at 23:19, wodel youchi >>>> wrote: >>>> >>>>> Hi, and thanks >>>>> >>>>> I tried to apply the patch, but it didn't work, this is the >>>>> manila-share.log. >>>>> By the way, I did change to caps for the manila client to what is said >>>>> in wallaby documentation, that is : >>>>> [client.manila] >>>>> key = keyyyyyyyy..... >>>>> >>>>> * caps mgr = "allow rw" caps mon = "allow r"* >>>>> >>>>> [root at ControllerA manila]# cat manila-share.log >>>>> 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] Skipping >>>>> periodic task update_share_usage_size because it is disabled >>>>> 2021-10-20 10:03:22.310 7 INFO oslo_service.service >>>>> [req-5b253656-4fe2-4087-b4ab-9ba2a8a0443f - - - - -] Starting 1 workers >>>>> 2021-10-20 10:03:22.315 30 INFO manila.service [-] Starting >>>>> manila-share node (version 12.0.1) >>>>> 2021-10-20 10:03:22.320 30 INFO manila.share.drivers.cephfs.driver >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >>>>> h client found, connecting... >>>>> 2021-10-20 10:03:22.368 30 INFO manila.share.drivers.cephfs.driver >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >>>>> h client connection complete. >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>> during i >>>>> n*itialization* >>>>> >>>>> * of driver CephFSDriver at ControllerA@cephfsnative1: >>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>> volume ls, argdict={'format': 'json'} - exception message: Bad target type >>>>> 'mon-mgr'. 
2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback >>>>> (most recent call last): * >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 191, in rados_command >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> timeout=RADOS_TIMEOUT) >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>> command >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager inbuf, >>>>> timeout, verbose) >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>> command_retry >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager return >>>>> send_command(*args, **kwargs) >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>> command >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager During handling >>>>> of the above exception, another exception occurred: >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>> ", line 346, in _driver_setup >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> self.driver.do_setup(ctxt) >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 251, in do_setup >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> volname=self.volname) >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 401, in volname >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 205, in rados_command >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >>>>> exception.ShareBackendException(msg) >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>> volume >>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>> 'mon-mgr'. >>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>> during i >>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>> target type 'mon-mgr'. 
>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 191, in rados_command >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> timeout=RADOS_TIMEOUT) >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>> command >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager inbuf, >>>>> timeout, verbose) >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>> command_retry >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager return >>>>> send_command(*args, **kwargs) >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>> command >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager During handling >>>>> of the above exception, another exception occurred: >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>> ", line 346, in _driver_setup >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> self.driver.do_setup(ctxt) >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 251, in do_setup >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> volname=self.volname) >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 401, in volname >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 205, in rados_command >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >>>>> exception.ShareBackendException(msg) >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>> volume >>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>> 'mon-mgr'. >>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>> during i >>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>> target type 'mon-mgr'. 
>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 191, in rados_command >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> timeout=RADOS_TIMEOUT) >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>> command >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager inbuf, >>>>> timeout, verbose) >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>> command_retry >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager return >>>>> send_command(*args, **kwargs) >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>> command >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager During handling >>>>> of the above exception, another exception occurred: >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>> ", line 346, in _driver_setup >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> self.driver.do_setup(ctxt) >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 251, in do_setup >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> volname=self.volname) >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 401, in volname >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 205, in rados_command >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >>>>> exception.ShareBackendException(msg) >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>> volume >>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>> 'mon-mgr'. >>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>> during i >>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>> target type 'mon-mgr'. 
>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 191, in rados_command >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> timeout=RADOS_TIMEOUT) >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>> command >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager inbuf, >>>>> timeout, verbose) >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>> command_retry >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager return >>>>> send_command(*args, **kwargs) >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>> command >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager During handling >>>>> of the above exception, another exception occurred: >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>> ", line 346, in _driver_setup >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> self.driver.do_setup(ctxt) >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 251, in do_setup >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> volname=self.volname) >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 401, in volname >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 205, in rados_command >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >>>>> exception.ShareBackendException(msg) >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>> volume >>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>> 'mon-mgr'. >>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>> during i >>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>> target type 'mon-mgr'. 
>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 191, in rados_command >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> timeout=RADOS_TIMEOUT) >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>> command >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager inbuf, >>>>> timeout, verbose) >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>> command_retry >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager return >>>>> send_command(*args, **kwargs) >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>> command >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager During handling >>>>> of the above exception, another exception occurred: >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>> ", line 346, in _driver_setup >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> self.driver.do_setup(ctxt) >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 251, in do_setup >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> volname=self.volname) >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 401, in volname >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 205, in rados_command >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >>>>> exception.ShareBackendException(msg) >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>> volume >>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>> 'mon-mgr'. >>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>> during i >>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>> target type 'mon-mgr'. 
>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 191, in rados_command >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> timeout=RADOS_TIMEOUT) >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>> command >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager inbuf, >>>>> timeout, verbose) >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>> command_retry >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager return >>>>> send_command(*args, **kwargs) >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>> command >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager During handling >>>>> of the above exception, another exception occurred: >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>> ", line 346, in _driver_setup >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> self.driver.do_setup(ctxt) >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 251, in do_setup >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> volname=self.volname) >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 401, in volname >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 205, in rados_command >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >>>>> exception.ShareBackendException(msg) >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>> volume >>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>> 'mon-mgr'. >>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>> during i >>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>> target type 'mon-mgr'. 
>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 191, in rados_command >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> timeout=RADOS_TIMEOUT) >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>> command >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager inbuf, >>>>> timeout, verbose) >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>> command_retry >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager return >>>>> send_command(*args, **kwargs) >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>> command >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager During handling >>>>> of the above exception, another exception occurred: >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>> ", line 346, in _driver_setup >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> self.driver.do_setup(ctxt) >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 251, in do_setup >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> volname=self.volname) >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 401, in volname >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 205, in rados_command >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >>>>> exception.ShareBackendException(msg) >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>> volume >>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>> 'mon-mgr'. >>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>> during i >>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>> target type 'mon-mgr'. 
>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 191, in rados_command >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> timeout=RADOS_TIMEOUT) >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>> command >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager inbuf, >>>>> timeout, verbose) >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>> command_retry >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager return >>>>> send_command(*args, **kwargs) >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>> command >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager During handling >>>>> of the above exception, another exception occurred: >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback (most >>>>> recent call last): >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>> ", line 346, in _driver_setup >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> self.driver.do_setup(ctxt) >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 251, in do_setup >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> volname=self.volname) >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 401, in volname >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>> phfs/driver.py", line 205, in rados_command >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>>>> exception.ShareBackendException(msg) >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>> volume >>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>> 'mon-mgr'. >>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>> >>>>> Regards >>>>> >>>>> Le mer. 20 oct. 2021 ? 00:14, Goutham Pacha Ravi < >>>>> gouthampravi at gmail.com> a ?crit : >>>>> >>>>>> >>>>>> On Tue, Oct 19, 2021 at 2:35 PM wodel youchi >>>>>> wrote: >>>>>> >>>>>>> Hi, >>>>>>> Has anyone been successful in deploying Manila wallaby using >>>>>>> kolla-ansible with ceph pacific as a backend? 
>>>>>>> >>>>>>> I have created the manila client in ceph pacific like this : >>>>>>> >>>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>>>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>>> >>>>>>> When I deploy, I get this error in manila's log file : >>>>>>> Bad target type 'mon-mgr' >>>>>>> Any ideas? >>>>>>> >>>>>> >>>>>> Could you share the full log from the manila-share service? >>>>>> There's an open bug related to manila/cephfs deployment: >>>>>> https://bugs.launchpad.net/kolla-ansible/+bug/1935784 >>>>>> Proposed fix: >>>>>> https://review.opendev.org/c/openstack/kolla-ansible/+/802743 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> Regards. >>>>>>> >>>>>> >>>> >>>> -- >>>> >>>> ??????? ????? ???????? >>>> Buddhika Sanjeewa Godakuru >>>> >>>> Systems Analyst/Programmer >>>> Deputy Webmaster / University of Kelaniya >>>> >>>> Information and Communication Technology Centre (ICTC) >>>> University of Kelaniya, Sri Lanka, >>>> Kelaniya, >>>> Sri Lanka. >>>> >>>> Mobile : (+94) 071 5696981 >>>> Office : (+94) 011 2903420 / 2903424 >>>> >>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>> University of Kelaniya Sri Lanka, accepts no liability for the content >>>> of this email, or for the consequences of any actions taken on the basis of >>>> the information provided, unless that information is subsequently confirmed >>>> in writing. If you are not the intended recipient, this email and/or any >>>> information it contains should not be copied, disclosed, retained or used >>>> by you or any other party and the email and all its contents should be >>>> promptly deleted fully from our system and the sender informed. >>>> >>>> E-mail transmission cannot be guaranteed to be secure or error-free as >>>> information could be intercepted, corrupted, lost, destroyed, arrive late >>>> or incomplete. >>>> >>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>> >>> -- ??????? ????? ???????? Buddhika Sanjeewa Godakuru Systems Analyst/Programmer Deputy Webmaster / University of Kelaniya Information and Communication Technology Centre (ICTC) University of Kelaniya, Sri Lanka, Kelaniya, Sri Lanka. Mobile : (+94) 071 5696981 Office : (+94) 011 2903420 / 2903424 -- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++? University of Kelaniya Sri Lanka, accepts no liability for the content of this email, or for the consequences of any actions taken on the basis of the information provided, unless that information is subsequently confirmed in writing. If you are not the intended recipient, this email and/or any information it contains should not be copied, disclosed, retained or used by you or any other party and the email and all its contents should be promptly deleted fully from our system and the sender informed. E-mail transmission cannot be guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Tue Oct 26 01:01:28 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 25 Oct 2021 21:01:28 -0400 Subject: Guest's secondary/virtual IP In-Reply-To: References: Message-ID: Couple of things to try - At the VM level, ping your own address on eth1 to see if local traffic works. 
- Using your existing port config, capture traffic at the VM level to see if the packets are reaching the VM. - Disable port-security on the port level and validate if the traffic is reaching the VM. - If you have access to the compute, capture traffic at the interface/tap/bridge level. Where to capture will depend on if you are using OVS/OVN/Linux-bridge. - I do believe that even with allowed-address on the port, you will need to have the corresponding traffic allowed in your sec-group. Can you paste the port info with "openstack port show $port_id_here"? On Mon, Oct 25, 2021 at 10:23 AM lejeczek wrote: > Hi guys. > > What I expected turns out not to be enough, must be > something trivial - what am I missing? > I set a port with --allowed-address and on the > instance/guest using the port I did: > -> $ ip add add 10.0.1.99/24 dev eth1 > yet that IP other guest cannot reach. > > many thanks, L. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From munnaeebd at gmail.com Tue Oct 26 04:40:14 2021 From: munnaeebd at gmail.com (Md. Hejbul Tawhid MUNNA) Date: Tue, 26 Oct 2021 10:40:14 +0600 Subject: Octavia loadbalancer status offline In-Reply-To: References: Message-ID: Hi Michael, We have checked as per your advice. Please find the below details 1) [health_manager] bind_port = 5555 bind_ip = 0.0.0.0 controller_ip_port_list = 172.16.0.2:5555 2) # tcpdump -n -vv -i o-hm0 tcpdump: listening on o-hm0, link-type EN10MB (Ethernet), capture size 262144 bytes 10:37:37.440219 IP (tos 0x0, ttl 64, id 12636, offset 0, flags [DF], proto UDP (17), length 319) 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3b -> 0xa82a!] UDP, length 291 10:37:47.495440 IP (tos 0x0, ttl 64, id 13942, offset 0, flags [DF], proto UDP (17), length 319) 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3b -> 0x7078!] UDP, length 291 10:37:57.754072 IP (tos 0x0, ttl 64, id 15228, offset 0, flags [DF], proto UDP (17), length 319) 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3b -> 0x3088!] UDP, length 291 10:38:07.814541 IP (tos 0x0, ttl 64, id 16645, offset 0, flags [DF], proto UDP (17), length 318) 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3a -> 0xc7e8!] UDP, length 290 3 & 4 ) Enabled debug, No errors. # tail -f /var/log/octavia/octavia-health-manager.log 2021-10-26 10:33:05.659 1277703 WARNING octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager experienced an exception processing a heartbeat message from ('172.16.1.220', 59727). Ignoring this packet. 
Exception: 'NoneType' object has no attribute 'encode' 2021-10-26 10:33:05.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:08.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:11.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:14.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:15.717 1277703 DEBUG octavia.amphorae.drivers.health.heartbeat_udp [-] *Received packet from* ('172.16.1.220', 59727) dorecv /usr/lib/python3/dist-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:189 2021-10-26 10:33:15.717 1277703 WARNING octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager experienced an exception processing a heartbeat message from ('172.16.1.220', 59727). Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode' 2021-10-26 10:33:17.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:20.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:23.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:25.799 1277703 DEBUG octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from ('172.16.1.220', 59727) dorecv /usr/lib/python3/dist-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:189 2021-10-26 10:33:25.799 1277703 WARNING octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager experienced an exception processing a heartbeat message from ('172.16.1.220', 59727). Ignoring this packet. 
Exception: 'NoneType' object has no attribute 'encode' 2021-10-26 10:33:26.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:29.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:32.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:35.744 1277704 DEBUG futurist.periodics [-] Submitting periodic callback 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 2021-10-26 10:33:35.842 1277703 DEBUG octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from ('172.16.1.220', 59727) dorecv /usr/lib/python3/dist-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:189 2021-10-26 10:33:35.843 1277703 WARNING octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager experienced an exception processing a heartbeat message from ('172.16.1.220', 59727). Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode' Regards, Munna On Mon, Oct 25, 2021 at 10:23 PM Michael Johnson wrote: > Hi Munna, > > I am guessing you are seeing the operating status offline? > > This is commonly caused by the amphora being unable to reach the > health manager process. > > Another symptom of this is the statistics for the load balancer will > not increase. > > Some things to check: > 1. Is your controller IP and port list correct? > > https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list > 2. Are you seeing the heartbeat packets arrive on the network > interface on your health manager instance? > 3. Is the health manager log reporting any issues, such as an > incorrect heartbeat key? > 4. If you enable debug logging on the health manager, do you see log > messages indicating the health manager has received heartbeat packets > from the amphora? "Received packet from" > > Michael > > On Mon, Oct 25, 2021 at 5:30 AM Md. Hejbul Tawhid MUNNA > wrote: > > > > Hi, > > > > We have installed openstack ussuri version from ubuntu universe > repository. > > > > We have installed octavia 6.2.0 version. > > > > after creating loadbalancer , listener and pool all are offline. but the > LB operation is working as expected. changing the pool member is also > working. > > > > > > octavia is installed in compute node. 5555 is listening and allowed in > iptables (iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT). > > > > amphora to octavia-worker(172.16.0.2) is reachable. > > > > Any idea to troubleshoot this issue > > > > > > Please find the log from octavia-worker > > > > > > > > > ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// > > 2021-10-25 18:15:13.482 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. 
> Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f540e192be0>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:15:18.490 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. > Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f540c0ddd00>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:15:23.507 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. > Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f540c0ddd90>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:15:28.511 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. > Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f540c0ddbb0>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:15:33.521 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. > Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f540c064190>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:15:38.529 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. > Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f540d111790>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:15:43.540 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. > Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f53ec7a3670>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:15:48.549 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. 
> Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f53ec74d0d0>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:15:53.554 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. > Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f53ec75a7f0>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:15:58.561 1192307 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. > Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded > with url: // (Caused by > NewConnectionError(' 0x7f53ec75a6a0>: Failed to establish a new connection: [Errno 111] > Connection refused')) > > 2021-10-25 18:16:04.707 1192307 INFO > octavia.controller.worker.v1.tasks.database_tasks > [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ALLOCATED in DB for amphora: > c661b828-1690-4866-8152-f745c43e0977 with compute id > c9133819-b8e0-42d6-9544-bf83e3ad4b3f for load balancer: > c0bd3e21-6983-40c9-8713-859194496b37 > > 2021-10-25 18:16:40.660 1192307 INFO > octavia.controller.worker.v1.tasks.database_tasks > [req-78e78f29-3bdb-4b12-ae76-b8daa4926c09 - > 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ACTIVE in DB for load balancer > id: c0bd3e21-6983-40c9-8713-859194496b37 > > 2021-10-25 18:16:44.317 1192307 INFO > octavia.controller.queue.v1.endpoints [-] Creating listener > 'cc45192d-de70-4d59-857b-ac23c4fc8d07'... > > 2021-10-25 18:16:44.325 1192307 WARNING > octavia.controller.worker.v1.controller_worker [-] Failed to fetch listener > cc45192d-de70-4d59-857b-ac23c4fc8d07 from DB. Retrying for up to 60 seconds. > > 2021-10-25 18:17:35.375 1192307 INFO > octavia.controller.queue.v1.endpoints [-] Creating pool > '9d23855f-d849-4ad9-9de1-66ab5cd268eb'... > > 2021-10-25 18:17:35.382 1192307 WARNING > octavia.controller.worker.v1.controller_worker [-] Failed to fetch pool > 9d23855f-d849-4ad9-9de1-66ab5cd268eb from DB. Retrying for up to 60 seconds. > > 2021-10-25 18:18:02.814 1192307 INFO > octavia.controller.queue.v1.endpoints [-] Creating member > '29bb41e5-457c-43ba-9149-5af55e73fe38'... > > 2021-10-25 18:18:02.825 1192307 WARNING > octavia.controller.worker.v1.controller_worker [-] Failed to fetch member > 29bb41e5-457c-43ba-9149-5af55e73fe38 from DB. Retrying for up to 60 seconds. > > > > Regards, > > Munna > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gthiemonge at redhat.com Tue Oct 26 06:45:15 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Tue, 26 Oct 2021 08:45:15 +0200 Subject: Octavia loadbalancer status offline In-Reply-To: References: Message-ID: Hi, The health-manager receives the messages but cannot decrypt them. 
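A minimal sketch of what the [health_manager] section looks like once the missing option is added (the heartbeat_key value below is only an illustrative placeholder, generate your own long random secret; the other values are taken from your earlier message):

[health_manager]
# illustrative placeholder only; use a long random secret and keep it in
# sync with the key the amphorae were built with
heartbeat_key = insecure-example-key
bind_ip = 0.0.0.0
bind_port = 5555
controller_ip_port_list = 172.16.0.2:5555

As far as I know, an amphora only learns the key when it is built, so it is best to set it before creating load balancers (or fail over existing amphorae afterwards).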
There's one configuration setting that is missing in the doc, there's an open review to add it: https://review.opendev.org/c/openstack/octavia/+/784022/1/doc/source/install/install-ubuntu.rst The [health_manager]/heartbeat_key is a parameter that is used to encrypt the heartbeat messages sent by the amphora, and there's a known issue that occurs when the key is empty, so it should be required. Greg On Tue, Oct 26, 2021 at 6:48 AM Md. Hejbul Tawhid MUNNA wrote: > Hi Michael, > > We have checked as per your advice. Please find the below details > 1) > [health_manager] > bind_port = 5555 > bind_ip = 0.0.0.0 > controller_ip_port_list = 172.16.0.2:5555 > > 2) > > # tcpdump -n -vv -i o-hm0 > tcpdump: listening on o-hm0, link-type EN10MB (Ethernet), capture size > 262144 bytes > 10:37:37.440219 IP (tos 0x0, ttl 64, id 12636, offset 0, flags [DF], proto > UDP (17), length 319) > 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3b -> > 0xa82a!] UDP, length 291 > 10:37:47.495440 IP (tos 0x0, ttl 64, id 13942, offset 0, flags [DF], proto > UDP (17), length 319) > 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3b -> > 0x7078!] UDP, length 291 > 10:37:57.754072 IP (tos 0x0, ttl 64, id 15228, offset 0, flags [DF], proto > UDP (17), length 319) > 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3b -> > 0x3088!] UDP, length 291 > 10:38:07.814541 IP (tos 0x0, ttl 64, id 16645, offset 0, flags [DF], proto > UDP (17), length 318) > 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3a -> > 0xc7e8!] UDP, length 290 > > > 3 & 4 ) > > Enabled debug, No errors. > > # tail -f /var/log/octavia/octavia-health-manager.log > 2021-10-26 10:33:05.659 1277703 WARNING > octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager > experienced an exception processing a heartbeat message from > ('172.16.1.220', 59727). Ignoring this packet. Exception: 'NoneType' object > has no attribute 'encode' > 2021-10-26 10:33:05.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:08.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:11.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:14.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:15.717 1277703 DEBUG > octavia.amphorae.drivers.health.heartbeat_udp [-] *Received packet from* > ('172.16.1.220', 59727) dorecv > /usr/lib/python3/dist-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:189 > 2021-10-26 10:33:15.717 1277703 WARNING > octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager > experienced an exception processing a heartbeat message from > ('172.16.1.220', 59727). Ignoring this packet. 
Exception: 'NoneType' object > has no attribute 'encode' > 2021-10-26 10:33:17.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:20.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:23.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:25.799 1277703 DEBUG > octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from > ('172.16.1.220', 59727) dorecv > /usr/lib/python3/dist-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:189 > 2021-10-26 10:33:25.799 1277703 WARNING > octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager > experienced an exception processing a heartbeat message from > ('172.16.1.220', 59727). Ignoring this packet. Exception: 'NoneType' object > has no attribute 'encode' > 2021-10-26 10:33:26.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:29.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:32.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:35.744 1277704 DEBUG futurist.periodics [-] Submitting > periodic callback > 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' > _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 > 2021-10-26 10:33:35.842 1277703 DEBUG > octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from > ('172.16.1.220', 59727) dorecv > /usr/lib/python3/dist-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:189 > 2021-10-26 10:33:35.843 1277703 WARNING > octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager > experienced an exception processing a heartbeat message from > ('172.16.1.220', 59727). Ignoring this packet. Exception: 'NoneType' object > has no attribute 'encode' > > Regards, > Munna > > > On Mon, Oct 25, 2021 at 10:23 PM Michael Johnson > wrote: > >> Hi Munna, >> >> I am guessing you are seeing the operating status offline? >> >> This is commonly caused by the amphora being unable to reach the >> health manager process. >> >> Another symptom of this is the statistics for the load balancer will >> not increase. >> >> Some things to check: >> 1. Is your controller IP and port list correct? >> >> https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list >> 2. Are you seeing the heartbeat packets arrive on the network >> interface on your health manager instance? >> 3. Is the health manager log reporting any issues, such as an >> incorrect heartbeat key? >> 4. 
If you enable debug logging on the health manager, do you see log >> messages indicating the health manager has received heartbeat packets >> from the amphora? "Received packet from" >> >> Michael >> >> On Mon, Oct 25, 2021 at 5:30 AM Md. Hejbul Tawhid MUNNA >> wrote: >> > >> > Hi, >> > >> > We have installed openstack ussuri version from ubuntu universe >> repository. >> > >> > We have installed octavia 6.2.0 version. >> > >> > after creating loadbalancer , listener and pool all are offline. but >> the LB operation is working as expected. changing the pool member is also >> working. >> > >> > >> > octavia is installed in compute node. 5555 is listening and allowed in >> iptables (iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT). >> > >> > amphora to octavia-worker(172.16.0.2) is reachable. >> > >> > Any idea to troubleshoot this issue >> > >> > >> > Please find the log from octavia-worker >> > >> > >> > >> > >> ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// >> > 2021-10-25 18:15:13.482 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f540e192be0>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:15:18.490 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f540c0ddd00>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:15:23.507 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f540c0ddd90>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:15:28.511 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f540c0ddbb0>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:15:33.521 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. 
>> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f540c064190>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:15:38.529 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f540d111790>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:15:43.540 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f53ec7a3670>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:15:48.549 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f53ec74d0d0>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:15:53.554 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f53ec75a7f0>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:15:58.561 1192307 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >> Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >> with url: // (Caused by >> NewConnectionError('> 0x7f53ec75a6a0>: Failed to establish a new connection: [Errno 111] >> Connection refused')) >> > 2021-10-25 18:16:04.707 1192307 INFO >> octavia.controller.worker.v1.tasks.database_tasks >> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ALLOCATED in DB for amphora: >> c661b828-1690-4866-8152-f745c43e0977 with compute id >> c9133819-b8e0-42d6-9544-bf83e3ad4b3f for load balancer: >> c0bd3e21-6983-40c9-8713-859194496b37 >> > 2021-10-25 18:16:40.660 1192307 INFO >> octavia.controller.worker.v1.tasks.database_tasks >> [req-78e78f29-3bdb-4b12-ae76-b8daa4926c09 - >> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ACTIVE in DB for load balancer >> id: c0bd3e21-6983-40c9-8713-859194496b37 >> > 2021-10-25 18:16:44.317 1192307 INFO >> octavia.controller.queue.v1.endpoints [-] Creating listener >> 'cc45192d-de70-4d59-857b-ac23c4fc8d07'... 
>> > 2021-10-25 18:16:44.325 1192307 WARNING >> octavia.controller.worker.v1.controller_worker [-] Failed to fetch listener >> cc45192d-de70-4d59-857b-ac23c4fc8d07 from DB. Retrying for up to 60 seconds. >> > 2021-10-25 18:17:35.375 1192307 INFO >> octavia.controller.queue.v1.endpoints [-] Creating pool >> '9d23855f-d849-4ad9-9de1-66ab5cd268eb'... >> > 2021-10-25 18:17:35.382 1192307 WARNING >> octavia.controller.worker.v1.controller_worker [-] Failed to fetch pool >> 9d23855f-d849-4ad9-9de1-66ab5cd268eb from DB. Retrying for up to 60 seconds. >> > 2021-10-25 18:18:02.814 1192307 INFO >> octavia.controller.queue.v1.endpoints [-] Creating member >> '29bb41e5-457c-43ba-9149-5af55e73fe38'... >> > 2021-10-25 18:18:02.825 1192307 WARNING >> octavia.controller.worker.v1.controller_worker [-] Failed to fetch member >> 29bb41e5-457c-43ba-9149-5af55e73fe38 from DB. Retrying for up to 60 seconds. >> > >> > Regards, >> > Munna >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From munnaeebd at gmail.com Tue Oct 26 08:01:15 2021 From: munnaeebd at gmail.com (Md. Hejbul Tawhid MUNNA) Date: Tue, 26 Oct 2021 14:01:15 +0600 Subject: Octavia loadbalancer status offline In-Reply-To: References: Message-ID: Dear Gregory, It's working now. Thank you so much for your assistance. Regards, Munna On Tue, Oct 26, 2021 at 12:45 PM Gregory Thiemonge wrote: > Hi, > > The health-manager receives the messages but cannot decrypt them. There's > one configuration setting that is missing in the doc, there's an open > review to add it: > > > https://review.opendev.org/c/openstack/octavia/+/784022/1/doc/source/install/install-ubuntu.rst > > The [health_manager]/heartbeat_key is a parameter that is used to encrypt > the heartbeat messages sent by the amphora, and there's a known issue that > occurs when the key is empty, so it should be required. > > Greg > > On Tue, Oct 26, 2021 at 6:48 AM Md. Hejbul Tawhid MUNNA < > munnaeebd at gmail.com> wrote: > >> Hi Michael, >> >> We have checked as per your advice. Please find the below details >> 1) >> [health_manager] >> bind_port = 5555 >> bind_ip = 0.0.0.0 >> controller_ip_port_list = 172.16.0.2:5555 >> >> 2) >> >> # tcpdump -n -vv -i o-hm0 >> tcpdump: listening on o-hm0, link-type EN10MB (Ethernet), capture size >> 262144 bytes >> 10:37:37.440219 IP (tos 0x0, ttl 64, id 12636, offset 0, flags [DF], >> proto UDP (17), length 319) >> 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3b -> >> 0xa82a!] UDP, length 291 >> 10:37:47.495440 IP (tos 0x0, ttl 64, id 13942, offset 0, flags [DF], >> proto UDP (17), length 319) >> 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3b -> >> 0x7078!] UDP, length 291 >> 10:37:57.754072 IP (tos 0x0, ttl 64, id 15228, offset 0, flags [DF], >> proto UDP (17), length 319) >> 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3b -> >> 0x3088!] UDP, length 291 >> 10:38:07.814541 IP (tos 0x0, ttl 64, id 16645, offset 0, flags [DF], >> proto UDP (17), length 318) >> 172.16.1.220.59727 > 172.16.0.2.5555: [bad udp cksum 0x5b3a -> >> 0xc7e8!] UDP, length 290 >> >> >> 3 & 4 ) >> >> Enabled debug, No errors. >> >> # tail -f /var/log/octavia/octavia-health-manager.log >> 2021-10-26 10:33:05.659 1277703 WARNING >> octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager >> experienced an exception processing a heartbeat message from >> ('172.16.1.220', 59727). Ignoring this packet. 
Exception: 'NoneType' object >> has no attribute 'encode' >> 2021-10-26 10:33:05.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:08.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:11.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:14.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:15.717 1277703 DEBUG >> octavia.amphorae.drivers.health.heartbeat_udp [-] *Received packet from* >> ('172.16.1.220', 59727) dorecv >> /usr/lib/python3/dist-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:189 >> 2021-10-26 10:33:15.717 1277703 WARNING >> octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager >> experienced an exception processing a heartbeat message from >> ('172.16.1.220', 59727). Ignoring this packet. Exception: 'NoneType' object >> has no attribute 'encode' >> 2021-10-26 10:33:17.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:20.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:23.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:25.799 1277703 DEBUG >> octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from >> ('172.16.1.220', 59727) dorecv >> /usr/lib/python3/dist-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:189 >> 2021-10-26 10:33:25.799 1277703 WARNING >> octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager >> experienced an exception processing a heartbeat message from >> ('172.16.1.220', 59727). Ignoring this packet. 
Exception: 'NoneType' object >> has no attribute 'encode' >> 2021-10-26 10:33:26.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:29.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:32.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:35.744 1277704 DEBUG futurist.periodics [-] Submitting >> periodic callback >> 'octavia.cmd.health_manager.hm_health_check..periodic_health_check' >> _process_scheduled /usr/lib/python3/dist-packages/futurist/periodics.py:641 >> 2021-10-26 10:33:35.842 1277703 DEBUG >> octavia.amphorae.drivers.health.heartbeat_udp [-] Received packet from >> ('172.16.1.220', 59727) dorecv >> /usr/lib/python3/dist-packages/octavia/amphorae/drivers/health/heartbeat_udp.py:189 >> 2021-10-26 10:33:35.843 1277703 WARNING >> octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager >> experienced an exception processing a heartbeat message from >> ('172.16.1.220', 59727). Ignoring this packet. Exception: 'NoneType' object >> has no attribute 'encode' >> >> Regards, >> Munna >> >> >> On Mon, Oct 25, 2021 at 10:23 PM Michael Johnson >> wrote: >> >>> Hi Munna, >>> >>> I am guessing you are seeing the operating status offline? >>> >>> This is commonly caused by the amphora being unable to reach the >>> health manager process. >>> >>> Another symptom of this is the statistics for the load balancer will >>> not increase. >>> >>> Some things to check: >>> 1. Is your controller IP and port list correct? >>> >>> https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list >>> 2. Are you seeing the heartbeat packets arrive on the network >>> interface on your health manager instance? >>> 3. Is the health manager log reporting any issues, such as an >>> incorrect heartbeat key? >>> 4. If you enable debug logging on the health manager, do you see log >>> messages indicating the health manager has received heartbeat packets >>> from the amphora? "Received packet from" >>> >>> Michael >>> >>> On Mon, Oct 25, 2021 at 5:30 AM Md. Hejbul Tawhid MUNNA >>> wrote: >>> > >>> > Hi, >>> > >>> > We have installed openstack ussuri version from ubuntu universe >>> repository. >>> > >>> > We have installed octavia 6.2.0 version. >>> > >>> > after creating loadbalancer , listener and pool all are offline. but >>> the LB operation is working as expected. changing the pool member is also >>> working. >>> > >>> > >>> > octavia is installed in compute node. 5555 is listening and allowed in >>> iptables (iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT). >>> > >>> > amphora to octavia-worker(172.16.0.2) is reachable. 
>>> > >>> > Any idea to troubleshoot this issue >>> > >>> > >>> > Please find the log from octavia-worker >>> > >>> > >>> > >>> > >>> ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// >>> > 2021-10-25 18:15:13.482 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f540e192be0>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:15:18.490 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f540c0ddd00>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:15:23.507 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f540c0ddd90>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:15:28.511 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f540c0ddbb0>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:15:33.521 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f540c064190>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:15:38.529 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f540d111790>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:15:43.540 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. 
>>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f53ec7a3670>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:15:48.549 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f53ec74d0d0>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:15:53.554 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f53ec75a7f0>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:15:58.561 1192307 WARNING >>> octavia.amphorae.drivers.haproxy.rest_api_driver >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Could not connect to instance. >>> Retrying.: requests.exceptions.ConnectionError: >>> HTTPSConnectionPool(host='172.16.1.220', port=9443): Max retries exceeded >>> with url: // (Caused by >>> NewConnectionError('>> 0x7f53ec75a6a0>: Failed to establish a new connection: [Errno 111] >>> Connection refused')) >>> > 2021-10-25 18:16:04.707 1192307 INFO >>> octavia.controller.worker.v1.tasks.database_tasks >>> [req-806e5e6b-46b0-4a52-900f-7c2d22d4442d - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ALLOCATED in DB for amphora: >>> c661b828-1690-4866-8152-f745c43e0977 with compute id >>> c9133819-b8e0-42d6-9544-bf83e3ad4b3f for load balancer: >>> c0bd3e21-6983-40c9-8713-859194496b37 >>> > 2021-10-25 18:16:40.660 1192307 INFO >>> octavia.controller.worker.v1.tasks.database_tasks >>> [req-78e78f29-3bdb-4b12-ae76-b8daa4926c09 - >>> 9a817a70161d45fd9f0b5fe2cad30f5c - - -] Mark ACTIVE in DB for load balancer >>> id: c0bd3e21-6983-40c9-8713-859194496b37 >>> > 2021-10-25 18:16:44.317 1192307 INFO >>> octavia.controller.queue.v1.endpoints [-] Creating listener >>> 'cc45192d-de70-4d59-857b-ac23c4fc8d07'... >>> > 2021-10-25 18:16:44.325 1192307 WARNING >>> octavia.controller.worker.v1.controller_worker [-] Failed to fetch listener >>> cc45192d-de70-4d59-857b-ac23c4fc8d07 from DB. Retrying for up to 60 seconds. >>> > 2021-10-25 18:17:35.375 1192307 INFO >>> octavia.controller.queue.v1.endpoints [-] Creating pool >>> '9d23855f-d849-4ad9-9de1-66ab5cd268eb'... >>> > 2021-10-25 18:17:35.382 1192307 WARNING >>> octavia.controller.worker.v1.controller_worker [-] Failed to fetch pool >>> 9d23855f-d849-4ad9-9de1-66ab5cd268eb from DB. Retrying for up to 60 seconds. >>> > 2021-10-25 18:18:02.814 1192307 INFO >>> octavia.controller.queue.v1.endpoints [-] Creating member >>> '29bb41e5-457c-43ba-9149-5af55e73fe38'... >>> > 2021-10-25 18:18:02.825 1192307 WARNING >>> octavia.controller.worker.v1.controller_worker [-] Failed to fetch member >>> 29bb41e5-457c-43ba-9149-5af55e73fe38 from DB. Retrying for up to 60 seconds. 
>>> > >>> > Regards, >>> > Munna >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue Oct 26 08:56:45 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 26 Oct 2021 10:56:45 +0200 Subject: [nova][placement] Yoga PTG summary In-Reply-To: References: Message-ID: On Mon, Oct 25 2021 at 06:50:12 PM +0200, Sylvain Bauza wrote: > > Well, it's a try given we discussed for 4 days and it could be a > large summary ;) > You can see all the notes in a read-only etherpad here : > https://etherpad.opendev.org/p/r.e70aa851abf8644c29c8abe4bce32b81 Thank you Sylvain! > > ### Cross-project discussions > > # Cyborg cross-project discussion with Nova > We agreed on adding a OWNED_BY_NOVA trait for all Resource Providers > creating by Nova so Cyborg would provide their own OWNED_BY_CYBORG > trait for knowing which inventories are used by either Nova or Cyborg. > Cyborg contributors need to modify > https://review.opendev.org/c/openstack/nova-specs/+/780452 > We also agreed on the fact that > https://blueprints.launchpad.net/nova/+spec/cyborg-suspend-and-resume > is a specless blueprint. A small correction. The name of the trait did not changed during the PTG discussion. It is still OWNER_ in the etherpad so OWNED_BY_ seems like a honest mistake here. [snip] > > # Zombie Resource Providers no longer corresponding to Nova resources > Thanks to the OWNED_BY_NOVA trait we agreed when discussing with the > Cyborg team, we could find a way to know the ResourceProviders owned > by Nova that are no longer in use and we could consequently delete > them, or warn the operator if existing allocations are present. and also here. [snip] > > Okay, if you reach this point, you're very brave. Kudos to you. > I don't really want to reproduce this exercice often, but I just hope > it helps you summarizing a thousand-line large etherpad. > > > -Sylvain cheers, gibi From bxzhu_5355 at 163.com Tue Oct 26 09:32:46 2021 From: bxzhu_5355 at 163.com (Boxiang Zhu) Date: Tue, 26 Oct 2021 17:32:46 +0800 (GMT+08:00) Subject: [skyline][tc] October 2021 - PTG Summary Message-ID: <7c7a8066.ad25.17cbbf1eb09.Coremail.bxzhu_5355@163.com> Hi, Well, it's Skyline Team's first time to attend the PTG : ) Thank you for your attention and discussion of Skyline. And welcome more friends to join us! You can see all the notes in a read-only etherpad here: - skyline etherpad: https://etherpad.opendev.org/p/r.b44d07e6504a4b38514b2717f01aca62 - tc etherpad: https://etherpad.opendev.org/p/r.a07136b01df95515f32c4cb779faa45c#L395 ## reorganize to one Python package per Git repository Now skyline-apiserver has divided the source code into six sections: - skyline-apiserver: core source code of skyline apiserver - skyline-config: config library with yaml to parse config file - skyline-console: git submodule of skyline-console - skyline-log: log library with loguru - skyline-nginx: generate the nginx.conf file with openstack environment - skyline-policy-manager: generate policy yaml file Generally, we just focus on skyline-apiserver. ## stop using Git's submodule feature Is there any reason why we should stop using git's submodule? ## use Python package configuration and tooling consistent with existing OpenStack projects Now skyline-apiserver use yaml to parse config and poetry to manage dependency. 
## call Python entrypoints directly from tox instead of using tox as a wrapper around make Scripts[0] get converted to console_scripts entry points, plugins[1] would allow for arbitrary entry points. https://python-poetry.org/docs/pyproject/#scripts https://python-poetry.org/docs/pyproject/#plugins ## constrain Python dependencies and test dependencies in tox configuration (pip -c) Yet, there is no constraints like pip -c in poetry. Find some discussion about this in poetry. But all are closed, not merged. https://github.com/python-poetry/poetry/pull/4005 https://github.com/python-poetry/poetry-core/pull/172 ## track Python dependencies and test dependencies in requirements files Now we track python depentdencies and test dependencies in poetry's tool.poetry.dependencies and tool.poetry.dev-dependencies ## get Python package versions from Git tags rather than hard-coded in configuration files Sure, but hard-coded package version is not serious problem. ## rely more on Oslo libraries (currently only oslo.policy is used) Reuse wherever possible. oslo.policy is used to generate the policy yaml for skyline to use. ## Horizon has plugin support so that projects like Manila can add dashboard support. Skyline's architecture and technology stacks are different from horizon. So we will continue to explore how to gracefully add non-core module functionality via plug-ins. At the same time, we will also improve core component functions. Best Regards Boxiang Zhu -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Oct 26 09:48:51 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 26 Oct 2021 11:48:51 +0200 Subject: [nova][placement] Yoga PTG summary In-Reply-To: References: Message-ID: On Tue, Oct 26, 2021 at 10:57 AM Balazs Gibizer wrote: > > > On Mon, Oct 25 2021 at 06:50:12 PM +0200, Sylvain Bauza > wrote: > > > > Well, it's a try given we discussed for 4 days and it could be a > > large summary ;) > > You can see all the notes in a read-only etherpad here : > > https://etherpad.opendev.org/p/r.e70aa851abf8644c29c8abe4bce32b81 > > Thank you Sylvain! > > > > > ### Cross-project discussions > > > > # Cyborg cross-project discussion with Nova > > We agreed on adding a OWNED_BY_NOVA trait for all Resource Providers > > creating by Nova so Cyborg would provide their own OWNED_BY_CYBORG > > trait for knowing which inventories are used by either Nova or Cyborg. > > Cyborg contributors need to modify > > https://review.opendev.org/c/openstack/nova-specs/+/780452 > > We also agreed on the fact that > > https://blueprints.launchpad.net/nova/+spec/cyborg-suspend-and-resume > > is a specless blueprint. > > A small correction. The name of the trait did not changed during the > PTG discussion. It is still OWNER_ in the etherpad so > OWNED_BY_ seems like a honest mistake here. > > Yeah, just a clarification here : I haven't wanted to bikeshed about the trait name during our PTG session but I'm also not sure we have a consensus about it. FWIW, I'll ask about it on the spec but here, given I was saying "we agreed on adding a 'XXX' trait", I used an adjective instead of a name. TBC, we have both adjectives and names for our standard traits [1] so I'm fine with both of the traits. 
HTH, -Sylvain [1] https://docs.openstack.org/os-traits/latest/reference/traits.html [snip] > > > > > # Zombie Resource Providers no longer corresponding to Nova resources > > Thanks to the OWNED_BY_NOVA trait we agreed when discussing with the > > Cyborg team, we could find a way to know the ResourceProviders owned > > by Nova that are no longer in use and we could consequently delete > > them, or warn the operator if existing allocations are present. > > and also here. > > [snip] > > > > > Okay, if you reach this point, you're very brave. Kudos to you. > > I don't really want to reproduce this exercice often, but I just hope > > it helps you summarizing a thousand-line large etherpad. > > > > > > -Sylvain > > cheers, > gibi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Oct 26 12:20:55 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 26 Oct 2021 12:20:55 +0000 Subject: [skyline][tc] October 2021 - PTG Summary In-Reply-To: <7c7a8066.ad25.17cbbf1eb09.Coremail.bxzhu_5355@163.com> References: <7c7a8066.ad25.17cbbf1eb09.Coremail.bxzhu_5355@163.com> Message-ID: <20211026122055.kku27khaug36yxog@yuggoth.org> On 2021-10-26 17:32:46 +0800 (+0800), Boxiang Zhu wrote: [...] > ## reorganize to one Python package per Git repository > Now skyline-apiserver has divided the source code into six sections: > - skyline-apiserver: core source code of skyline apiserver > - skyline-config: config library with yaml to parse config file > - skyline-console: git submodule of skyline-console > - skyline-log: log library with loguru > - skyline-nginx: generate the nginx.conf file with openstack environment > - skyline-policy-manager: generate policy yaml file > Generally, we just focus on skyline-apiserver. Each of these will need to be in a separate Git repository, rather than packaging them from a single skyline-apiserver Git repository. > ## stop using Git's submodule feature > Is there any reason why we should stop using git's submodule? The distinct Git repositories should be more loosely coupled so that you can rely on OpenDev's CI system (Zuul) to use cross-project change dependencies correctly. > ## use Python package configuration and tooling consistent with existing OpenStack projects > Now skyline-apiserver use yaml to parse config and poetry to manage dependency. OpenStack does not use poetry, currently PBR and SetupTools are used for packaging other OpenStack deliverables. > ## call Python entrypoints directly from tox instead of using tox as a wrapper around make > Scripts[0] get converted to console_scripts entry points, plugins[1] would allow for arbitrary entry points. > > https://python-poetry.org/docs/pyproject/#scripts > > https://python-poetry.org/docs/pyproject/#plugins I understand, but this is inconsistent with how other OpenStack deliverables and developed and tested. > ## constrain Python dependencies and test dependencies in tox configuration (pip -c) > Yet, there is no constraints like pip -c in poetry. Find some discussion about this in poetry. But all are closed, not merged. > https://github.com/python-poetry/poetry/pull/4005 > https://github.com/python-poetry/poetry-core/pull/172 Yes, stop using poetry. 
> ## track Python dependencies and test dependencies in requirements files > Now we track python depentdencies and test dependencies in poetry's tool.poetry.dependencies and tool.poetry.dev-dependencies I feel like I'm a broken record here, but stop using poetry and you will be able to do these things the way they're done in OpenStack. > ## get Python package versions from Git tags rather than hard-coded in configuration files > Sure, but hard-coded package version is not serious problem. Using PBR and SetupTools will solve that for you automatically, and will be compatible with OpenStack's release automation. > ## rely more on Oslo libraries (currently only oslo.policy is used) > Reuse wherever possible. oslo.policy is used to generate the policy yaml for skyline to use. [...] Yes, but does Skyline have configuration? Does it perform any logging? Does it use any databases? There are Oslo libraries for handling tasks common to many OpenStack services so that all services can do them in a consistent fashion and fixes only need to be applied in one place to address them for all of OpenStack. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From vikarnatathe at gmail.com Tue Oct 26 04:33:04 2021 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Tue, 26 Oct 2021 10:03:04 +0530 Subject: Openstack magnum In-Reply-To: <1135213573.121319386.1635076726264.JavaMail.zimbra@tubitak.gov.tr> References: <1135213573.121319386.1635076726264.JavaMail.zimbra@tubitak.gov.tr> Message-ID: Hi Yasemin, You can find all the stable releases in the below link https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64 Vikarna On Sun, 24 Oct 2021 at 17:28, Yasemin DEM?RAL (BILGEM BTE) < yasemin.demiral at tubitak.gov.tr> wrote: > Hi, > > How can I dowloand fcos 33? I can't find any link for dowloanding it. > > *Yasemin DEM?RAL* > > > Senior Researcher at TUBITAK BILGEM B3LAB > > Safir Cloud Scrum Master > > > ------------------------------ > *Kimden: *"Vikarna Tathe" > *Kime: *"Ammad Syed" > *Kk: *"openstack-discuss" > *G?nderilenler: *19 Ekim Sal? 2021 16:23:20 > *Konu: *Re: Openstack magnum > > Hi Ammad, > Thanks!!! It worked. > > On Tue, 19 Oct 2021 at 15:00, Vikarna Tathe > wrote: > >> Hi Ammad, >> Yes, fcos34. Let me try with fcos33. Thanks >> >> On Tue, 19 Oct 2021 at 14:52, Ammad Syed wrote: >> >>> Hi, >>> >>> Which fcos image you are using ? It looks like you are using fcos 34. >>> Which is currently not supported. Use fcos 33. >>> >>> On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe >>> wrote: >>> >>>> Hi All, >>>> I was able to login to the instance. I see that kubelet service is in >>>> activating state. When I checked the journalctl, found the below. 
>>>> >>>> >>>> >>>> >>>> >>>> >>>> *Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: >>>> Started Kubelet via Hyperkube (System Container).Oct 19 05:18:34 >>>> kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs >>>> /sys/fs/cgroup/systemd: no such file or directoryOct 19 05:18:34 >>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main >>>> process exited, code=exited, status=125/n/aOct 19 05:18:34 >>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >>>> Failed with result 'exit-code'.Oct 19 05:18:44 >>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: >>>> Scheduled restart job, restart counter is at 18.Oct 19 05:18:44 >>>> kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via >>>> Hyperkube (System Container).* >>>> >>>> Executed the below command to fix this issue. >>>> *mkdir -p /sys/fs/cgroup/systemd* >>>> >>>> >>>> Now I am getiing the below error. Has anybody seen this issue. >>>> >>>> >>>> >>>> *failed to get the kubelet's cgroup: mountpoint for cpu not found. >>>> Kubelet system container metrics may be missing.failed to get the container >>>> runtime's cgroup: failed to get container name for docker process: >>>> mountpoint for cpu not found. failed to run Kubelet: mountpoint for not >>>> found* >>>> >>>> On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe >>>> wrote: >>>> >>>>> >>>>>> Hi Ammad, >>>>>> Thanks for responding. >>>>>> >>>>>> Yes the instance is getting created, but i am unable to login >>>>>> though i have generated the keypair. There is no default password for this >>>>>> image to login via console. >>>>>> >>>>>> openstack server list >>>>>> >>>>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>>>> | ID | Name >>>>>> | Status | Networks | Image >>>>>> | Flavor | >>>>>> >>>>>> +--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ >>>>>> | cf955a75-8cd2-4f91-a01f-677159b57cb2 | >>>>>> k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, >>>>>> 10.14.20.181 | fedora-coreos-latest | m1.large | >>>>>> >>>>>> >>>>>> ssh -i id_rsa core at 10.14.20.181 >>>>>> The authenticity of host '10.14.20.181 (10.14.20.181)' can't be >>>>>> established. >>>>>> ECDSA key fingerprint is >>>>>> SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. >>>>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? >>>>>> yes >>>>>> Warning: Permanently added '10.14.20.181' (ECDSA) to the list of >>>>>> known hosts. >>>>>> core at 10.14.20.181: Permission denied >>>>>> (publickey,gssapi-keyex,gssapi-with-mic). >>>>>> >>>>>> On Mon, 18 Oct 2021 at 14:02, Ammad Syed >>>>>> wrote: >>>>>> >>>>>>> Hi, >>>>>>> Can you check if the master server is deployed as a nova instance ? >>>>>>> if yes, then login to the instance and check cloud-init and heat agent logs >>>>>>> to see the errors. >>>>>>> >>>>>>> Ammad >>>>>>> >>>>>>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe < >>>>>>> vikarnatathe at gmail.com> wrote: >>>>>>> >>>>>>>> Hello All, >>>>>>>> I am trying to create a kubernetes cluster using magnum. Image: >>>>>>>> fedora-coreos. >>>>>>>> >>>>>>>> >>>>>>>> The stack gets stucked in CREATE_IN_PROGRESS. See the output below. 
>>>>>>>> openstack coe cluster list >>>>>>>> >>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>>> | uuid | name | keypair | >>>>>>>> node_count | master_count | status | health_status | >>>>>>>> >>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | >>>>>>>> 2 | 1 | CREATE_IN_PROGRESS | None | >>>>>>>> >>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ >>>>>>>> >>>>>>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb >>>>>>>> kube_masters >>>>>>>> >>>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>> | Field | Value >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> >>>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>> | attributes | {'refs_map': None, 'removed_rsrc_list': >>>>>>>> [], 'attributes': None, 'refs': None} >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | creation_time | 2021-10-18T06:44:02Z >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | description | >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | links | [{'href': ' >>>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', >>>>>>>> 'rel': 'self'}, {'href': ' >>>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', >>>>>>>> 'rel': 'stack'}, {'href': ' >>>>>>>> http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', >>>>>>>> 'rel': 'nested'}] | >>>>>>>> | logical_resource_id | kube_masters >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | required_by | ['kube_cluster_deploy', >>>>>>>> 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | resource_name | kube_masters >>>>>>>> >>>>>>>> >>>>>>>> 
>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | resource_status | CREATE_IN_PROGRESS >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | resource_status_reason | state changed >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | resource_type | OS::Heat::ResourceGroup >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> | updated_time | 2021-10-18T06:44:02Z >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> | >>>>>>>> >>>>>>>> +------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >>>>>>>> >>>>>>>> Vikarna >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Regards, >>>>>>> >>>>>>> Syed Ammad Ali >>>>>>> >>>>>> -- >>> Regards, >>> >>> Syed Ammad Ali >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peljasz at yahoo.co.uk Tue Oct 26 16:54:14 2021 From: peljasz at yahoo.co.uk (lejeczek) Date: Tue, 26 Oct 2021 17:54:14 +0100 Subject: Guest's secondary/virtual IP In-Reply-To: References: Message-ID: On 25/10/2021 15:20, lejeczek wrote: > Hi guys. > > What I expected turns out not to be enough, must be > something trivial - what am I missing? > I set a port with --allowed-address and on the > instance/guest using the port I did: > -> $ ip add add 10.0.1.99/24 dev eth1 > yet that IP other guest cannot reach. > > many thanks, L. > turns out, even though I do not admin but only consume that deployment, is was "trivial" router missing on my part/end. thanks, L. From Venkata.Krishna.Reddy at ibm.com Tue Oct 26 16:54:49 2021 From: Venkata.Krishna.Reddy at ibm.com (Venkata Krishna Reddy) Date: Tue, 26 Oct 2021 16:54:49 +0000 Subject: Devstack stack run issue Message-ID: An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Oct 26 17:05:09 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 26 Oct 2021 17:05:09 +0000 Subject: Devstack stack run issue In-Reply-To: References: Message-ID: <20211026170509.7eiwleemkohf6bcl@yuggoth.org> [keeping originator in Cc since they don't seem to be subscribed] On 2021-10-26 16:54:49 +0000 (+0000), Venkata Krishna Reddy wrote: > While running stack on latest devstack, ended up with the > following error message: [...] > Oct 26 16:36:49 ubuntu nova-compute[337228]: ERROR > oslo_service.service nova.exception.InvalidCPUInfo: Configured > CPU model: Nehalem is not compatible with host CPU. Please > correct your config and try again. Unacceptable CPU info: CPU > doesn't have compatibility. > > A week ago, the stack was successful and it is failing now. [...] This is due to https://review.opendev.org/815020 which merged a week ago to update the default CPU model in DevStack with one which will work for CentOS 9. 
See also this mailing list post and its replies which go into greater detail on the situation: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025500.html You can probably override LIBVIRT_CPU_MODEL in your configuration setting the old default of "none" if you want the old behavior for now, though there are likely better long-term solutions. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Tue Oct 26 17:19:15 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 26 Oct 2021 10:19:15 -0700 Subject: Devstack stack run issue In-Reply-To: <20211026170509.7eiwleemkohf6bcl@yuggoth.org> References: <20211026170509.7eiwleemkohf6bcl@yuggoth.org> Message-ID: <34a031f6-3efd-4a0a-9529-bf50212b330a@www.fastmail.com> On Tue, Oct 26, 2021, at 10:05 AM, Jeremy Stanley wrote: > [keeping originator in Cc since they don't seem to be subscribed] > > On 2021-10-26 16:54:49 +0000 (+0000), Venkata Krishna Reddy wrote: >> While running stack on latest devstack, ended up with the >> following error message: > [...] >> Oct 26 16:36:49 ubuntu nova-compute[337228]: ERROR >> oslo_service.service nova.exception.InvalidCPUInfo: Configured >> CPU model: Nehalem is not compatible with host CPU. Please >> correct your config and try again. Unacceptable CPU info: CPU >> doesn't have compatibility. >> >> A week ago, the stack was successful and it is failing now. > [...] > > This is due to https://review.opendev.org/815020 which merged a week > ago to update the default CPU model in DevStack with one which will > work for CentOS 9. See also this mailing list post and its replies > which go into greater detail on the situation: > > http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025500.html > > You can probably override LIBVIRT_CPU_MODEL in your configuration > setting the old default of "none" if you want the old behavior for > now, though there are likely better long-term solutions. You would need to set LIBVIRT_CPU_MODE to "none" then LIBVIRT_CPU_MODEL should be ignored (note MODE vs MODEL). If that doesn't work it is a bug and we should fix it. The other thing that might be worth double checking on is what CPU you are running devstack on. We expected that this would work for any reasonably new x86 based system. Aarch64 has an explicit override for the MODEL later in the code. I suppose it is possible we broke this for powerpc and need to add an explicit override similar to aarch64? If you are on x86 more details about the specific CPU might be useful to determine if there are problems with the expected Nehalem compatibility. Clark From gmann at ghanshyammann.com Tue Oct 26 17:44:45 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Oct 2021 12:44:45 -0500 Subject: [all][tc] Technical Committee next weekly meeting on Oct 28th at 1500 UTC Message-ID: <17cbdb45881.de21028388758.8633614813058075848@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for Oct 28th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, Oct 27th, at 2100 UTC. 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From james.slagle at gmail.com Tue Oct 26 18:28:39 2021 From: james.slagle at gmail.com (James Slagle) Date: Tue, 26 Oct 2021 14:28:39 -0400 Subject: [TripleO] PTG Summary Message-ID: TripleO had PTG sessions on Monday, Tuesday, and Wednesday, followed by a Directord+Task-Core hackfest on Thursday. The main etherpad for the PTG is at https://etherpad.opendev.org/p/tripleo-yoga-topics (includes links to recordings) Overall Summary =============== Our sessions were well attended with good cross collaboration from external stakeholders (storage, network, compute, etc). We had a good mix of topics, which I felt helped maintain interest and engagement. Topics ranged from future proposals, integrations, CI, and infrastructure. Our attendance was between 30-40 for most sessions. The underlying theme of the week was about the proposed migration from Ansible to Directord+task-core. Overall, I felt we accomplished clarifying the scope of the proposed change and articulating the reasons why the change is necessary, as well as understanding the benefits. With the heavy current investment in Ansible specific components, we emphasized the need for migration tooling wherever possible. No significant objections were raised about the approach. However, we settled on the next steps being agreement in the spec, and agreement around the end to end integration in TripleO. We continued to tie most other topics we discussed back to this topic to make sure we were considering the future impacts. The hackfest on Thursday further helped to understand the proposal and illustrated several key concepts that are presently missing in TripleO. Namely, those of dynamic service dependency management, job result caching and fingerprinting, and the messaging architecture. We had around 30 participants during the hackfest. We took a team photo on Thursday which we can be seen at https://slagle.fedorapeople.org/TripleO-Yoga-PTG-Team-Photo.png Thanks for the participation everyone! Sessions ======== Monday ------ 1310-1350 Directord/task-core introduction and overview https://etherpad.opendev.org/p/tripleo-directord-task-core The overview of Directord and task-core was presented, and we clarified the scope of the change, which is primarily about replacing Ansible in TripleO. As part of the change, we will need to migrate from all Ansible specific implementations (playbooks, roles, tasks, modules, actions, etc) to pure Python or Directord orchestrations and components. We emphasized the need for migration tooling wherever possible. No major objections were raised, and I felt the team understood the proposed benefits. Next steps are to finish the spec and continue on the TripleO integration. 1405-1445 task-core task graphs and execution concepts https://etherpad.opendev.org/p/tripleo-task-core-execution Further illustrated the benefits and functionality gained with task-core, particularly around the dynamic dependency ordering for tasks. Explained the benefits for services with a lot of external deployment tasks, such as Octavia. 1500-1540 (CI) Tripleo Health https://etherpad.opendev.org/p/tripleo-ptg-tripleo-health Really great work by the CI team on http://ci-health.tripleo.org/ and https://opendev.org/openstack/tripleo-ci-health-queries. We need to get the whole team committed to maintaining the queries for the most benefit. 
1540-1620 OS Migrate update/feedback session https://etherpad.opendev.org/p/yoga-os-migrate Update on new features added to os-migrate, and version/compatibility testing. Tuesday ------- 1310-1350 ceph integration https://etherpad.opendev.org/p/tripleo-ceph-yoga Plan to remove support for ceph-ansible, and add cephadm ingress with haproxy+keepalived. Upgrade challenges around migrating from PCS to ingress. We also need to keep ceph integration as a priority and proving ground for migration to Directord+task-core. 1405-1445 healthchecks https://etherpad.opendev.org/p/tripleo-healthchecks-yoga Participation from Sean Mooney from the Nova team to present the proposed solution in Nova. Agreement this would work well for TripleO. We need to continue the collaboration and participate in the pending Nova spec to help drive the healthcheck model. 1500-1540 (HA) Pacemaker brainstorming https://etherpad.opendev.org/p/tripleo-yoga-pacemaker General consensus is that we still need pacemaker for some tasks. But, we need to evaluate each usage on a case by case basis and migrate services out of pacemaker where there is opportunity. 1540-1620 FIPS Updates & Secret Management https://etherpad.opendev.org/p/yoga-tripleo-fips Plan to go forward with the spec for secret management in TripleO. Wednesday --------- 1310-1350 (CI) tripleo-repos updates https://etherpad.opendev.org/p/ci-tripleo-repos Update on tripleo-repos and new functionality (yum config, tripleo-get-hash). Covered ansible integration (for CI). 1405-1445 (CI) Update on CI and centos9 prep work https://etherpad.opendev.org/p/centos-stream-9-upstream Discussed needed CI jobs and CentOS 9 timeline. 1500-1540 Dependency clashes installing branched and non-branches repos in Zuul jobs https://etherpad.opendev.org/p/branched-unbranched-dependency-clashes Discussed limiting the usage of upper constraints as the overall approach. 1540-1620 RDO Releases Agreed that providing the list of last tested hashes to RDO for releases that TripleO doesn't support is sufficient. We reviewed the RDO/TripleO support statement around intermediary releases that TripleO doesn't plan to maintain. https://review.rdoproject.org/r/c/rdo-website/+/36342 Hackfest ======== The team participated in the hackfest by following through the etherpad at https://etherpad.opendev.org/p/tripleo-directord-hackfest. Several participants were able to get the environment up. We were able to work through various issues as they arose. Good discussion around UX improvements that are needed, and the apparent benefits for different services due to the dynamic dependencies, job caching, execution time, etc. Also shared the POC of the end to end integration within TripleO. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Tue Oct 26 19:27:48 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Tue, 26 Oct 2021 21:27:48 +0200 Subject: [kolla-ansible][neutron]BGP Problems Message-ID: <91918168-AF6B-43C5-BAC2-3880C6055848@univ-grenoble-alpes.fr> Hello, I'm trying to get BGP / BGP speaker / DRAgent to work with Neutron on one side and a Cisco router on the other. I work under Centos / Kolla-ansible / Wallaby. I adapt this technical documentation (https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html) to my configuration but I am having difficulties (the cisco router sends a TCP SYN frame to the router in Openstack, TCP port 179, but the router sends back an RST. 
I don't understand why, no security on the router port). Do you have any links to help, tutorials, that would be of great help to me. Thanks in advance. Franck From rigault.francois at gmail.com Tue Oct 26 20:43:52 2021 From: rigault.francois at gmail.com (Francois) Date: Tue, 26 Oct 2021 22:43:52 +0200 Subject: [kolla-ansible][neutron]BGP Problems In-Reply-To: <91918168-AF6B-43C5-BAC2-3880C6055848@univ-grenoble-alpes.fr> References: <91918168-AF6B-43C5-BAC2-3880C6055848@univ-grenoble-alpes.fr> Message-ID: Hello On Tue, 26 Oct 2021 at 21:29, Franck VEDEL wrote: > > Hello, > I'm trying to get BGP / BGP speaker / DRAgent to work with Neutron on one side and a Cisco router on the other. I work under Centos / Kolla-ansible / Wallaby. > I adapt this technical documentation (https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html) to my configuration but I am having difficulties (the cisco router sends a TCP SYN frame to the router in Openstack, TCP port 179, but the router sends back an RST. I don't understand why, no security on the router port). My understanding is that the agent will connect to the switch, there is no connection initiated from the switch to the agent. There should be a "neighbor... passive" option in the BGP configuration on the switch that you could set to ensure it's not trying to initiate the connection. If you follow this doc (and you use ovs as driver, not ovn) you should see the dragent connecting to the switch and publishing routes if any. hope that helps > Do you have any links to help, tutorials, that would be of great help to me. Thanks in advance. > > Franck > > Francois From jasonanderson at uchicago.edu Tue Oct 26 22:27:23 2021 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Tue, 26 Oct 2021 22:27:23 +0000 Subject: [kuryr] Using kuryr-kubernetes CNI without neutron agent(s)? Message-ID: <9FF2989B-69FA-494B-B60A-B066E5BF13DA@uchicago.edu> Hello all, I?m interested in letting Neutron provide the network configuration frontend for networks realized on k8s kubelets. I have been reading a lot about kuryr-kubernetes and it looks like it fits the bill, but some of the older architecture diagrams I?ve seen indicate that OVS and the neutron-openvswitch-agent (or similar) must also run on the kubelet node. Is this still accurate? I am hoping to avoid this because my understanding is that running the OVS agent means giving the kubelet node access to RabbitMQ and potentially storing admin keystone creds on the node as well. Can kuryr-kubernetes work without such an agent co-located? Thanks! Jason Anderson --- Chameleon DevOps Lead Department of Computer Science, University of Chicago Mathematics and Computer Science, Argonne National Laboratory From gmann at ghanshyammann.com Wed Oct 27 04:15:50 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Oct 2021 23:15:50 -0500 Subject: [tc][all][ Yoga Virtual PTG Summary Message-ID: <17cbff61d41.12128830b97927.6784906069242137279@ghanshyammann.com> Hello Everyone, I am summarizing the Technical Committee discussion that happened in the Yoga cycle PTG last week. Recordings: Day1: https://www.youtube.com/watch?v=uQRKYummsHM Day2: https://www.youtube.com/watch?v=IHj7cpBlbYo Day3: https://www.youtube.com/watch?v=RT492bi6Xto TC + Community leaders interaction ------------------------------------------ This is our first session in PTG to interact with community leaders and ask for feedback on TC. 
Below are the topics we discussed in this feedback session * What is TC responsibility: This is the first feedback we started this session, to discuss what is the role of TC other than accepting/ removing the projects and community-wide goals. does TC need to have a strong vision and start working on that? Why not TC do more technical work than spend time on composing resolutions or so? All those were valid points and we all agreed that TC should be doing more technical work and providing more guidance to the community on the cross-project works for example RBAC, Unified limit (more of similar way API SIG is doing on API side). * Improving community-wide goal process: Another feedback was on the effectiveness of community-wide goals. A few of the community-wide goals were not ready when they were selected. Even a few like pdf goal changed the implementation in milestone-2 and projects have to update the previous implementation. Or sometimes we try too much to do instead of small steps like IPV6 testing. With all feedback, we agreed to have a more concrete checklist when we select any goal and also check project bandwidth to complete the work. * Fewer contributors: Fewer contributors issue is from many projects, to solve that at some extent, TC will encourage projects to apply in outreachy/mentorship programs where few projects have received a good amount of help. * Improving CI: Improving CI is the next topic which we discussed in TC slots on Thursday. One key topic in this was "Request to be able to recheck on a single test instead of the whole check test" as many projects like Ironic gate has more often neutron failure and they have to recheck the complete set of jobs. The issue can be from DHCP, memory etc. We do not think allowing job-specific recheck is the right direction here and instead, we should encourage projects to solve the failures we get in the gate which can improve the code stability. Also, we should encourage projects to monitor the gate failures. >From the above discussion, we are taking below two action items for TC: 1. TC to start composing the technical guidelines on various topics like 'unified limit' etc similar way API SIG has done. 2. TC to add a more specific checklist for community-wide goal readiness and make them more successful as they are currently. TC interop sync on guidelines ----------------------------------- We discussed interop tests requirements. One of the points brought that if starter kits impact interop guidelines to consider but it does not. Can cinder be separated from "OpenStack Powered Compute" if it can be used as standalone? The Interop team need to check if there are cloud without cinder and how they apply for interop certification. On tests requirement by interop, our current process stays same that if interop defined the capabilities in new guidelines then the test writing can be requested or even an interop group can propose the test in Tempest or Tempest plugins. Technical Writing (doc) SIG need a chair and more maintainers ----------------------------------------------------------------------- We are merging this SIG repo into TC responsibility. There are not many activities nowadays on this SIG or repo it own so it should not impact the TC work as such. We agreed to distribute this SIG repo in * TC: api-site, constellations, and openstack-manuals repos under * FC SIG: contributor-guide, training-guides, and upstream-institute-virtual-environment * For training-labs which is for COA, I will check with Foundation on ownership or it. 
Thanks, Stephen for serving as chair for this SIG. Yoga testing runtime: Review & Update if needed ---------------------------------------------------------- We discussed/iterate over the Yoga testing runtime on new distro and python versions to tests. Distro to tests: * We agreed on including "Debian Bullseye" as a distro in the testing runtime. * centos9-stream release time is not clear so we are keeping centos8-stream for now. * centos8 is planned to be EOL at end of 2021, If you see failure on or stable branch testing, then you can update it to centos8-stream (devstack support is already there) But we do not need to change the testing runtime for stable branches as such. Python Versions to test: * py3.8 as voting (no change from currently tested) * py3.9 as voting * py3.10 as non voting. * keep py3.6 until we move from centos8-stream to centos9-stream I will update the testing runtime doc and job template for that. TC tags review ------------------ You might have seen the email from yoctozepto about a query on TC tag usage[1]. As we did not get any response in the email, we did final checks in PTG. We agreed to start the tag framework removal proposal on ML and if there is no objection for a week or so then we will remove it. Improvement in project governance ------------------------------------------ In the past couple of cycles, we have seen few projects that are not so active or not getting more attention from users. This discussion was to brainstorm the process to keep monitoring such projects and especially when there is a new project application how we can monitor those in initial time to check if they have all the required things working to stay as official OpenStack project. The first thing we discussed introducing the different levels of the project something similar to CNCF (sandbox, incubated, graduated) or OpenStack used to have (incubation -> incubated projects). But it is not known if these levels will work well in our governance. With the low cost to maintain projects in OpenStack governance (as long as they are active). Being an OpenStack official project is one way to give that project more visibility in the market and attract users/contributors. With all these points, we agreed not to introduce the different levels in the project hierarchy but we will introduce a new process of 'Tech Pre-review' for new project applications so that we can monitor the new projects if they are doing good and all minimum required things are working smoothly. As the next step, we will be working to define the "Tech Pre-review" checklist and timeline etc. Release name process change ----------------------------------- We want to discuss two points 1. allow community members to vote on release name 2. some pre-trademark (non-paid) checks, in this topic but due to time limit we discussed first only. As you might know, we changed the electorate on release naming from community to TC which has been objected to in the past when we changed it and during the naming voting also in the past couple of cycles. We have simplified the name proposal criteria in the past which solved the issue that occurred in past on TC removing the name. We agreed on: * The community can propose names that meet the convention. * The TC will ensure that the name is not offensive based on community feedback. * Allow community members to vote on the proposed names. * After Z cycle, go back to 'A' and go through the alphabet (obviously don't re-use names). 
Pain Point targeting ----------------------- Two key pain points we collected during TC weekly meetings were 1. Rabbitmq 2. OpenStackClient support. But the etherpad has a lot more pain points not only from projects but also from operator or project has collected those from operators. We need to iterate the list again which we will continue doing in TC weekly meetings. TC members will encourage projects to fix or have a look into those pain points if the operator has added them. TC also can help a few of the pain points to solve consistently by defining the technical guidelines for such common pain points. As the next step, we will continue reviewing the list in etherpad[2]. 'Skyline' application to become an official OpenStack Project ------------------------------------------------------------------------ We discussed the new project application Skyline which is a dashboard project for OpenStack. One of the key points raised is about non-integrated projects UI-plugins (like every project has horizon plugins for their UI). How and who is going to implement the new UI plugins on the project's side? Also, there is more work needed on repos and python packaging side for which discussion is going in separate mailing thread[3]. Overall, we are ok to proceed with the proposal once the python package/repos things are set up correctly. Discussion on new RBAC ------------------------------ There were a lot of discussions on the new RBAC on various project sessions. it was clear from the discussion or points raised in those sessions that we are not using the system scope in the way it should be used. We should keep system-level operation only for system scoped and project level for the project. If any system users want to perform the project level operation then they can request the project token and then request. Also, 'admin' term at all levels domain, system, and the project is the most confusing term and we would like to change that to 'manager' or so. The discussion is not yet over and due to the time limit, we wanted to continue it after PTG too. As a summary, we selected this (current way) as Yoga cycle goal which we will re-iterate. I will set up the call to continue the discussion and send on openstack-discuss mailing list. For a detailed summary, please refer to L122 in policy popup etherpad[4]. Community-wide goals ---------------------------- Based on the feedback received in TC+community leaders sessions, we would like to make sure we have the proper checklist to know this goal is ready to start (in terms of implementation details as well as projects bandwidth). Rico and I had worked on such a checklist in the past but did not push for review which we should do now. We are also restructuring the community-wide goal timeline by decoupling them from the release cycle. At a time we will select one or more goals to start with one or more milestones. They can be multi-cycle goals or single based on the goal nature and work it needs. This way we will make sure that we complete the work needed to be done for that goal irrespective of the current cycle release. Xena Retrospective ----------------------- What went well? * Xena cycle tracker status: Completed the 9 items out of 13, others in progress. * Think having the weekly meetings went well instead of just office hours. * Got good feedback from the community about what they expect from the TC. * We have been good at keeping our open reviews low. What to change or any other feedback? * Be more technical, provide more technical guidance. 
* Get a feedback loop from the community and work on that feedback. Pop Up Team Check In ---------------------------- * Secure RBAC / Policy As discussed in RBAC discussion, there are a lot of work still needed along with community-wide goal. We decided to continue this pop-up team for the Yoga cycle at least. * Encryption Slow progress but still more work to do (waiting on changes in Barbican). The Team is hoping for more progress in the coming cycle. We will continue this pop-up team. Meeting check ----------------- We will continue the weekly meeting at the same time they are currently and also continue the monthly video call. We will continue using google meet until we figure out the recording things in meetpad or so. We did not discuss or check on office hours whether to continue those or not which we will continue in weekly meetings. User Survey Feedback Responses --------------------------------------- We will start working on analyzing the feedback from the user survey. Jay will summarize the feedback and then we will start composing the action items accordingly. k8s Steering Committee Cross community Discussions ----------------------------------------------------------------- This is our last hour of the PTG. This is cross-community sessions we continue organizing in PTG. From Kubernetes steering committee, we had Dims, Bob, Christoph joined along with OpenStack TC and a few non-tc members. The first topic we talked about is the different levels in the CNCF ladder and project confirmation process. This area is mostly taken care of by CNCF and Kubernetes community more concentrate on the technical part. Like OpenStack, Kubernetes is also seeing a slow down in project/SIG activities especially on community leaders. Kubernetes is trying mentor/cohort and more distributed roles by documenting the responsibility of that role that will help contributors to discuss it with their employer. OpenStack Upstream investment opportunity is one of the things we discussed and how successful that is. One of the benefits of this is we can communicate the help-needed things on various platforms/forums. ELK is one (the only as of now) example of success. Like OpenStack, Kubernetes also lacks in CI/CD resourcing and funding. This seems like a common problem in many OSS communities. Yoga cycle tracker ---------------------- Like we did in Xena cycle, TC is defining the cycle-wise tracker to complete the set of things TC would like to work on. For the Yoga cycle also we will continue with the tracker. Based on the PTG discussion, we have collected 9 items for the Yoga cycle to complete. I have captured those in the below etherpad which will be using it for progress tracking also. - https://etherpad.opendev.org/p/tc-yoga-tracker [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-September/024804.html [2] https://etherpad.opendev.org/p/pain-point-elimination [3] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025397.html [4] https://etherpad.opendev.org/p/policy-popup-yoga-ptg [5] https://etherpad.opendev.org/p/tc-xena-tracker -gmann From adriant at catalystcloud.nz Wed Oct 27 05:41:29 2021 From: adriant at catalystcloud.nz (Adrian Turjak) Date: Wed, 27 Oct 2021 18:41:29 +1300 Subject: Adjutant needs contributors (and a PTL) to survive! Message-ID: <3026c411-688b-c773-8577-e8eed40b995a@catalystcloud.nz> Hello fellow OpenStackers! 
I'm moving on to a different opportunity and my new role will not involve OpenStack, and there sadly isn't anyone at Catalystcloud who will be able to take over project responsibilities for Adjutant any time soon (not that I've been very onto it lately). As such Adjutant needs people to take over, and lead it going forward. I believe the codebase is in a reasonably good position for others to pick up, and I plan to go through and document a few more of my ideas for where it should go in storyboard so some of those plans exist somewhere should people want to pick up from where I left off before going fairly silent upstream. Plus if people want/need to they can reach out to me or add me to code review and chances are I'll comment/review because I do care about the project. Or I may contract some time to it. There are a few clouds running Adjutant, and people who have previously expressed interest in using it, so if you still are, the project isn't in a bad place at all. The code is stable, and the last few major refactors have cleaned up much of my biggest pain points with it. Best of luck! - adriant From mdulko at redhat.com Wed Oct 27 07:54:55 2021 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Wed, 27 Oct 2021 09:54:55 +0200 Subject: [kuryr] Using kuryr-kubernetes CNI without neutron agent(s)? In-Reply-To: <9FF2989B-69FA-494B-B60A-B066E5BF13DA@uchicago.edu> References: <9FF2989B-69FA-494B-B60A-B066E5BF13DA@uchicago.edu> Message-ID: On Tue, 2021-10-26 at 22:27 +0000, Jason Anderson wrote: > Hello all, > > I?m interested in letting Neutron provide the network configuration > frontend for networks realized on k8s kubelets. I have been reading a > lot about kuryr-kubernetes and it looks like it fits the bill, but > some of the older architecture diagrams I?ve seen indicate that OVS > and the neutron-openvswitch-agent (or similar) must also run on the > kubelet node. Is this still accurate? I am hoping to avoid this > because my understanding is that running the OVS agent means giving > the kubelet node access to RabbitMQ and potentially storing admin > keystone creds on the node as well. > > Can kuryr-kubernetes work without such an agent co-located? Hi, So short answer is - yes it can. And the long answer is that there are some requirements for that to work. It's called the nested mode [1] and currently we treat it as the major way to run K8s with kuryr-kubernetes. The assumption is that the Kubernetes nodes run as VMs on OpenStack and Kuryr services will run as Pods on those nodes. Kuryr requires the main ports of the VMs to be Neutron trunk ports and will create the ports for the Pods as subports of these trunk ports. This removes the need for neutron-openvswitch- agent to exist on the K8s node as Kuryr can bind such ports on its own. The requirements are as follows: * K8s nodes run as VMs on OpenStack. * Trunk extension is enabled in Neutron. * VMs have access to OpenStack API endpoints. * You need Octavia to support K8s Services. In terms of admin credentials - those should not be needed in nested mode, just regular tenant credentials should be fine. If your K8s nodes are required to be baremetal, then maybe using OVN as a Neutron backend instead of OVS will solve the RabbitMQ problem? I think you'll still need the ovn-controller to run on the K8s nodes to bind the Neutron ports there. And I think this mode might actually require admin credentials in order to attach ports to nodes. 
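To make the trunk requirement a bit more concrete, here is a rough sketch of the
Neutron objects involved in nested mode (all names and IDs below are placeholders,
purely for illustration - Kuryr manages the subports itself, you don't create them
by hand):

# check that the trunk extension is available in Neutron
openstack extension list --network | grep -i trunk

# the K8s node VM boots from the parent port of a trunk
openstack network trunk create --parent-port <node-vm-port-id> k8s-node-0-trunk

# per-Pod ports end up as VLAN subports of that trunk, roughly:
openstack network trunk set \
  --subport port=<pod-port-id>,segmentation-type=vlan,segmentation-id=101 \
  k8s-node-0-trunk

Again, that is only to show what "trunk ports" means here - the actual wiring is
done by kuryr-controller and kuryr-cni using regular tenant credentials.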
[1] https://docs.openstack.org/kuryr-kubernetes/latest/nested_vlan_mode.html Thanks, Micha? > Thanks! > Jason Anderson > > --- > > Chameleon DevOps Lead > Department of Computer Science, University of Chicago > Mathematics and Computer Science, Argonne National Laboratory > From noonedeadpunk at ya.ru Wed Oct 27 09:33:57 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 27 Oct 2021 12:33:57 +0300 Subject: [openstack-ansible][osa][ptg] Yoga PTG summary Message-ID: <374511635322241@mail.yandex.ru> An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Wed Oct 27 10:20:52 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Wed, 27 Oct 2021 12:20:52 +0200 Subject: [kolla-ansible][neutron]BGP Problems In-Reply-To: References: <91918168-AF6B-43C5-BAC2-3880C6055848@univ-grenoble-alpes.fr> Message-ID: <8A20BB40-AE78-4DF0-918E-F270E35DC7B6@univ-grenoble-alpes.fr> Thanks Fran?ois for your help.I added the following command: neighbor 172.16.201.121 transport connection-mode passive Celan is not working. On the other hand, between 2 cisco routers, I have no problem. I'm missing something. I continue to search even if the documents I find do not correspond to my situation. Franck > Le 26 oct. 2021 ? 22:43, Francois a ?crit : > > Hello > On Tue, 26 Oct 2021 at 21:29, Franck VEDEL > wrote: >> >> Hello, >> I'm trying to get BGP / BGP speaker / DRAgent to work with Neutron on one side and a Cisco router on the other. I work under Centos / Kolla-ansible / Wallaby. >> I adapt this technical documentation (https://docs.openstack.org/neutron/latest/admin/config-bgp-dynamic-routing.html) to my configuration but I am having difficulties (the cisco router sends a TCP SYN frame to the router in Openstack, TCP port 179, but the router sends back an RST. I don't understand why, no security on the router port). > > My understanding is that the agent will connect to the switch, there > is no connection initiated from the switch to the agent. There should > be a "neighbor... passive" option in the BGP configuration on the > switch that you could set to ensure it's not trying to initiate the > connection. > If you follow this doc (and you use ovs as driver, not ovn) you should > see the dragent connecting to the switch and publishing routes if any. > > hope that helps > > >> Do you have any links to help, tutorials, that would be of great help to me. Thanks in advance. >> >> Franck >> >> > Francois -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Wed Oct 27 11:11:02 2021 From: openinfradn at gmail.com (open infra) Date: Wed, 27 Oct 2021 16:41:02 +0530 Subject: How to control data write into a multiattach volume Message-ID: Hi, I have created a multiattach volume and attached it to several Windows VMs/Instances. Is there away in OpenStack to drop all data written by each VM/instances? I am supposed to periodically detach and attach the multiattach volume. Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Wed Oct 27 12:39:05 2021 From: mkopec at redhat.com (Martin Kopec) Date: Wed, 27 Oct 2021 14:39:05 +0200 Subject: [qa][ptg] PTG Summary Message-ID: Hi everyone, thank you all who participated in the PTG discussions and shared their thoughts and opinions. It's very much appreciated! 
Here is the summary of the topics we have discussed [1]: * Migration of devstack and Tempest tests to new secure RBAC ** Tempest side appear to be ready, further work is required on Devstack's side. * Xena retrospective ** Among good things belong: *** a few long term unfinished effort got completed *** a participation in an open source contest, which also helped in the above point *** a great response time in devstack patches *** quite prompt fixing devstack gate issues - thank you all who are active in Devstack, great job! ** Bad things to mention: *** not enough active contributors in Tempest and especially in Patrole project *** long review time in Tempest and Patrole * Cleanup of duplicated scenario.manager ** we will start with removing duplicated scenario.manager (Tempest's stable interface) methods from plugins ** this follows the 'making tempest.scenario.manager interface stable' effort done ~2 cycles ago * Test inheritance conventions, are there any ** we discussed usage of DDT library - it has turned out it complicates things when the test names are generated dynamically (it's hard to track the tests then) and because in DDT case the tests are not associated with any UUID ** * FIPS support ** current issues with FIPS at [3] - raised by Ade Lee ** the plan here from high level point of view is we'll use new jobs (with FIPS enabled) in Tempest so that we can identify and fix the tests that don't comply with FIPS * Patrole stable release updates and discussion ** A big question has been raised, do we have someone who can drive the patrole's migration to new RBAC? That's important because as mentioned before, there aren't many contributors in Patrole. ** There was an idea raised to retire the project - mainly because it's hard to maintain it (not enough contributors) and because the gates were broken for several months during the Xena cycle and mostly none had noticed - which raises another question - is the project needed/used by anyone? The discussed topics are transformed into priority items [2] we will be focusing on this cycle. [1] https://etherpad.opendev.org/p/qa-yoga-ptg [2] https://etherpad.opendev.org/p/qa-yoga-priority [3] https://etherpad.opendev.org/p/state-of-fips-in-openstack-ci-yoga Regards, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Oct 27 13:16:50 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 27 Oct 2021 10:16:50 -0300 Subject: [cinder] Bug deputy report for week of 10-27-2021 Message-ID: This is a bug report from 10-13-2021-15-09 to 10-27-2021. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.launchpad.net/cinder/+bug/1948916 'Inconsistency for resource column between quota_usages and reservations table'. Unassigned. - https://bugs.launchpad.net/cinder/+bug/1947518 '[RBD] Cinder started requiring write access to glance images of RBD pool'. Unassigned. - https://bugs.launchpad.net/cinder/+bug/1947123 '[IBM GPFS] NFS driver throws an error while volume creation in copy-on-write mode'. Assigned to Digvijay Ukirde. - https://bugs.launchpad.net/cinder/+bug/1947134 '[IBM GPFS] NFS driver does the same filesystem check locally and fails initializing the driver in copy-on-write mode'. Assigned to Digvijay Ukirde. 
- https://bugs.launchpad.net/cinder/+bug/1946262 'NetApp ONTAP is losing QoS with assisted storage migration'. Assigned to Felipe Rodrigues. - https://bugs.launchpad.net/cinder/+bug/1948934 '[Victoria] Backported test requires newer ddt than included in test-requirements.txt'. Unassigned. Incomplete - https://bugs.launchpad.net/cinder/+bug/1947066 'snapshot status hang in backing-up'. Unassigned. Low - https://bugs.launchpad.net/cinder/+bug/1948507 'TypeError when computing QOS feature name'. Unassigned. - https://bugs.launchpad.net/cinder/+bug/1947834 'wrong href returned in volume detail'. Unassigned. -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Oct 27 14:37:52 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 27 Oct 2021 09:37:52 -0500 Subject: How to control data write into a multiattach volume In-Reply-To: References: Message-ID: <20211027143752.GA1904683@sm-workstation> On Wed, Oct 27, 2021 at 04:41:02PM +0530, open infra wrote: > Hi, > > I have created a multiattach volume and attached it to several Windows > VMs/Instances. > Is there away in OpenStack to drop all data written by each VM/instances? > > I am supposed to periodically detach and attach the multiattach volume. > > Regards, > Danishka You need to use a cluster-aware filesystem if you are attaching the same volume to multiple VM instances. With Windows I think this is typically done using Cluster Shared Volumes: https://docs.microsoft.com/en-us/windows-server/failover-clustering/failover-cluster-csvs You will get data corruption if you use a standard NTFS filesystem. From thierry at openstack.org Wed Oct 27 15:28:40 2021 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 27 Oct 2021 17:28:40 +0200 Subject: [largescale-sig] Next meeting: Oct 27th, 15utc In-Reply-To: <923c8336-01ee-83f9-d6ad-997c11bb78bb@openstack.org> References: <923c8336-01ee-83f9-d6ad-997c11bb78bb@openstack.org> Message-ID: <1ab92b2c-4db1-5106-0f7b-4530f386d98e@openstack.org> We held our meeting today. Small attendance again. We discussed the topic of our next "Large Scale OpenStack" episode on OpenInfra.Live, which should happen on Dec 9. The Q&A format used for the last episode was great, so I will look for other users interested in asking questions around a specific topic. We might also do an episode around tricks and tools that large deployments use for day to day ops. You can read the meeting logs at: https://meetings.opendev.org/meetings/large_scale_sig/2021/large_scale_sig.2021-10-27-15.01.html Our next IRC meeting will be Nov 10, at 1500utc on #openstack-operators on OFTC, with Belmiro Moreira chairing. Regards, -- Thierry Carrez (ttx) From amy at demarco.com Wed Oct 27 15:32:59 2021 From: amy at demarco.com (Amy Marrich) Date: Wed, 27 Oct 2021 10:32:59 -0500 Subject: Xena RDO Release Announcement Message-ID: If you're having trouble with the formatting, this release announcement is available online at https://blogs.rdoproject.org/2021/10/rdo-xena-released/ ---- *RDO Xena Released* The RDO community is pleased to announce the general availability of the RDO build for OpenStack Xena for RPM-based distributions, CentOS Stream and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Xena is the 24th release from the OpenStack project, which is the work of more than 1,000 contributors from around the world. 
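If you just want to kick the tires on a single CentOS Stream 8 node, the usual Cloud
SIG workflow should apply to Xena as well (a sketch only - the package name assumes
the normal centos-release-openstack-<release> naming convention; see the Get Started
section below for the full story):

sudo dnf install -y centos-release-openstack-xena
sudo dnf update -y
sudo dnf install -y openstack-packstack
sudo packstack --allinone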
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-xena/. The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Stream and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS users looking to build and maintain their own on-premise, public or hybrid clouds. All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first. PLEASE NOTE: RDO Xena provides packages for CentOS Stream 8 only. Please use the Victoria release for CentOS Linux 8 which will reach End Of Life (EOL) on December 31st, 2021 (https://www.centos.org/centos-linux-eol/). *Interesting things in the Xena release include:* - The python-oslo-limit package has been added to RDO. This is the limit enforcement library which assists with quota calculation. Its aim is to provide support for quota enforcement across all OpenStack services. - The glance-tempest-plugin package has been added to RDO. This package provides a set of functional tests to validate Glance using the Tempest framework. - TripleO has been moved to an independend release model (see section TripleO in the RDO Xena release) The highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/xena/highlights.html *TripleO in the RDO Xena release:* - In the Xena development cycle, TripleO has moved to an Independent release model( https://specs.openstack.org/openstack/tripleo-specs/specs/xena/tripleo-independent-release.html ) and will only maintain branches for selected OpenStack releases. In the case of Xena, TripleO will not support the Xena release. For TripleO users in RDO, this means that: - RDO Xena will include packages for TripleO tested at OpenStack Xena GA time. - Those packages will not be updated during the entire Xena maintenance cycle. - RDO will not be able to included patches required to fix bugs in TripleO on RDO Xena. - The lifecycle for the non-TripleO packages will follow the code merged and tested in upstream stable/xena branches. - There will not be any tripleoxena container images built/pushed, so interested users will have to do their own container builds when deploying xena. You can find details about this [in RDO webpage](https://www.rdoproject.org/documentation/tripleo-in-xena/) *Contributors* During the Xena cycle, we saw the following new RDO contributors: - Chris Sibbitt - Gregory Thiemonge - Julia Kreger - Leif Madsen Welcome to all of you and Thank You So Much for participating! But we wouldn?t want to overlook anyone. A super massive Thank You to all 41 contributors who participated in producing this release. 
This list includes commits to rdo-packages, rdo-infra, and redhat-website repositories: - Alan Bishop - Alan Pevec - Alex Schultz - Alfredo Moralejo - Amy Marrich (spotz) - Bogdan Dobrelya - Chandan Kumar - Chris Sibbitt - Damien Ciabrini - Dmitry Tantsur - Eric Harney - Ga?l Chamoulaud - Giulio Fidente - Goutham Pacha Ravi - Gregory Thiemonge - Grzegorz Grasza - Harald Jensas - James Slagle - Javier Pe?a - Jiri Podivin - Joel Capitao - Jon Schlueter - Julia Kreger - Lee Yarwood - Leif Madsen - Luigi Toscano - Marios Andreou - Mark McClain - Martin Kopec - Mathieu Bultel - Matthias Runge - Michele Baldessari - Pranali Deore - Rabi Mishra - Riccardo Pittau - Sagi Shnaidman - S?awek Kap?o?ski - Steve Baker - Takashi Kajinami - Wes Hayutin - Yatin Karel *The Next Release Cycle* At the end of one release, focus shifts immediately to the next release i.e Yoga. *Get Started* To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works. Finally, for those that don?t have any hardware or physical resources, there?s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world. *Get Help* The RDO Project has our users at lists.rdoproject.org for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev at lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing lists archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org. The #rdo channel on OFTC IRC is also an excellent place to find and give help. We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel in Libera.Chat network, and #tripleo on OFTC), however we have a more focused audience within the RDO venues. *Get Involved* To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation. Join us in #rdo and #tripleo on the OFTC IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonanderson at uchicago.edu Wed Oct 27 16:00:36 2021 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Wed, 27 Oct 2021 16:00:36 +0000 Subject: [kuryr] Using kuryr-kubernetes CNI without neutron agent(s)? In-Reply-To: References: <9FF2989B-69FA-494B-B60A-B066E5BF13DA@uchicago.edu> Message-ID: <1043CEB9-7386-4842-8912-9DE021DB9BD0@uchicago.edu> Micha?, thank you very much for the reply! > On Oct 27, 2021, at 2:54 AM, Micha? Dulko wrote: > > On Tue, 2021-10-26 at 22:27 +0000, Jason Anderson wrote: >> Hello all, >> >> I?m interested in letting Neutron provide the network configuration >> frontend for networks realized on k8s kubelets. 
I have been reading a >> lot about kuryr-kubernetes and it looks like it fits the bill, but >> some of the older architecture diagrams I?ve seen indicate that OVS >> and the neutron-openvswitch-agent (or similar) must also run on the >> kubelet node. Is this still accurate? I am hoping to avoid this >> because my understanding is that running the OVS agent means giving >> the kubelet node access to RabbitMQ and potentially storing admin >> keystone creds on the node as well. >> >> Can kuryr-kubernetes work without such an agent co-located? > > Hi, > > So short answer is - yes it can. And the long answer is that there are > some requirements for that to work. > > It's called the nested mode [1] and currently we treat it as the major > way to run K8s with kuryr-kubernetes. The assumption is that the > Kubernetes nodes run as VMs on OpenStack and Kuryr services will run as > Pods on those nodes. Kuryr requires the main ports of the VMs to be > Neutron trunk ports and will create the ports for the Pods as subports > of these trunk ports. This removes the need for neutron-openvswitch- > agent to exist on the K8s node as Kuryr can bind such ports on its own. > > The requirements are as follows: > * K8s nodes run as VMs on OpenStack. > * Trunk extension is enabled in Neutron. > * VMs have access to OpenStack API endpoints. > * You need Octavia to support K8s Services. > > In terms of admin credentials - those should not be needed in nested > mode, just regular tenant credentials should be fine. > > If your K8s nodes are required to be baremetal, then maybe using OVN as > a Neutron backend instead of OVS will solve the RabbitMQ problem? I > think you'll still need the ovn-controller to run on the K8s nodes to > bind the Neutron ports there. And I think this mode might actually > require admin credentials in order to attach ports to nodes. I will have to explore this a bit more, I think. Running the nested configuration unfortunately will not work for me. For context, I?m exploring how to configure a ?public? cluster where worker nodes are enrolled over a WAN. K8s can accommodate this better than OpenStack due to the drastically reduced attack surface on the kubelet compared to, e.g., a Nova compute node. Yet, OpenStack does have some very attractive ?frontend? systems and interfaces (I count Neutron among these) and it?s attractive to somehow allow connectivity between VM instances launched on a ?main? centrally-controlled cloud and container instances launched on the workers exposed over a WAN. (Edge computing) OVN may help if it can remove the need for RabbitMQ, which is probably the most difficult aspect to remove from OpenStack?s dependencies/assumptions, yet also one of the most pernicious from a security angle, as an untrusted worker node can easily corrupt the control plane. Re: admin creds, maybe it is possible to carefully craft a role that only works for some Neutron operations and put that on the worker nodes. I will explore. Cheers! > [1] > https://docs.openstack.org/kuryr-kubernetes/latest/nested_vlan_mode.html > > Thanks, > Micha? > >> Thanks! >> Jason Anderson >> >> --- >> >> Chameleon DevOps Lead >> Department of Computer Science, University of Chicago >> Mathematics and Computer Science, Argonne National Laboratory >> > > > From mdulko at redhat.com Wed Oct 27 17:03:55 2021 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Wed, 27 Oct 2021 19:03:55 +0200 Subject: [kuryr] Using kuryr-kubernetes CNI without neutron agent(s)? 
In-Reply-To: <1043CEB9-7386-4842-8912-9DE021DB9BD0@uchicago.edu> References: <9FF2989B-69FA-494B-B60A-B066E5BF13DA@uchicago.edu> <1043CEB9-7386-4842-8912-9DE021DB9BD0@uchicago.edu> Message-ID: On Wed, 2021-10-27 at 16:00 +0000, Jason Anderson wrote: > Micha?, thank you very much for the reply! > > > On Oct 27, 2021, at 2:54 AM, Micha? Dulko wrote: > > > > On Tue, 2021-10-26 at 22:27 +0000, Jason Anderson wrote: > > > Hello all, > > > > > > I?m interested in letting Neutron provide the network configuration > > > frontend for networks realized on k8s kubelets. I have been reading a > > > lot about kuryr-kubernetes and it looks like it fits the bill, but > > > some of the older architecture diagrams I?ve seen indicate that OVS > > > and the neutron-openvswitch-agent (or similar) must also run on the > > > kubelet node. Is this still accurate? I am hoping to avoid this > > > because my understanding is that running the OVS agent means giving > > > the kubelet node access to RabbitMQ and potentially storing admin > > > keystone creds on the node as well. > > > > > > Can kuryr-kubernetes work without such an agent co-located? > > > > Hi, > > > > So short answer is - yes it can. And the long answer is that there are > > some requirements for that to work. > > > > It's called the nested mode [1] and currently we treat it as the major > > way to run K8s with kuryr-kubernetes. The assumption is that the > > Kubernetes nodes run as VMs on OpenStack and Kuryr services will run as > > Pods on those nodes. Kuryr requires the main ports of the VMs to be > > Neutron trunk ports and will create the ports for the Pods as subports > > of these trunk ports. This removes the need for neutron-openvswitch- > > agent to exist on the K8s node as Kuryr can bind such ports on its own. > > > > The requirements are as follows: > > * K8s nodes run as VMs on OpenStack. > > * Trunk extension is enabled in Neutron. > > * VMs have access to OpenStack API endpoints. > > * You need Octavia to support K8s Services. > > > > In terms of admin credentials - those should not be needed in nested > > mode, just regular tenant credentials should be fine. > > > > If your K8s nodes are required to be baremetal, then maybe using OVN as > > a Neutron backend instead of OVS will solve the RabbitMQ problem? I > > think you'll still need the ovn-controller to run on the K8s nodes to > > bind the Neutron ports there. And I think this mode might actually > > require admin credentials in order to attach ports to nodes. > > I will have to explore this a bit more, I think. Running the nested configuration > unfortunately will not work for me. For context, I?m exploring how to configure > a ?public? cluster where worker nodes are enrolled over a WAN. K8s can > accommodate this better than OpenStack due to the drastically reduced attack > surface on the kubelet compared to, e.g., a Nova compute node. Yet, OpenStack > does have some very attractive ?frontend? systems and interfaces (I count > Neutron among these) and it?s attractive to somehow allow connectivity between > VM instances launched on a ?main? centrally-controlled cloud and container > instances launched on the workers exposed over a WAN. (Edge computing) Hm, so a mixed OpenStack-K8s edge setup, where edge sites are Kubernetes deployments? 
We've taken a look at some edge use cases with Kuryr and one problem people see is that if an edge site becomes disconnected from the main site, Kuryr will not allow creation of new Pods and Services as it needs connection to Neutron and Octavia APIs for that. If that's not a problem, have you given any thought to running distributed compute nodes [1] as edge sites and then Kubernetes on top of them? This architecture should be doable with Kuryr (probably with minor changes). > OVN may help if it can remove the need for RabbitMQ, which is probably the > most difficult aspect to remove from OpenStack's dependencies/assumptions, > yet also one of the most pernicious from a security angle, as an untrusted > worker node can easily corrupt the control plane. It's just Kuryr which needs access to the credentials, so possibly you should be able to isolate them, but I get the point, containers are worse at isolation than VMs. > Re: admin creds, maybe it is possible to carefully craft a role that only works > for some Neutron operations and put that on the worker nodes. I will explore. I think those settings [2] are what would require the highest Neutron permissions in the baremetal case. [1] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html [2] https://opendev.org/openstack/kuryr-kubernetes/src/branch/master/kuryr_kubernetes/controller/drivers/neutron_vif.py#L125-L127 > Cheers! > > [1] > > https://docs.openstack.org/kuryr-kubernetes/latest/nested_vlan_mode.html > > > > Thanks, > > Michał > > > > > Thanks! > > > Jason Anderson > > > > > > --- > > > > > > Chameleon DevOps Lead > > > Department of Computer Science, University of Chicago > > > Mathematics and Computer Science, Argonne National Laboratory > > > > > > > > > > From Venkata.Krishna.Reddy at ibm.com Wed Oct 27 07:04:33 2021 From: Venkata.Krishna.Reddy at ibm.com (Venkata Krishna Reddy) Date: Wed, 27 Oct 2021 07:04:33 +0000 Subject: Devstack stack run issue In-Reply-To: <34a031f6-3efd-4a0a-9529-bf50212b330a@www.fastmail.com> References: <34a031f6-3efd-4a0a-9529-bf50212b330a@www.fastmail.com>, <20211026170509.7eiwleemkohf6bcl@yuggoth.org> Message-ID: An HTML attachment was scrubbed... URL: From anyrude10 at gmail.com Wed Oct 27 12:04:56 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 27 Oct 2021 17:34:56 +0530 Subject: [tripleo] x509 certificate Issue in Overcloud Deployment Message-ID: Hi Team, I am trying to install the TripleO setup on OpenStack Train. While deploying the overcloud, I am getting an x509 certificate error when downloading the images from quay.io. This is because of a proxy certificate in the LAB. Before installing the Undercloud, I manually placed the certificate in order to avoid this issue. How can I take care of placing such certificates in the case of an overcloud? Regards Anirudh Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 27 18:32:37 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 27 Oct 2021 13:32:37 -0500 Subject: [all][tc] Continuing the RBAC PTG discussion Message-ID: <17cc3068765.11a9586ce162263.9148179786177865922@ghanshyammann.com> Hello Everyone, As decided at the PTG, we will continue the RBAC discussion from where we left off. We will have a video call next week based on the availability of most of the interested members. Please vote for your available time in the doodle poll below by Thursday (or Friday morning central time).
- https://doodle.com/poll/6xicntb9tu657nz7 NOTE: this is not specific to the TC or to people working on the RBAC effort, but is more for the wider community to give feedback and finalize the direction (like what we did in the PTG session). Meanwhile, feel free to review lance's updated proposal for the community-wide goal - https://review.opendev.org/c/openstack/governance/+/815158 -gmann From aschultz at redhat.com Wed Oct 27 18:34:03 2021 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 27 Oct 2021 12:34:03 -0600 Subject: [tripleo] x509 certificate Issue in Overcloud Deployment In-Reply-To: References: Message-ID: On Wed, Oct 27, 2021 at 12:20 PM Anirudh Gupta wrote: > > Hi Team, > > I am trying to install the Tripleo setup on the Openstack Train. > > While deploying the overcloud, I am getting an x509 certificate error in downloading the images from quay.io > This is because of a proxy certificate in the LAB. > > Before installing Undercloud, I manually placed the certificate in order to avoid this issue. > How can I take care of placing such certificates in case of an overcloud. > The best solution is to use push_destination: true in your ContainerImagePrepare. This is the --local-push-destination option for `openstack tripleo container image prepare default`. This will load the containers on the undercloud and use the undercloud as the source for the containers in the overcloud. https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/container_image_prepare.html#undercloud-registry Alternatively, use the DockerInsecureRegistryAddress parameter to specify the registry you are using. It's a list of host or host:port entries that you will be fetching containers from. https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/architecture.html#containers-runtime-deployment-and-configuration-notes > Regards > Anirudh Gupta From radoslaw.piliszek at gmail.com Wed Oct 27 18:50:59 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 27 Oct 2021 20:50:59 +0200 Subject: [tc][operators][all] TC tags framework to be dropped Message-ID: Dear OpenStack Operators, Due to no response to the original query about the usefulness of the TC tags framework [1] (which was repeated in each and every following TC newsletter), the TC has decided to drop the tags framework entirely. [2] [3] [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-September/024804.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025554.html [3] https://etherpad.opendev.org/p/tc-yoga-ptg Kind regards, -yoctozepto From jibsan94 at gmail.com Wed Oct 27 19:39:55 2021 From: jibsan94 at gmail.com (Jibsan Joel Rosa Toirac) Date: Wed, 27 Oct 2021 15:39:55 -0400 Subject: Router interfaces are down Message-ID: Hello, I'm trying to route all the requests from a private vlan to the internet. I have a private network and all the Virtual Machines inside the subnet can communicate with each other, but if I ping the Internet, it doesn't work. When I check the router_external it says all the interfaces are DOWN. I have searched everywhere but I can't find a solution for this. Thank you for your time -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertarun05 at gmail.com Wed Oct 27 19:59:35 2021 From: ertarun05 at gmail.com (tarun singhal) Date: Thu, 28 Oct 2021 01:29:35 +0530 Subject: SSSE3 CPU flag not available Message-ID: Hello, I have an OpenStack cluster which was created using devstack.
And recently I was involved in a project in which there is a dependency on the SSSE3 CPU flag in an OpenStack VM instance, to accomplish certain tasks. I searched many posts but couldn't find a definitive solution. The suggestions mentioned didn't work for me either. I have added the lines below in /etc/nova/nova.conf and restarted devstack at n-cpu.service: cpu_mode = host-model cpu_model_extra_flags = ssse3 Please let me know a possible way to enable the SSSE3 CPU flag in a VM instance. Thanks, Tarun -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Wed Oct 27 20:52:11 2021 From: helena at openstack.org (helena at openstack.org) Date: Wed, 27 Oct 2021 15:52:11 -0500 (CDT) Subject: [PTL] PTG Summaries Blog Message-ID: <1635367931.11755687@apps.rackspace.com> Hi PTLs, I have loved seeing all the PTG summaries that have been rolling in! I am working to gather all of them and create a blog post for [ openstack.org/blog ]( http://openstack.org/blog ). If you would like your summary to be included in the blog, please post it to the mailing list and send me the link to the archived email by Tuesday, November 2nd. Thank you all for your involvement in the PTG and I look forward to seeing all the summaries! Cheers, Helena -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Oct 27 23:10:12 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 27 Oct 2021 18:10:12 -0500 Subject: [all][tc] Technical Committee next weekly meeting on Oct 28th at 1500 UTC In-Reply-To: <17cbdb45881.de21028388758.8633614813058075848@ghanshyammann.com> References: <17cbdb45881.de21028388758.8633614813058075848@ghanshyammann.com> Message-ID: <17cc404aaad.1114d2be0167882.9115941053545510888@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting, scheduled at 1500 UTC in the #openstack-tc IRC channel. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Project Health checks framework ** https://etherpad.opendev.org/p/health_check ** https://review.opendev.org/c/openstack/governance/+/810037 * Stable team process change ** https://review.opendev.org/c/openstack/governance/+/810721 * Adjutant need PTLs and maintainers ** http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025555.html * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Tue, 26 Oct 2021 12:44:45 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for Oct 28th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, Oct 27th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From skaplons at redhat.com Thu Oct 28 06:33:25 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 28 Oct 2021 08:33:25 +0200 Subject: Router interfaces are down In-Reply-To: References: Message-ID: <4535087.mvXUDI8C0e@p1> Hi, On środa, 27 października 2021 21:39:55 CEST Jibsan Joel Rosa Toirac wrote: > Hello, I'm trying to route all the requests from a private vlan to > internet.
I have a private network and all the Virtual Machines inside the > subnet config can do everything between they, but if I ping to Internet, it > doesn't work. > > When I see the router_external it says all the interfaces are DOWN. By router_external, You mean neutron router, right? If so, what kind of router it is, centralized HA or non-HA, or maybe DVR? Is router scheduled properly to some node? You can check that with command like "neutron l3-agent-list-hosting-router ". > > I have search in everywhere but I can't find a solution for this. > > Thank you for your time -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From bsanjeewa at kln.ac.lk Thu Oct 28 07:10:05 2021 From: bsanjeewa at kln.ac.lk (Buddhika Godakuru) Date: Thu, 28 Oct 2021 12:40:05 +0530 Subject: kolla-ansible wallaby manila ceph pacific In-Reply-To: References: Message-ID: Hi Wodel, So the issue is beyond my current understanding. I am interested in this because I am planning to deploy one next month. Unfortunately, currently I do not have hardware to deploy one and see. Sorry couldn't be more helpful. On Thu, 28 Oct 2021 at 03:24, wodel youchi wrote: > Hi, > > To *Buddhika Godakuru* > I took a look into the manila-share docker deployed on my platforme and it > contains the patch you mentioned in [1]. > The Manila code does integrate the ceph test for version. > > [1] https://review.opendev.org/c/openstack/manila/+/797955 > > > Regards. > > Le mar. 26 oct. 2021 ? 09:06, wodel youchi a > ?crit : > >> Hi, >> My deployment is from source, and I have little experience on how to >> rebuild docker images, I can perhaps pull new images (built recently I >> mean). >> I took a look into docker hub and there are new manila images pushed 2 >> days ago. >> >> Regards. >> >> Le lun. 25 oct. 2021 ? 19:23, Buddhika Godakuru a >> ?crit : >> >>> Is your deployment type is source or binary? >>> If it is binay, I wonder if this patch [1] is built into the repos. >>> If source, could you try rebuilding the manila docker images? >>> >>> [1] https://review.opendev.org/c/openstack/manila/+/797955 >>> >>> On Mon, 25 Oct 2021 at 20:21, wodel youchi >>> wrote: >>> >>>> Hi, >>>> >>>> I tried with pacific then with octopus, the same problem. >>>> The patch was applied to kolla-ansible. >>>> >>>> Regards. >>>> >>>> Le ven. 22 oct. 2021 00:34, Goutham Pacha Ravi >>>> a ?crit : >>>> >>>>> >>>>> >>>>> On Thu, Oct 21, 2021 at 1:56 AM wodel youchi >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I did that already, I changed the keyring to "*ceph auth >>>>>> get-or-create client.manila -o manila.keyring mgr 'allow rw' mon 'allow r'* >>>>>> " it didn't work, then I tried with ceph octopus, same error. >>>>>> I applied the patch, then I recreated the keyring for manila as >>>>>> wallaby documentation, I get the error "*Bad target type 'mon-mgr'*" >>>>>> >>>>> >>>>> Thanks, the error seems similar to this issue: >>>>> https://tracker.ceph.com/issues/51039 >>>>> >>>>> Can you confirm the ceph version installed? On the ceph side, some >>>>> changes land after GA and get back ported; >>>>> >>>>> >>>>> >>>>>> >>>>>> Regards. >>>>>> >>>>>> Le jeu. 21 oct. 2021 ? 05:29, Buddhika Godakuru >>>>>> a ?crit : >>>>>> >>>>>>> Dear Wodel, >>>>>>> I think this is because manila has changed the way how to set/create >>>>>>> auth ID in Wallaby for native CephFS driver. 
>>>>>>> For the patch to work, you should change the command >>>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>>>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>>> to something like, >>>>>>> ceph auth get-or-create client.manila -o manila.keyring mgr 'allow >>>>>>> rw' mon 'allow r' >>>>>>> >>>>>>> Please see Manila Wallaby CephFS Driver document [1] >>>>>>> >>>>>>> Hope this helps. >>>>>>> >>>>>>> Thank you >>>>>>> [1] >>>>>>> https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html#authorizing-the-driver-to-communicate-with-ceph >>>>>>> >>>>>>> On Wed, 20 Oct 2021 at 23:19, wodel youchi >>>>>>> wrote: >>>>>>> >>>>>>>> Hi, and thanks >>>>>>>> >>>>>>>> I tried to apply the patch, but it didn't work, this is the >>>>>>>> manila-share.log. >>>>>>>> By the way, I did change to caps for the manila client to what is >>>>>>>> said in wallaby documentation, that is : >>>>>>>> [client.manila] >>>>>>>> key = keyyyyyyyy..... >>>>>>>> >>>>>>>> * caps mgr = "allow rw" caps mon = "allow r"* >>>>>>>> >>>>>>>> [root at ControllerA manila]# cat manila-share.log >>>>>>>> 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] >>>>>>>> Skipping periodic task update_share_usage_size because it is disabled >>>>>>>> ...... >>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>>>>>>> exception.ShareBackendException(msg) >>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>>> volume >>>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>>> 'mon-mgr'. >>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>>> >>>>>>>> Regards >>>>>>>> >>>>>>>> Le mer. 20 oct. 2021 ? 00:14, Goutham Pacha Ravi < >>>>>>>> gouthampravi at gmail.com> a ?crit : >>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Oct 19, 2021 at 2:35 PM wodel youchi < >>>>>>>>> wodel.youchi at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> Has anyone been successful in deploying Manila wallaby using >>>>>>>>>> kolla-ansible with ceph pacific as a backend? >>>>>>>>>> >>>>>>>>>> I have created the manila client in ceph pacific like this : >>>>>>>>>> >>>>>>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow >>>>>>>>>> r, allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>>>>>> >>>>>>>>>> When I deploy, I get this error in manila's log file : >>>>>>>>>> Bad target type 'mon-mgr' >>>>>>>>>> Any ideas? >>>>>>>>>> >>>>>>>>> >>>>>>>>> Could you share the full log from the manila-share service? >>>>>>>>> There's an open bug related to manila/cephfs deployment: >>>>>>>>> https://bugs.launchpad.net/kolla-ansible/+bug/1935784 >>>>>>>>> Proposed fix: >>>>>>>>> https://review.opendev.org/c/openstack/kolla-ansible/+/802743 >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Regards. >>>>>>>>>> >>>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> >>>>>>> ??????? ????? ???????? >>>>>>> Buddhika Sanjeewa Godakuru >>>>>>> >>>>>>> Systems Analyst/Programmer >>>>>>> Deputy Webmaster / University of Kelaniya >>>>>>> >>>>>>> Information and Communication Technology Centre (ICTC) >>>>>>> University of Kelaniya, Sri Lanka, >>>>>>> Kelaniya, >>>>>>> Sri Lanka. 
>>>>>>> >>>>>>> Mobile : (+94) 071 5696981 >>>>>>> Office : (+94) 011 2903420 / 2903424 >>>>>>> >>>>>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>>>> University of Kelaniya Sri Lanka, accepts no liability for the >>>>>>> content of this email, or for the consequences of any actions taken on the >>>>>>> basis of the information provided, unless that information is subsequently >>>>>>> confirmed in writing. If you are not the intended recipient, this email >>>>>>> and/or any information it contains should not be copied, disclosed, >>>>>>> retained or used by you or any other party and the email and all its >>>>>>> contents should be promptly deleted fully from our system and the sender >>>>>>> informed. >>>>>>> >>>>>>> E-mail transmission cannot be guaranteed to be secure or error-free >>>>>>> as information could be intercepted, corrupted, lost, destroyed, arrive >>>>>>> late or incomplete. >>>>>>> >>>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>>>> >>>>>> >>> >>> -- >>> >>> ??????? ????? ???????? >>> Buddhika Sanjeewa Godakuru >>> >>> Systems Analyst/Programmer >>> Deputy Webmaster / University of Kelaniya >>> >>> Information and Communication Technology Centre (ICTC) >>> University of Kelaniya, Sri Lanka, >>> Kelaniya, >>> Sri Lanka. >>> >>> Mobile : (+94) 071 5696981 >>> Office : (+94) 011 2903420 / 2903424 >>> >>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> University of Kelaniya Sri Lanka, accepts no liability for the content >>> of this email, or for the consequences of any actions taken on the basis of >>> the information provided, unless that information is subsequently confirmed >>> in writing. If you are not the intended recipient, this email and/or any >>> information it contains should not be copied, disclosed, retained or used >>> by you or any other party and the email and all its contents should be >>> promptly deleted fully from our system and the sender informed. >>> >>> E-mail transmission cannot be guaranteed to be secure or error-free as >>> information could be intercepted, corrupted, lost, destroyed, arrive late >>> or incomplete. >>> >>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> >> -- ??????? ????? ???????? Buddhika Sanjeewa Godakuru Systems Analyst/Programmer Deputy Webmaster / University of Kelaniya Information and Communication Technology Centre (ICTC) University of Kelaniya, Sri Lanka, Kelaniya, Sri Lanka. Mobile : (+94) 071 5696981 Office : (+94) 011 2903420 / 2903424 -- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++? University of Kelaniya Sri Lanka, accepts no liability for the content of this email, or for the consequences of any actions taken on the basis of the information provided, unless that information is subsequently confirmed in writing. If you are not the intended recipient, this email and/or any information it contains should not be copied, disclosed, retained or used by you or any other party and the email and all its contents should be promptly deleted fully from our system and the sender informed. E-mail transmission cannot be guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Thu Oct 28 09:10:18 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 28 Oct 2021 11:10:18 +0200 Subject: [neutron] CI meeting time slot Message-ID: <13828406.O9o76ZdvQC@p1> Hi, As per PTG discussion, I prepared doodle to check what would be the best time slot for most of the people. Doodle is at [1]. Please fill it in if You are interested attending the weekly Neutron CI meeting. Meeting is on the #openstack-neutron irc channel, but we are also planning to do it on video from time to time. The timeslots in doodle have dates for next week, but please ignore them. It's just to pick the best time slot for the meeting to use it weekly. Next week meeting will be for sure still in the current time slot, which is Tuesday 1500 UTC. [1] https://doodle.com/poll/3n2im4ebyxhs45ne?utm_source=poll&utm_medium=link -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From lokendrarathour at gmail.com Thu Oct 28 09:13:53 2021 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Thu, 28 Oct 2021 14:43:53 +0530 Subject: [ Tacker ] Passing a shell script/parameters as a file in cloud config Message-ID: Hi, *In Tacker, while deploying VNFD can we pass a file ( parameter file) and keep it at a defined path using cloud-config way?* Like in *generic hot template*s, we have the below-mentioned way to pass a file directly as below: parameters: foo: default: bar resources: the_server: type: OS::Nova::Server properties: # flavor, image etc user_data: str_replace: template: {get_file: the_server_boot.sh} params: $FOO: {get_param: foo} *but when using this approach in Tacker BaseHOT it gives an error saying * "nstantiation wait failed for vnf 77693e61-c80e-41e0-af9a-a0f702f3a9a7, error: VNF Create Resource CREATE failed: resources.obsvrnnu62mb: resources.CAS_0_group.Property error: resources.soft_script.properties.config: No content found in the "files" section for get_file path: Files/scripts/install.py 2021-10-28 00:46:35.677 3853831 ERROR oslo_messaging.rpc.server " do we have a defined way to use the hot capability in TACKER? Defined Folder Structure for CSAR: . ??? BaseHOT ? ??? default ? ??? RIN_vnf_hot.yaml ? ??? nested ? ??? RIN_0.yaml ? ??? RIN_1.yaml ??? Definitions ? ??? RIN_df_default.yaml ? ??? RIN_top_vnfd.yaml ? ??? RIN_types.yaml ? ??? etsi_nfv_sol001_common_types.yaml ? ??? etsi_nfv_sol001_vnfd_types.yaml ??? Files ? ??? images ? ??? scripts ? ??? install.py ??? Scripts ??? TOSCA-Metadata ? ??? TOSCA.meta ??? UserData ? ??? __init__.py ? ??? lcm_user_data.py *Objective: * To pass a file at a defined path on the VDU after the VDU is instantiated/launched. -- ~ Lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Thu Oct 28 09:21:19 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 28 Oct 2021 02:21:19 -0700 Subject: [manila][ptg] Yoga cycle PTG summary Message-ID: Hello Zorillas and other awesome Stackers, Thank you for participating in the Yoga cycle Project Teams Gathering. A special thanks to the folks at the OpenInfra Foundation who provided the platform for this event! We brainstormed on numerous topics. You'll find the unabridged notes and links in the main etherpad [1]. 
Video recordings from the event are posted on our Youtube channel [2]. The following is my summary of the proceedings. Please feel free to follow up here, or on OFTC's #openstack-manila if you have any questions. == Xena cycle retrospective == Etherpad: https://etherpad.opendev.org/p/manila-xena-retrospective * We celebrated successful Outreachy internships (Kafilat Adeleke and Archana Kumari), completion of the senior design project of students from Boston University (Ashley Rodriguez, Nicole Chen and Mark Tony) and two Open Source Day mentoring events at the Grace Hopper Celebration. * We have long term contributors in kafilat and archanaserver, along the lines of ashrod98 (who is now a Red Hatter!), disap (at Microsoft!), kiwi_36 (continuing his unified limits work), maaritamm (at city network, is manila core!) and many others. Very proud of the effort the team puts in to attract and retain such talent in the community. * We also discussed the mentoring efforts coinciding with the Yoga release (Northeastern University Cloud Computing course project, Outreachy internships, others). * We anticipate more changes to the maintainers team in the Yoga cycle in our continued effort to build domain expertise around the team, and avoid reviewer burn out. * Thanks to the enthusiastic participation, and to a well groomed bug backlog (kudos, vhari) we had two successful bugsquash weeks and a hack-a-thon during the Xena cycle. We're pruning down the list of stale issues collaboratively. The team liked the frequency of these events. * We discussed the responses for the 2021 User Survey and key takeaways (need for active/active HA, unified limits, unified cli, virtiofs) * Action Items: ** Plan a hack-a-thon in Yoga (topics welcome) ** Update questions for the 2022 user survey ** Plan periodic collaborative code review events alongside the bug squash events to prevent languishing patches. == Share transfers == * haixin highlighted the need for APIs to safely transfer shares across project namespaces and discussed some caveats, and a rough workflow for the feature * To ensure secure multi-tenancy, many back end share drivers use the project_id metadata in their provisioning paths - so transferring resources must send appropriate notifications * For DHSS=True, transfering the share alone makes little sense without first transfering the share network onto which the share is exported - however, multiple shares can be associated with one share network, so this could need a multi-phase transfer needing an ability to rollback. * Action Items: * Propose a spec and discuss the multi-stage implementation of this feature == Project testing and Interoperability improvements == * vhari shared plans for the improvement of the interop guidelines associated with manila * We touched upon the work lkuchlan and vhari are doing to align optional feature testing with tempest's best practices * Third party CI maintainers were reminded that they must be running scenario tests * Action Items: * Team looks at the possible tests to add as advisory to the 2021.11 guideline: https://etherpad.opendev.org/p/refstack-test-analysis == CephFS driver and integration changes == Etherpad: https://etherpad.opendev.org/p/yoga-ptg-manila-cephfsnfs-cephadm * vkmc highlighted the changes intended in the Manila CephFS driver to adopt the ceph mgr nfs APIs instead of the DBUS interactions that the Ganesha interace is coded to perform. The ceph mgr NFS APIs would only work on cephadm-deployed ceph nfs (nfs-ganesha) clusters. 
* Ceph nfs can be deployed like any other ceph daemon in an active/active HA cluster. One concern was the backwards incompatible changes to default nfs-ganesha configuration - this is being worked out. * TripleO developers joined us to discuss deployment changes, and the upgrade concerns including the eventual change in the NFS VIP. * Action Items: * Share notes about the deployment and upgrade changes necessary to get manila's CephFS driver to work with an active/active NFS HA cluster for the benefit of operators as well as deployment tools like tripleo, kolla-ansible, juju charms, etc. == Manila deployment at the Edge == Etherpad: https://etherpad.opendev.org/p/yoga-ptg-manila-at-the-edge * I quizzed fultonj about the use cases and architecture of the Distributed Compute Node edge deployments that tripleo is capable of orchestrating. * DCN now supports persistent block storage, and enables setting up the service in a multi-backend configuration with local-to-compute backends associated with availability zones that represent the edge sites. * We captured work items to similarly support manila-share at edge sites in this architecture to enable RWX storage at edge sites, and brainstormed future possibilities around the feature (cross site share replication) * Action Items: - A spec to support manila-share in an active/active HA configuration - Testing template for A/A with vendor share drivers == Metadata APIs for all user facing resources == Etherpad: https://etherpad.opendev.org/p/yoga-manila-metadata * ashrod98 discussed the ongoing work to generalize user defined metadata to all end-user facing resources. Currently users can attach metadata to shares and access rules, and the service provides metadata to export locations. * There are changes to the initial design that were brainstormed wrt RBAC and service/operator metadata. * The new APIs are going to be supported in the openstackclient shell implementation * Action Items: * Reviewer attention needed for the changes to the metadata spec, and for new RBAC guarding operator metadata. == Manila UI updates == * vkmc, carloss and haixin presented the plan to update the UI's feature parity * We have two outreachy internship proposals around implementing new UX around share networks and for enhancing integration testing for manila-ui * Action items: * Outreachy contributor period is open, lookout for applicants seeking reviews * Identify trivial API features that can be supported with little effort for the Yoga cycle hack-a-thon == VirtIOFS == Etherpad: https://etherpad.opendev.org/p/nova-manila-virtio-fs-support-yptg-october-2021 * tbarron presented updates regarding the ongoing prototyping for nova's support of virtiofs. * There's a demo of how the feature's expected to work: https://asciinema.org/a/IZ7UrhwspxBN63XsOl9JrTcUX?speed=1.5 * The nova discussion (https://etherpad.opendev.org/p/nova-yoga-ptg) included mechanics of supporting read-only attachments, os-share library, mount tags and use cases for baremetal nodes. The specification will be refined to answer some of these questions. * This continues to be a multi-release effort as changes in libvirt, qemu, kvm were needed for live virtiofs attachments. * Action Items: * Review/Refine nova spec: https://review.opendev.org/c/openstack/nova-specs/+/813180 == Multiple share network subnets in an AZ == Etherpad: https://etherpad.opendev.org/p/ptg-multiple-subnet * nahimsouza and felipe_rodrigues presented the use case for share networks to allow multiple subnets per availability zone. 
* This proposal would make dual stack ipv4/ipv6 networking possible for DHSS=True * Users would also have the ability to modify share networks that were in use - new network allocations will be possible for existing share servers, however, network allocations cannot be deleted. * Concerns were raised regarding the user experience and error signalling when port allocations cannot be made from all relevant subnets. * Action items: * Determine what needs to happen when a subnet has run out of allocations * Propose a specification discussing the API and driver interface impact to accommodate this change == FIPS compatibility and compliance == Etherpad: https://etherpad.opendev.org/p/yoga-manila-FIPS * carloss, ashrod98 and ade_lee spoke of the cross project effort to get FIPS compatibility and compliance testing * We discussed manila's direct/indirect use of cryptographic libraries and FIPS compliant alternatives (md5 in code, and paramiko's use of non-fips compliant digests); manila-tempest-plugin needs fixes along the same lines since it relies on paramiko. * Currently, the team's writing new CI jobs to test compatibility - these should merge by the end of the Yoga release cycle; the goal for compliance would be the Zorilla release (yes, i said it). * Action items: * Work on FIPS compatibility jobs in the Yoga release cycle == Secure RBAC changes in Yoga == Etherpad: https://etherpad.opendev.org/p/yoga-ptg-manila-srbac * vhari and I rounded up the work done so far to update the default RBAC policies across manila APIs and highlighted known issues, and pending work items * Cross service arbitrations on behalf of a user are ongoing discussions elsewhere - examples of these include manila's generic driver where the user's namespace and quota are consumed to create the nas server, backing volume and networking * We are continuing to implement tempest tests and anticipate a large portion to be completed during the Yoga cycle. * We don't plan to support cloud profiles with the manilaclient shell. Users are encouraged to switch to the openstackclient shell to use cloud profiles. 
* Action items: * Close on known issues in manila code * Complete protection and tempest test coverage == Paying down some technical debt in Yoga == * python-manilaclient still uses keystoneclient instead of keystoneauth1 - vkmc and gouthamr will work on this early in the Yoga release cycle * sqlalchemy2.0 changes - ack'ing SADeprecationWarnings and switching to using the new db engine facade exposed by oslo.db - felipe_rodrigues and gouthamr will work on this during the Yoga cycle * these are wishlist items that need volunteers to drive them to completion: * the generic driver doesn't yet support online extensions - * the generic driver can support more than 26 volumes per share server - this is a wishlist item that needs a volunteer to drive it * we need to publish the container driver image to a container registry from its development within the manila-image-elements project == Backing up shares == Etherpad: https://etherpad.opendev.org/p/ptg-backup-manila * We were joined by operators and storage experts from SAP, CSI Piemonte, NetApp, Red Hat and CERN to discuss current use cases and methods to backing up manila shares * Existing data protection and disaster recovery tools within manila (creating and mounting snapshots, cloning snapshots to new shares, reverting snapshots in-place, replicating shares and snapshots) were compared against the need for scheduling, efficient, incremental, durable/ex-situ/non-homogenous backup destinations, whole container vs selective file backup and recovery. * Available DIY external tooling (restic, borg, urbackup, other) and wrappers were discussed * Creating a backup solution based off-of the manila-data service was considered in the past, the spec was abandoned. It could be revived; however, we could benefit from presenting the need, a state of affairs as they are before exploring solutions. * Action Items: * We'll write up a doc on data protection for manila shares and invite operators to review them and propose future plans; anyone interested in collaborating in this space can get in touch with felipe_rodrigues == OpenStackCLI updates == Etherpad: https://etherpad.opendev.org/p/manila-osc-yoga * Thanks to maaritamm's awesome work, and the enthusiastic hack-a-thon participation from the community, we're very close to parity between OSC and the manilaclient shell. So, the team decided to stop adding new features in the manilaclient shell as a way to gradually wean users off of it. * When using the manilaclient shell from the Yoga release, users will be shown a deprecation warning suggesting an eventual removal (looking at the "A" release for this one, thoughts are welcome). * maaritamm proposed a hack-a-thon for completing the functional testing around manilaclient's osc plugin. * The osc plugin also needs the api microversion negotiation bits that the manilaclient shell currently has. * Action items: * We assigned owners to pending reviews, and missing osc commands * Deprecate the native manilaclient shell. == Manila CSI updates == Etherpad: https://etherpad.opendev.org/p/yoga-ptg-manila-csi * gman0 and tbarron presented the state of the kitchen for manila-csi since the Xena PTG and shared future plans * share backup is now a focus area for pvs serviced by manila-csi - the design involves being able to mount manila snapshots as read only PVs on backup pods and extracting relevant files. 
* we discussed e2e testing that's being added to the cloud-provider-openstack repository and the results of the perf/scale testing with manila-csi on openshift == OpenStackSDK updates == Etherpad: https://etherpad.opendev.org/p/yoga-manila-openstacksdk-update * we've had several interns working on exposing manila APIs within the openstacksdk since the wallaby release * kafilat has numerous open changes, wrapped up a successful outreachy internship recently * megharth, Jiabo, tutkuna and rishabhdutta from Northeastern University picked up the implementation of the remaining API resources for the Yoga cycle * this continues to be a multi-cycle effort and the plan is to eventually use manila APIs via the openstacksdk within manila-ui, ansible-collections-openstack and manila's OSC plugin. * Action Items: * team will review open changes against the openstacksdk repository pertaining to manila Thanks for reading thus far. It's possible a lot can get lost in translation :) As a reminder, do check out the complete recordings on our Youtube channel! [2] -- Goutham Pacha Ravi PTL, OpenStack Manila [1] https://etherpad.opendev.org/p/yoga-ptg-manila [2] https://www.youtube.com/watch?v=wVAClI82Ths&list=PLnpzT0InFrqDhp1A3lBi_guLAmOZ4bAEc From zhangbailin at inspur.com Thu Oct 28 11:37:16 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Thu, 28 Oct 2021 11:37:16 +0000 Subject: [cyborg][ptg] Yoga PTG Summary Message-ID: Hi everyone! First of all I would like to thank everyone for taking time and attending session. I think we had pretty productive time and discussions. You may find discussion summaries below: * With nova-cyborg interaction ** Cyborg vGPU support, we write a spec that adds the prefilter and the traits against every Nova RP and then cyborg contributors to provide a subsequent spec for Cyborg using their own trait ** Continue to work on resume/suspend feature, add the unit tests and update the PoC codes ** Works on PMEM instance cold migration in Nova, the spec is already merged in Xena release, and it need to re-propose it in Yoga release * Introducing some new accelerators driver ** xilin FPGA Driver ** PMEM Driver ** Optimization the exist device function, such as FPGA program interface, GPU/vGPU support * New feature will be support in Yoga release ** Get device profile get by name. It?s need to add a microversion, because of the request path_url changed, proposed the spec but need to update *** SPEC URL: https://review.opendev.org/c/openstack/cyborg-specs/+/813183 ** Add disable/enable device status to mark the device whether can be use or not ***SPEC URL: https://review.opendev.org/c/openstack/cyborg-specs/+/815460 ** We would like to improve the parameter validation, consider checking the api parameters with schema ** Add batch query ARQs for more than one instance support in Get *One* Accelerator Request API * Docs improving, such as nova-cyborg interaction manual and API ref docs * Improve the exception mechanism, improve the efficiency of abnormal judgment in unit testing * Improve the abnormal instance handling scenarios, for example, when the host is disconnected and the device is damaged, how should we set the device status and the accelerator instance state at this time. brinzhang -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhangbailin at inspur.com Thu Oct 28 11:45:28 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Thu, 28 Oct 2021 11:45:28 +0000 Subject: =?gb2312?B?tPC4tDogW2N5Ym9yZ11bcHRnXSBZb2dhIFBURyBTdW1tYXJ5?= Message-ID: <019802e685a2441bbcd6f619d4f7b9a6@inspur.com> More details on etherpad: https://etherpad.opendev.org/p/cyborg-yoga-ptg brinzhang ???: Brin Zhang(???) ????: 2021?10?28? 19:37 ???: 'openstack-discuss at lists.openstack.org' ??: 'xin-ran.wang at intel.com' ; Alex Song (???) ; Jorhson Deng (???) ; Juntingqiu Qiujunting (???) ; 'eric_xiett at 163.com' ??: [cyborg][ptg] Yoga PTG Summary Hi everyone! First of all I would like to thank everyone for taking time and attending session. I think we had pretty productive time and discussions. You may find discussion summaries below: * With nova-cyborg interaction ** Cyborg vGPU support, we write a spec that adds the prefilter and the traits against every Nova RP and then cyborg contributors to provide a subsequent spec for Cyborg using their own trait ** Continue to work on resume/suspend feature, add the unit tests and update the PoC codes ** Works on PMEM instance cold migration in Nova, the spec is already merged in Xena release, and it need to re-propose it in Yoga release * Introducing some new accelerators driver ** xilin FPGA Driver ** PMEM Driver ** Optimization the exist device function, such as FPGA program interface, GPU/vGPU support * New feature will be support in Yoga release ** Get device profile get by name. It?s need to add a microversion, because of the request path_url changed, proposed the spec but need to update *** SPEC URL: https://review.opendev.org/c/openstack/cyborg-specs/+/813183 ** Add disable/enable device status to mark the device whether can be use or not ***SPEC URL: https://review.opendev.org/c/openstack/cyborg-specs/+/815460 ** We would like to improve the parameter validation, consider checking the api parameters with schema ** Add batch query ARQs for more than one instance support in Get *One* Accelerator Request API * Docs improving, such as nova-cyborg interaction manual and API ref docs * Improve the exception mechanism, improve the efficiency of abnormal judgment in unit testing * Improve the abnormal instance handling scenarios, for example, when the host is disconnected and the device is damaged, how should we set the device status and the accelerator instance state at this time. brinzhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Thu Oct 28 13:32:09 2021 From: mnasiadka at gmail.com (=?utf-8?Q?Micha=C5=82_Nasiadka?=) Date: Thu, 28 Oct 2021 15:32:09 +0200 Subject: [kolla] Yoga PTG summary Message-ID: <0376AF00-ADDA-49BF-BC03-FCB2F7D98B08@gmail.com> Hi everyone, Thank you all who participated in the PTG discussions and shared their thoughts and opinions. Here is the summary of the topics we have discussed: # General ## We agreed to stop using Launchpad?s Blueprints functionality (which was inconsistently used by us). ## We?ve discussed recently proposed services to Kolla: Adjutant has a complicated status now (vacant PTL seat and looking for contributors) Skyline Venus We agreed that a project can be added to Kolla/Kolla-Ansible once it fulfills following the criteria of being in OpenStack governance for 1 cycle and having 1 release. 
An exception to these criteria is when the patch is of good quality and there are core reviewers interested in getting this functionality merged (but still the project needs to be in OpenStack governance). ## We also discussed adding the OSISM collection of Grafana dashboards and Prometheus Alertmanager rules in order to improve the "default deployment" experience for Kolla-Ansible users. ## There was a decision taken to finally retire kolla-cli (since it has been deprecated a long time ago). # Kolla ## We discussed deprecating and dropping the binary type of Kolla images: Pros: * less CI load, less maintenance burden (for a limited Kolla team), users would be running more tested images (since Kolla/Kolla-Ansible CI runs source images CI jobs as voting) Cons: * The user survey showed that a considerable number of users are using them - unclear whether that is because it is the default or because they have chosen to do so. * Overriding package versions in the source variant can be more troublesome than it is in binary Action plan: * Improve documentation for source images, especially focusing on their advantages and what might change for current users of binary (when they make the transition) * Mark binary images as deprecated in the Yoga cycle (add a note about deprecation in kolla-build CLI output) * Improve override options for source images (upper-constraints, etc) ## The next discussion item was to use a common base (single Linux distribution) for Kolla images Action plan (to make it possible): * Fix Bifrost and OVN builds for Debian (which are broken now) * Create CI jobs for mixed host-os/in-container-image-os (including upgrade jobs) * Deprecate binary images after those two actions are done * Provide a decent plan with justification and operator feedback in https://etherpad.opendev.org/p/kolla-only-on-debian * Deprecate CentOS only images (like qdrouterd) At the same time we agreed on not pursuing CentOS Stream 9 or Ubuntu 22.04 LTS image builds, when those show up/are required. ## Migration from ElasticSearch to AWS OpenSearch (Elasticsearch fork after ES changed license) Action items: * Deprecate Elasticsearch in Yoga * Build OpenSearch in Yoga # Kolla-Ansible ## Podman support We agreed to implement it with as few Ansible playbook/role changes as possible (mainly rework kolla_docker.py) Action plan: * Reduce scope of the existing patch to deploy a single service (e.g. MariaDB) * Write up a rough implementation plan in https://etherpad.opendev.org/p/kolla-docker-systemd-podman * Add systemd units support for the existing Docker implementation * Add podman installation in Kolla-Ansible's baremetal role * Add podman support upon systemd support for Docker ## Change default deployment to use ML2/OVN (instead of ML2/OVS) We agreed on the criteria to do so: * Debian OVN packages in Yoga * Working and reliable migration path * A way to prevent accidental migration to OVN for existing deployments And until that is resolved - we're not going to pursue that decision.
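As a side note for operators reading this summary: until any such default change lands, the backend choice can be pinned explicitly so that a later upgrade never flips it by accident. A minimal globals.yml sketch, assuming the current kolla-ansible variable name (please verify against the documentation for your release):

# /etc/kolla/globals.yml (sketch only - variable name assumed from current kolla-ansible defaults)
# Keep the existing ML2/OVS backend pinned explicitly; setting "ovn" here would opt in to ML2/OVN instead.
neutron_plugin_agent: "openvswitch"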
## Keystone system scope continued Action plan: * Proposed to split into 3 parts ** use system scope for keystone admin user (done in Xena) ** assign admin role to service users with system scope ** provide flags to enable scope enforcement and new defaults ## Let?s Encrypt Resume efforts to implement it (since we upgraded to HaProxy 2.2 which enables seamless TLS key/cert replacement) ## More HA settings by default Action plan: * Add a docs HA page that describes typical Kolla-Ansible deployment (haproxy, galera cluster, etc) with references to other projects HA configurations (e.g. neutron, octavia) ## Rocky Linux Host OS support Proposal: Replace CentOS Stream support to Rocky Linux in the longer run Action plan: * Add support for Rocky Linux as Host OS (no Kolla container images) # Kayobe ## Multiple environments part 3 Proposed improvements: * merge OpenStack custom configs * merge Kolla Ansible group_vars * dependencies between environments * CI testing ## Support RAID with Bifrost via cleaning steps and deploy steps Action plan: * Add support to baremetal_node_action Ansible module for manual cleaning and passing deploy steps * Option: Enable automated cleaning to erase metadata * Add support for using this functionality to Bifrost * Add passing of deploy_steps and cleaning_steps from Kayobe to Bifrost ## Create collection(s) for external Ansible roles that Kayobe depends on (those from StackHPC namespace) * Group Kayobe external roles to collections (the whole ,,ecosystem?? of stackhpc.* roles) * Encourage Kayobe role users to move to collections * Add/improve CI testing for the collection For details please check the Kolla Yoga PTG etherpad: https://etherpad.opendev.org/p/kolla-yoga-ptg And see you on the weekly meetings: https://meetings.opendev.org/#Kolla_Team_Meeting Best Regards, Michal Nasiadka -------------- next part -------------- An HTML attachment was scrubbed... URL: From victoria at vmartinezdelacruz.com Thu Oct 28 16:10:13 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Thu, 28 Oct 2021 18:10:13 +0200 Subject: kolla-ansible wallaby manila ceph pacific In-Reply-To: References: Message-ID: Hi Wodel, A few things, can you share the specific version you are using for Ceph? You mentioned Ceph Octopus and Ceph Pacific, we need to make sure that the microversions are as follows Ceph Octopus minimum version 15.2.11 Ceph Pacific minimum version 16.2.1 Also, can you make sure that the updates on the caps were applied? Do this by doing "ceph auth ls" and checking the values you have for client.manila The output should be similar to what you see in [0] If those are ok, you would need to restart manila-share service and the issue should be resolved. Regards, Victoria [0] https://docs.ceph.com/en/latest/rados/operations/user-management/#list-users On Thu, Oct 28, 2021 at 9:19 AM Buddhika Godakuru wrote: > Hi Wodel, > So the issue is beyond my current understanding. > I am interested in this because I am planning to deploy one next month. > Unfortunately, currently I do not have hardware to deploy one and see. > Sorry couldn't be more helpful. > > On Thu, 28 Oct 2021 at 03:24, wodel youchi wrote: > >> Hi, >> >> To *Buddhika Godakuru* >> I took a look into the manila-share docker deployed on my platforme and >> it contains the patch you mentioned in [1]. >> The Manila code does integrate the ceph test for version. >> >> [1] https://review.opendev.org/c/openstack/manila/+/797955 >> >> >> Regards. >> >> Le mar. 26 oct. 2021 ? 
09:06, wodel youchi a >> ?crit : >> >>> Hi, >>> My deployment is from source, and I have little experience on how to >>> rebuild docker images, I can perhaps pull new images (built recently I >>> mean). >>> I took a look into docker hub and there are new manila images pushed 2 >>> days ago. >>> >>> Regards. >>> >>> Le lun. 25 oct. 2021 ? 19:23, Buddhika Godakuru a >>> ?crit : >>> >>>> Is your deployment type is source or binary? >>>> If it is binay, I wonder if this patch [1] is built into the repos. >>>> If source, could you try rebuilding the manila docker images? >>>> >>>> [1] https://review.opendev.org/c/openstack/manila/+/797955 >>>> >>>> On Mon, 25 Oct 2021 at 20:21, wodel youchi >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> I tried with pacific then with octopus, the same problem. >>>>> The patch was applied to kolla-ansible. >>>>> >>>>> Regards. >>>>> >>>>> Le ven. 22 oct. 2021 00:34, Goutham Pacha Ravi >>>>> a ?crit : >>>>> >>>>>> >>>>>> >>>>>> On Thu, Oct 21, 2021 at 1:56 AM wodel youchi >>>>>> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I did that already, I changed the keyring to "*ceph auth >>>>>>> get-or-create client.manila -o manila.keyring mgr 'allow rw' mon 'allow r'* >>>>>>> " it didn't work, then I tried with ceph octopus, same error. >>>>>>> I applied the patch, then I recreated the keyring for manila as >>>>>>> wallaby documentation, I get the error "*Bad target type 'mon-mgr'*" >>>>>>> >>>>>> >>>>>> Thanks, the error seems similar to this issue: >>>>>> https://tracker.ceph.com/issues/51039 >>>>>> >>>>>> Can you confirm the ceph version installed? On the ceph side, some >>>>>> changes land after GA and get back ported; >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> Regards. >>>>>>> >>>>>>> Le jeu. 21 oct. 2021 ? 05:29, Buddhika Godakuru >>>>>>> a ?crit : >>>>>>> >>>>>>>> Dear Wodel, >>>>>>>> I think this is because manila has changed the way how to >>>>>>>> set/create auth ID in Wallaby for native CephFS driver. >>>>>>>> For the patch to work, you should change the command >>>>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>>>>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>>>> to something like, >>>>>>>> ceph auth get-or-create client.manila -o manila.keyring mgr 'allow >>>>>>>> rw' mon 'allow r' >>>>>>>> >>>>>>>> Please see Manila Wallaby CephFS Driver document [1] >>>>>>>> >>>>>>>> Hope this helps. >>>>>>>> >>>>>>>> Thank you >>>>>>>> [1] >>>>>>>> https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html#authorizing-the-driver-to-communicate-with-ceph >>>>>>>> >>>>>>>> On Wed, 20 Oct 2021 at 23:19, wodel youchi >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hi, and thanks >>>>>>>>> >>>>>>>>> I tried to apply the patch, but it didn't work, this is the >>>>>>>>> manila-share.log. >>>>>>>>> By the way, I did change to caps for the manila client to what is >>>>>>>>> said in wallaby documentation, that is : >>>>>>>>> [client.manila] >>>>>>>>> key = keyyyyyyyy..... >>>>>>>>> >>>>>>>>> * caps mgr = "allow rw" caps mon = "allow r"* >>>>>>>>> >>>>>>>>> [root at ControllerA manila]# cat manila-share.log >>>>>>>>> 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] >>>>>>>>> Skipping periodic task update_share_usage_size because it is disabled >>>>>>>>> ...... 
>>>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>>>>>>>> exception.ShareBackendException(msg) >>>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>>>> volume >>>>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target >>>>>>>>> type 'mon-mgr'. >>>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>>>> >>>>>>>>> Regards >>>>>>>>> >>>>>>>>> Le mer. 20 oct. 2021 ? 00:14, Goutham Pacha Ravi < >>>>>>>>> gouthampravi at gmail.com> a ?crit : >>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, Oct 19, 2021 at 2:35 PM wodel youchi < >>>>>>>>>> wodel.youchi at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> Has anyone been successful in deploying Manila wallaby using >>>>>>>>>>> kolla-ansible with ceph pacific as a backend? >>>>>>>>>>> >>>>>>>>>>> I have created the manila client in ceph pacific like this : >>>>>>>>>>> >>>>>>>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow >>>>>>>>>>> r, allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>>>>>>> >>>>>>>>>>> When I deploy, I get this error in manila's log file : >>>>>>>>>>> Bad target type 'mon-mgr' >>>>>>>>>>> Any ideas? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Could you share the full log from the manila-share service? >>>>>>>>>> There's an open bug related to manila/cephfs deployment: >>>>>>>>>> https://bugs.launchpad.net/kolla-ansible/+bug/1935784 >>>>>>>>>> Proposed fix: >>>>>>>>>> https://review.opendev.org/c/openstack/kolla-ansible/+/802743 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Regards. >>>>>>>>>>> >>>>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> >>>>>>>> ??????? ????? ???????? >>>>>>>> Buddhika Sanjeewa Godakuru >>>>>>>> >>>>>>>> Systems Analyst/Programmer >>>>>>>> Deputy Webmaster / University of Kelaniya >>>>>>>> >>>>>>>> Information and Communication Technology Centre (ICTC) >>>>>>>> University of Kelaniya, Sri Lanka, >>>>>>>> Kelaniya, >>>>>>>> Sri Lanka. >>>>>>>> >>>>>>>> Mobile : (+94) 071 5696981 >>>>>>>> Office : (+94) 011 2903420 / 2903424 >>>>>>>> >>>>>>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>>>>> University of Kelaniya Sri Lanka, accepts no liability for the >>>>>>>> content of this email, or for the consequences of any actions taken on the >>>>>>>> basis of the information provided, unless that information is subsequently >>>>>>>> confirmed in writing. If you are not the intended recipient, this email >>>>>>>> and/or any information it contains should not be copied, disclosed, >>>>>>>> retained or used by you or any other party and the email and all its >>>>>>>> contents should be promptly deleted fully from our system and the sender >>>>>>>> informed. >>>>>>>> >>>>>>>> E-mail transmission cannot be guaranteed to be secure or error-free >>>>>>>> as information could be intercepted, corrupted, lost, destroyed, arrive >>>>>>>> late or incomplete. >>>>>>>> >>>>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>>>>> >>>>>>> >>>> >>>> -- >>>> >>>> ??????? ????? ???????? >>>> Buddhika Sanjeewa Godakuru >>>> >>>> Systems Analyst/Programmer >>>> Deputy Webmaster / University of Kelaniya >>>> >>>> Information and Communication Technology Centre (ICTC) >>>> University of Kelaniya, Sri Lanka, >>>> Kelaniya, >>>> Sri Lanka. 
>>>> >>>> Mobile : (+94) 071 5696981 >>>> Office : (+94) 011 2903420 / 2903424 >>>> >>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>> University of Kelaniya Sri Lanka, accepts no liability for the content >>>> of this email, or for the consequences of any actions taken on the basis of >>>> the information provided, unless that information is subsequently confirmed >>>> in writing. If you are not the intended recipient, this email and/or any >>>> information it contains should not be copied, disclosed, retained or used >>>> by you or any other party and the email and all its contents should be >>>> promptly deleted fully from our system and the sender informed. >>>> >>>> E-mail transmission cannot be guaranteed to be secure or error-free as >>>> information could be intercepted, corrupted, lost, destroyed, arrive late >>>> or incomplete. >>>> >>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>> >>> > > -- > > ??????? ????? ???????? > Buddhika Sanjeewa Godakuru > > Systems Analyst/Programmer > Deputy Webmaster / University of Kelaniya > > Information and Communication Technology Centre (ICTC) > University of Kelaniya, Sri Lanka, > Kelaniya, > Sri Lanka. > > Mobile : (+94) 071 5696981 > Office : (+94) 011 2903420 / 2903424 > > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > University of Kelaniya Sri Lanka, accepts no liability for the content of > this email, or for the consequences of any actions taken on the basis of > the information provided, unless that information is subsequently confirmed > in writing. If you are not the intended recipient, this email and/or any > information it contains should not be copied, disclosed, retained or used > by you or any other party and the email and all its contents should be > promptly deleted fully from our system and the sender informed. > > E-mail transmission cannot be guaranteed to be secure or error-free as > information could be intercepted, corrupted, lost, destroyed, arrive late > or incomplete. > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Wed Oct 27 21:54:01 2021 From: wodel.youchi at gmail.com (wodel youchi) Date: Wed, 27 Oct 2021 22:54:01 +0100 Subject: kolla-ansible wallaby manila ceph pacific In-Reply-To: References: Message-ID: Hi, To *Buddhika Godakuru* I took a look into the manila-share docker deployed on my platforme and it contains the patch you mentioned in [1]. The Manila code does integrate the ceph test for version. [1] https://review.opendev.org/c/openstack/manila/+/797955 Regards. Le mar. 26 oct. 2021 ? 09:06, wodel youchi a ?crit : > Hi, > My deployment is from source, and I have little experience on how to > rebuild docker images, I can perhaps pull new images (built recently I > mean). > I took a look into docker hub and there are new manila images pushed 2 > days ago. > > Regards. > > Le lun. 25 oct. 2021 ? 19:23, Buddhika Godakuru a > ?crit : > >> Is your deployment type is source or binary? >> If it is binay, I wonder if this patch [1] is built into the repos. >> If source, could you try rebuilding the manila docker images? >> >> [1] https://review.opendev.org/c/openstack/manila/+/797955 >> >> On Mon, 25 Oct 2021 at 20:21, wodel youchi >> wrote: >> >>> Hi, >>> >>> I tried with pacific then with octopus, the same problem. >>> The patch was applied to kolla-ansible. >>> >>> Regards. >>> >>> Le ven. 22 oct. 
2021 00:34, Goutham Pacha Ravi >>> a ?crit : >>> >>>> >>>> >>>> On Thu, Oct 21, 2021 at 1:56 AM wodel youchi >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> I did that already, I changed the keyring to "*ceph auth >>>>> get-or-create client.manila -o manila.keyring mgr 'allow rw' mon 'allow r'* >>>>> " it didn't work, then I tried with ceph octopus, same error. >>>>> I applied the patch, then I recreated the keyring for manila as >>>>> wallaby documentation, I get the error "*Bad target type 'mon-mgr'*" >>>>> >>>> >>>> Thanks, the error seems similar to this issue: >>>> https://tracker.ceph.com/issues/51039 >>>> >>>> Can you confirm the ceph version installed? On the ceph side, some >>>> changes land after GA and get back ported; >>>> >>>> >>>> >>>>> >>>>> Regards. >>>>> >>>>> Le jeu. 21 oct. 2021 ? 05:29, Buddhika Godakuru >>>>> a ?crit : >>>>> >>>>>> Dear Wodel, >>>>>> I think this is because manila has changed the way how to set/create >>>>>> auth ID in Wallaby for native CephFS driver. >>>>>> For the patch to work, you should change the command >>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>> to something like, >>>>>> ceph auth get-or-create client.manila -o manila.keyring mgr 'allow >>>>>> rw' mon 'allow r' >>>>>> >>>>>> Please see Manila Wallaby CephFS Driver document [1] >>>>>> >>>>>> Hope this helps. >>>>>> >>>>>> Thank you >>>>>> [1] >>>>>> https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html#authorizing-the-driver-to-communicate-with-ceph >>>>>> >>>>>> On Wed, 20 Oct 2021 at 23:19, wodel youchi >>>>>> wrote: >>>>>> >>>>>>> Hi, and thanks >>>>>>> >>>>>>> I tried to apply the patch, but it didn't work, this is the >>>>>>> manila-share.log. >>>>>>> By the way, I did change to caps for the manila client to what is >>>>>>> said in wallaby documentation, that is : >>>>>>> [client.manila] >>>>>>> key = keyyyyyyyy..... >>>>>>> >>>>>>> * caps mgr = "allow rw" caps mon = "allow r"* >>>>>>> >>>>>>> [root at ControllerA manila]# cat manila-share.log >>>>>>> 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] >>>>>>> Skipping periodic task update_share_usage_size because it is disabled >>>>>>> 2021-10-20 10:03:22.310 7 INFO oslo_service.service >>>>>>> [req-5b253656-4fe2-4087-b4ab-9ba2a8a0443f - - - - -] Starting 1 workers >>>>>>> 2021-10-20 10:03:22.315 30 INFO manila.service [-] Starting >>>>>>> manila-share node (version 12.0.1) >>>>>>> 2021-10-20 10:03:22.320 30 INFO manila.share.drivers.cephfs.driver >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >>>>>>> h client found, connecting... >>>>>>> 2021-10-20 10:03:22.368 30 INFO manila.share.drivers.cephfs.driver >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] [CEPHFS1] Cep >>>>>>> h client connection complete. >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>>>> during i >>>>>>> n*itialization* >>>>>>> >>>>>>> * of driver CephFSDriver at ControllerA@cephfsnative1: >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>> volume ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>> 'mon-mgr'. 
2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): * >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 191, in rados_command >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> timeout=RADOS_TIMEOUT) >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>>>> command >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager inbuf, >>>>>>> timeout, verbose) >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>>>> command_retry >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager return >>>>>>> send_command(*args, **kwargs) >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>>>> command >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >>>>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager During >>>>>>> handling of the above exception, another exception occurred: >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>>>> ", line 346, in _driver_setup >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> self.driver.do_setup(ctxt) >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 251, in do_setup >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> volname=self.volname) >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 401, in volname >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 205, in rados_command >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager raise >>>>>>> exception.ShareBackendException(msg) >>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>> volume >>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>> 'mon-mgr'. 
>>>>>>> 2021-10-20 10:03:22.372 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>>>> during i >>>>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>>>> target type 'mon-mgr'. >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 191, in rados_command >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> timeout=RADOS_TIMEOUT) >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>>>> command >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager inbuf, >>>>>>> timeout, verbose) >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>>>> command_retry >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager return >>>>>>> send_command(*args, **kwargs) >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>>>> command >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >>>>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager During >>>>>>> handling of the above exception, another exception occurred: >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>>>> ", line 346, in _driver_setup >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> self.driver.do_setup(ctxt) >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 251, in do_setup >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> volname=self.volname) >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 401, in volname >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 205, in rados_command >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager raise >>>>>>> exception.ShareBackendException(msg) >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>> volume 
>>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>> 'mon-mgr'. >>>>>>> 2021-10-20 10:03:26.379 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>>>> during i >>>>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>>>> target type 'mon-mgr'. >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 191, in rados_command >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>>>> timeout=RADOS_TIMEOUT) >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>>>> command >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager inbuf, >>>>>>> timeout, verbose) >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>>>> command_retry >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager return >>>>>>> send_command(*args, **kwargs) >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>>>> command >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >>>>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager During >>>>>>> handling of the above exception, another exception occurred: >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>>>> ", line 346, in _driver_setup >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>>>> self.driver.do_setup(ctxt) >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 251, in do_setup >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>>>> volname=self.volname) >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 401, in volname >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 205, in rados_command >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager raise >>>>>>> exception.ShareBackendException(msg) >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager 
>>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>> volume >>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>> 'mon-mgr'. >>>>>>> 2021-10-20 10:03:34.387 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>>>> during i >>>>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>>>> target type 'mon-mgr'. >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 191, in rados_command >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> timeout=RADOS_TIMEOUT) >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>>>> command >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager inbuf, >>>>>>> timeout, verbose) >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>>>> command_retry >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager return >>>>>>> send_command(*args, **kwargs) >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>>>> command >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >>>>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager During >>>>>>> handling of the above exception, another exception occurred: >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>>>> ", line 346, in _driver_setup >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> self.driver.do_setup(ctxt) >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 251, in do_setup >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> volname=self.volname) >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 401, in volname >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 205, in rados_command >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager raise >>>>>>> 
exception.ShareBackendException(msg) >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>> volume >>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>> 'mon-mgr'. >>>>>>> 2021-10-20 10:03:50.404 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>>>> during i >>>>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>>>> target type 'mon-mgr'. >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 191, in rados_command >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> timeout=RADOS_TIMEOUT) >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>>>> command >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager inbuf, >>>>>>> timeout, verbose) >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>>>> command_retry >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager return >>>>>>> send_command(*args, **kwargs) >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>>>> command >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >>>>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager During >>>>>>> handling of the above exception, another exception occurred: >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>>>> ", line 346, in _driver_setup >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> self.driver.do_setup(ctxt) >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 251, in do_setup >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> volname=self.volname) >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 401, in volname >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 
205, in rados_command >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager raise >>>>>>> exception.ShareBackendException(msg) >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>> volume >>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>> 'mon-mgr'. >>>>>>> 2021-10-20 10:04:22.436 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>>>> during i >>>>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>>>> target type 'mon-mgr'. >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 191, in rados_command >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> timeout=RADOS_TIMEOUT) >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>>>> command >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager inbuf, >>>>>>> timeout, verbose) >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>>>> command_retry >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager return >>>>>>> send_command(*args, **kwargs) >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>>>> command >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >>>>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager During >>>>>>> handling of the above exception, another exception occurred: >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>>>> ", line 346, in _driver_setup >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> self.driver.do_setup(ctxt) >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 251, in do_setup >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> volname=self.volname) >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 401, in volname >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager File >>>>>>> 
"/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 205, in rados_command >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager raise >>>>>>> exception.ShareBackendException(msg) >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>> volume >>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>> 'mon-mgr'. >>>>>>> 2021-10-20 10:05:26.438 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>>>> during i >>>>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>>>> target type 'mon-mgr'. >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 191, in rados_command >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> timeout=RADOS_TIMEOUT) >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>>>> command >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager inbuf, >>>>>>> timeout, verbose) >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>>>> command_retry >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager return >>>>>>> send_command(*args, **kwargs) >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>>>> command >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >>>>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager During >>>>>>> handling of the above exception, another exception occurred: >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>>>> ", line 346, in _driver_setup >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> self.driver.do_setup(ctxt) >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 251, in do_setup >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> volname=self.volname) >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 401, in volname >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> self.rados_client, "fs volume 
ls", json_obj=True) >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 205, in rados_command >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager raise >>>>>>> exception.ShareBackendException(msg) >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>> volume >>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>> 'mon-mgr'. >>>>>>> 2021-10-20 10:07:34.539 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> [req-9e549670-d131-4572-b57e-761b513abc75 - - - - -] Error encountered >>>>>>> during i >>>>>>> nitialization of driver CephFSDriver at ControllerA@cephfsnative1: >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix= >>>>>>> fs volume ls, argdict={'format': 'json'} - exception message: Bad >>>>>>> target type 'mon-mgr'. >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 191, in rados_command >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> timeout=RADOS_TIMEOUT) >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1460, in json_ >>>>>>> command >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager inbuf, >>>>>>> timeout, verbose) >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1330, in send_ >>>>>>> command_retry >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager return >>>>>>> send_command(*args, **kwargs) >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>>>> "/usr/lib/python3.6/site-packages/ceph_argparse.py", line 1412, in send_ >>>>>>> command >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>>>>>> ArgumentValid("Bad target type '{0}'".format(target[0])) >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> ceph_argparse.ArgumentValid: Bad target type 'mon-mgr' >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager During >>>>>>> handling of the above exception, another exception occurred: >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager Traceback >>>>>>> (most recent call last): >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/manager.py >>>>>>> ", line 346, in _driver_setup >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> self.driver.do_setup(ctxt) >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 251, in do_setup >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> volname=self.volname) >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 401, in volname 
>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> self.rados_client, "fs volume ls", json_obj=True) >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager File >>>>>>> "/var/lib/kolla/venv/lib/python3.6/site-packages/manila/share/drivers/ce >>>>>>> phfs/driver.py", line 205, in rados_command >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>>>>>> exception.ShareBackendException(msg) >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>> volume >>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target type >>>>>>> 'mon-mgr'. >>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>> >>>>>>> Regards >>>>>>> >>>>>>> Le mer. 20 oct. 2021 ? 00:14, Goutham Pacha Ravi < >>>>>>> gouthampravi at gmail.com> a ?crit : >>>>>>> >>>>>>>> >>>>>>>> On Tue, Oct 19, 2021 at 2:35 PM wodel youchi < >>>>>>>> wodel.youchi at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> Has anyone been successful in deploying Manila wallaby using >>>>>>>>> kolla-ansible with ceph pacific as a backend? >>>>>>>>> >>>>>>>>> I have created the manila client in ceph pacific like this : >>>>>>>>> >>>>>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>>>>>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>>>>> >>>>>>>>> When I deploy, I get this error in manila's log file : >>>>>>>>> Bad target type 'mon-mgr' >>>>>>>>> Any ideas? >>>>>>>>> >>>>>>>> >>>>>>>> Could you share the full log from the manila-share service? >>>>>>>> There's an open bug related to manila/cephfs deployment: >>>>>>>> https://bugs.launchpad.net/kolla-ansible/+bug/1935784 >>>>>>>> Proposed fix: >>>>>>>> https://review.opendev.org/c/openstack/kolla-ansible/+/802743 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> Regards. >>>>>>>>> >>>>>>>> >>>>>> >>>>>> -- >>>>>> >>>>>> ??????? ????? ???????? >>>>>> Buddhika Sanjeewa Godakuru >>>>>> >>>>>> Systems Analyst/Programmer >>>>>> Deputy Webmaster / University of Kelaniya >>>>>> >>>>>> Information and Communication Technology Centre (ICTC) >>>>>> University of Kelaniya, Sri Lanka, >>>>>> Kelaniya, >>>>>> Sri Lanka. >>>>>> >>>>>> Mobile : (+94) 071 5696981 >>>>>> Office : (+94) 011 2903420 / 2903424 >>>>>> >>>>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>>> University of Kelaniya Sri Lanka, accepts no liability for the >>>>>> content of this email, or for the consequences of any actions taken on the >>>>>> basis of the information provided, unless that information is subsequently >>>>>> confirmed in writing. If you are not the intended recipient, this email >>>>>> and/or any information it contains should not be copied, disclosed, >>>>>> retained or used by you or any other party and the email and all its >>>>>> contents should be promptly deleted fully from our system and the sender >>>>>> informed. >>>>>> >>>>>> E-mail transmission cannot be guaranteed to be secure or error-free >>>>>> as information could be intercepted, corrupted, lost, destroyed, arrive >>>>>> late or incomplete. >>>>>> >>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>>> >>>>> >> >> -- >> >> ??????? ????? ???????? >> Buddhika Sanjeewa Godakuru >> >> Systems Analyst/Programmer >> Deputy Webmaster / University of Kelaniya >> >> Information and Communication Technology Centre (ICTC) >> University of Kelaniya, Sri Lanka, >> Kelaniya, >> Sri Lanka. 
>>
>> Mobile : (+94) 071 5696981
>> Office : (+94) 011 2903420 / 2903424
>>
>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> University of Kelaniya Sri Lanka, accepts no liability for the content of
>> this email, or for the consequences of any actions taken on the basis of
>> the information provided, unless that information is subsequently confirmed
>> in writing. If you are not the intended recipient, this email and/or any
>> information it contains should not be copied, disclosed, retained or used
>> by you or any other party and the email and all its contents should be
>> promptly deleted fully from our system and the sender informed.
>>
>> E-mail transmission cannot be guaranteed to be secure or error-free as
>> information could be intercepted, corrupted, lost, destroyed, arrive late
>> or incomplete.
>>
>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From gmann at ghanshyammann.com Thu Oct 28 17:09:16 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 28 Oct 2021 12:09:16 -0500
Subject: [all] Anyone use or would like to maintain openstack/training-labs repo
Message-ID: <17cc7e09369.ddfa2076225205.6942386596020382900 at ghanshyammann.com>

Hello Everyone,

During the TC weekly meeting and PTG discussion to merge the 'Technical
Writing' SIG into TC, we found that openstack/training-labs is not
maintained nowadays. We do not even know who uses this repo for training.

I have checked that neither the Upstream Institute training nor the CoA is
using this repo in their training.

If you are using it for your training, please reply to this email or ping
us on the #openstack-tc IRC channel on OFTC; otherwise we will start the
retirement process.

- https://opendev.org/openstack/training-labs

-gmann

From ces.eduardo98 at gmail.com Thu Oct 28 18:27:16 2021
From: ces.eduardo98 at gmail.com (Carlos Silva)
Date: Thu, 28 Oct 2021 15:27:16 -0300
Subject: [manila] FIPS compliance (Notice: third party drivers)
Message-ID: 

Hello Zorillas and interested stackers!

During the PTG week, we had a session where we discussed FIPS compatibility
and compliance. In this PTG recording [1] you can see the discussion around
this topic and our motivations to achieve such things for Manila.

We're writing to follow up on the PTG discussion we had, and to update you
on our move towards FIPS compatibility and compliance. We're aiming for
manila FIPS compatibility in the Yoga cycle, meaning our CI jobs should be
passing. FIPS compliance is a Z(orilla) cycle goal.

For third party maintainers, to test whether your code is FIPS compatible,
use an image with FIPS enabled as discussed in [3]. In the etherpad we used
in our previous discussion [2], there's a mention of some drivers that are
using non-FIPS-compliant libraries, and some examples of libraries that
shouldn't be used. If FIPS is enabled on the controller where the driver
runs, and the driver is using one of the non-permitted libraries, it will
likely cause the driver to malfunction.

If you have any questions, please feel free to reach out to us (carloss and
ashrod98) in #openstack-manila. We're always here to support our community.

[1] https://youtu.be/mlTx71GIp1w?t=4864
[2] https://etherpad.opendev.org/p/yoga-manila-FIPS
[3] https://etherpad.opendev.org/p/state-of-fips-in-openstack-ci-yoga

Thanks!
----
Carlos da Silva (carloss)
Ashley Rodriguez (ashrod98)
-------------- next part --------------
An HTML attachment was scrubbed...
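(A rough sketch of the kind of breakage the FIPS work is about, not taken
from the thread above: on a host booted with FIPS mode enabled, OpenSSL
refuses MD5 for security purposes, so driver code that only hashes for
fingerprints or cache keys has to opt out explicitly, which Python supports
from 3.9 on. The b'share-id' payload below is just a placeholder:)

    # Raises ValueError on a FIPS-enabled host, because MD5 is not a FIPS-approved digest:
    python3 -c "import hashlib; print(hashlib.md5(b'share-id').hexdigest())"

    # Passes under FIPS when the call is declared a non-security use (Python 3.9+):
    python3 -c "import hashlib; print(hashlib.md5(b'share-id', usedforsecurity=False).hexdigest())"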
URL: 
From rosmaita.fossdev at gmail.com Thu Oct 28 21:57:47 2021
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Thu, 28 Oct 2021 17:57:47 -0400
Subject: [ptg][cinder] yoga virtual PTG summary
Message-ID: <91be05a0-82f8-fe40-40aa-938f4117d6f4 at gmail.com>

Sorry for the delay, it's posted here:

https://wiki.openstack.org/wiki/CinderYogaPTGSummary

The wiki page contains links to all the recordings. It's a wiki page, so
feel free to make clarifications or corrections. Action items are noted in
the "conclusions" sections of most of the summaries, so please look to
remind yourself what you have committed yourself to.

cheers,
brian

From tkajinam at redhat.com Fri Oct 29 04:29:28 2021
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Fri, 29 Oct 2021 13:29:28 +0900
Subject: [puppet] Propose retiring old stable branches (Stein, Rocky and Queens)
Message-ID: 

Hello,

In puppet repos we have plenty of stable branches still open, and the
oldest is now stable/queens. However, recently we haven't seen many
backports proposed to stein, rocky and queens [1], so I'll propose retiring
these three old branches now.

Please let me know if anybody is interested in keeping any of these three.

Note that currently CI jobs are broken in these three branches and it is
likely we need to investigate the required additional pinning of dependent
packages. If somebody is still interested in maintaining these old branches
then these jobs should be fixed.

[1] https://review.opendev.org/q/(project:%255Eopenstack/puppet-.*)AND(NOT+project:openstack/puppet-tripleo)AND(NOT+project:openstack/puppet-pacemaker)AND((branch:stable/queens)OR(branch:stable/rocky)OR(branch:stable/stein))

Thank you,
Takashi Kajinami

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From qiujunting at inspur.com Fri Oct 29 06:07:42 2021
From: qiujunting at inspur.com (=?gb2312?B?SnVudGluZ3FpdSBRaXVqdW50aW5nICjH8b785sMp?=)
Date: Fri, 29 Oct 2021 06:07:42 +0000
Subject: [sahara][ptg] Yoga PTG Summary
Message-ID: <6905320b911c4ecaa3bc4736dfd21a29 at inspur.com>

Hi all,

Thanks everyone for taking the time to attend the session. The discussion
content of the PTG meeting is as follows:
https://etherpad.opendev.org/p/sahara-yoga-ptg

1. Sahara already supports booting virtual machines from volume.
https://specs.openstack.org/openstack/sahara-specs/specs/backlog/boot-from-volume.html

2. When Sahara deploys a dedicated cluster, it normally needs the instance
network to be a flat one. By setting up a "jumphost" which can be reached by
the controllers and which can itself reach the instances, the network no
longer needs to be a flat one for the instances.
https://docs.openstack.org/sahara/latest/admin/advanced-configuration-guide.html#custom-network-topologies

3. We discussed which plugins should be kept and updated and which ones
should be dropped. The conclusion is that plugins such as HDP, CDH, MapR,
Storm and Spark can be deleted in Yoga.

4. Improve the unit testing and documentation of the Sahara project.

Currently, the activity of the Sahara project is relatively low. We can
discover its potential and expand its usage scenarios.

Thank you
Fossen

---------------------------------
Fossen Qiu | ???
CBRD | ??????????
T: 18249256272
E: qiujunting at inspur.com
????
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg Type: image/jpeg Size: 3519 bytes Desc: image001.jpg URL: From syedammad83 at gmail.com Fri Oct 29 08:01:44 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Fri, 29 Oct 2021 13:01:44 +0500 Subject: [barbican] Simple Crypto Plugin kek issue Message-ID: Hi, I have installed barbican and using it with openstack magnum. When I am using the default kek describe in document below, works fine and magnum cluster creation goes successful. https://docs.openstack.org/barbican/latest/install/barbican-backend.html But when I generate a new kek with below command. python3 -c "from cryptography.fernet import Fernet ; key = Fernet.generate_key(); print(key)" and put it in barbican.conf, the magnum cluster failed to create and I see below logs in barbican. 2021-10-29 12:53:28.932 568554 INFO barbican.plugin.crypto.simple_crypto [req-aaac01e9-82af-421b-b85a-ff998d904972 ad702ac807f44c73a32a9b7a795b693c d782069f335041138f0cb141fde9933f - default default] Software Only Crypto initialized 2021-10-29 12:53:28.932 568554 DEBUG barbican.model.repositories [req-aaac01e9-82af-421b-b85a-ff998d904972 ad702ac807f44c73a32a9b7a795b693c d782069f335041138f0cb141fde9933f - default default] Getting session... get_session /usr/lib/python3/dist-packages/barbican/model/repositories.py:364 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers [req-aaac01e9-82af-421b-b85a-ff998d904972 ad702ac807f44c73a32a9b7a795b693c d782069f335041138f0cb141fde9933f - default default] Secret creation failure seen - please contact site administrator.: cryptography.fernet.InvalidToken 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers Traceback (most recent call last): 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/cryptography/fernet.py", line 113, in _verify_signature 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers h.verify(data[-32:]) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hmac.py", line 70, in verify 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers ctx.verify(signature) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/cryptography/hazmat/backends/openssl/hmac.py", line 76, in verify 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers raise InvalidSignature("Signature did not match digest.") 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers cryptography.exceptions.InvalidSignature: Signature did not match digest. 
2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers During handling of the above exception, another exception occurred: 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers Traceback (most recent call last): 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/barbican/api/controllers/__init__.py", line 102, in handler 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers return fn(inst, *args, **kwargs) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/barbican/api/controllers/__init__.py", line 88, in enforcer 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers return fn(inst, *args, **kwargs) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/barbican/api/controllers/__init__.py", line 150, in content_types_enforcer 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers return fn(inst, *args, **kwargs) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/barbican/api/controllers/secrets.py", line 456, in on_post 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers new_secret, transport_key_model = plugin.store_secret( 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/barbican/plugin/resources.py", line 108, in store_secret 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers secret_metadata = _store_secret_using_plugin(store_plugin, secret_dto, 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/barbican/plugin/resources.py", line 279, in _store_secret_using_plugin 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers secret_metadata = store_plugin.store_secret(secret_dto, context) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/barbican/plugin/store_crypto.py", line 96, in store_secret 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers response_dto = encrypting_plugin.encrypt( 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/barbican/plugin/crypto/simple_crypto.py", line 76, in encrypt 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers kek = self._get_kek(kek_meta_dto) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/barbican/plugin/crypto/simple_crypto.py", line 73, in _get_kek 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers return encryptor.decrypt(kek_meta_dto.plugin_meta) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/cryptography/fernet.py", line 76, in decrypt 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers return self._decrypt_data(data, timestamp, ttl, int(time.time())) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/cryptography/fernet.py", line 125, in _decrypt_data 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers self._verify_signature(data) 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers File "/usr/lib/python3/dist-packages/cryptography/fernet.py", line 115, in _verify_signature 2021-10-29 12:53:28.991 568554 ERROR barbican.api.controllers raise InvalidToken 2021-10-29 
12:53:28.991 568554 ERROR barbican.api.controllers cryptography.fernet.InvalidToken Any advise how to fix it ? - Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Oct 29 08:53:58 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 29 Oct 2021 10:53:58 +0200 Subject: [masakari] Yoga PTG summary Message-ID: Hi! This is a quick summary of Masakari's Yoga PTG. Quick as the PTG itself was quite quick. :-) We have summarised the progress made in Xena and decided to continue the efforts related to supporting monitoring with Consul and tracking issued evacuations. We have also discussed one new topic regarding circuit breaking in case of a mass failure of instances (e.g., due to a storage network failing in one location). To this end, the proposal is to introduce a protection mechanism (circuit breaking) based on a static time window (static at least in the first iteration) per segment. suzhengwei is to write the spec for this. Raw notes are available at https://etherpad.opendev.org/p/masakari-yoga-ptg Kind regards, -yoctozepto From jibsan94 at gmail.com Thu Oct 28 21:44:49 2021 From: jibsan94 at gmail.com (Jibsan Joel Rosa Toirac) Date: Thu, 28 Oct 2021 17:44:49 -0400 Subject: Problems with Openstack internal:external router Message-ID: Hello I finished to set up a brand new Openstack server, I had already other up and running without any problems, yet on mine the router wich allows the external connection has all the interfaces down and I don't know what else to do. I've tried everything but it still not working. Please any help with this? I let you here a couple of screenshots of my network topology -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2021-10-28 12-40-29.png Type: image/png Size: 137501 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2021-10-28 12-40-31.png Type: image/png Size: 139144 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2021-10-28 12-40-33.png Type: image/png Size: 137840 bytes Desc: not available URL: From jean-francois.taltavull at elca.ch Fri Oct 29 14:08:07 2021 From: jean-francois.taltavull at elca.ch (=?utf-8?B?VGFsdGF2dWxsIEplYW4tRnJhbsOnb2lz?=) Date: Fri, 29 Oct 2021 14:08:07 +0000 Subject: [OpenStack-Ansible] LXC containers apt upgrade In-Reply-To: <1243101634549390@mail.yandex.ru> References: <2cce6f95893340dcba81c88e278213b8@elca.ch> <1243101634549390@mail.yandex.ru> Message-ID: Thanx Dmitriy, I will give this method a try. From: Dmitriy Rabotyagov Sent: lundi, 18 octobre 2021 11:32 To: openstack-discuss at lists.openstack.org Subject: Re: [OpenStack-Ansible] LXC containers apt upgrade EXTERNAL MESSAGE - This email comes from outside ELCA companies. - ??? Hi! Sorry for the late reply. I did upgrade that following way with ad-hoc command: cd /opt/openstack-ansible; ansible -m package -a "name=ca-certificates state=latest update_cache=yes only_upgrade=yes" all But eventually the correct way probably would be updating LXC image and re-creating containers. This is not really realistic scenario though. 
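(As a rough sketch of how the same ad-hoc pattern can be stretched from the
single ca-certificates fix to a full package refresh; the all_containers
group and the controller1* limit below are assumptions about a typical
openstack-ansible inventory, not something taken from this thread, so adjust
them to your own deployment:)

    # Dist-upgrade every LXC container known to the OSA inventory (hedged example):
    cd /opt/openstack-ansible
    ansible -m apt -a "update_cache=yes upgrade=dist" all_containers --forks 5

    # Or roll it out cautiously, one controller's containers at a time:
    ansible -m apt -a "update_cache=yes upgrade=dist" all_containers --limit 'controller1*'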
04.10.2021, 16:11, "Taltavull Jean-Francois" >: Hi All, Following the recent Let's Encrypt certificates expiration, I was wondering what was the best policy to apt upgrade the operating system used by LXC containers running on controller nodes. Has anyone ever defined such a policy ? Is there an OSA tool to do this ? Regards, Jean-Fran?ois -- Kind Regards, Dmitriy Rabotyagov -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Oct 29 16:33:49 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 29 Oct 2021 09:33:49 -0700 Subject: [tc][operators][all] TC tags framework to be dropped In-Reply-To: References: Message-ID: ...I still think we should keep the starter-kit tags- if only for the openstack site and basic marketing for folks new to openstack. The vmt tag too is good to have I think? The other project maintained ones we should definitely drop. -Kendall (diablo_rojo) On Wed, Oct 27, 2021 at 11:51 AM Rados?aw Piliszek < radoslaw.piliszek at gmail.com> wrote: > Dear OpenStack Operators, > > Due to no response to the original query about usefulness of the TC > tags framework [1] (which was repeated in each and every following TC > newsletter), the TC has decided to drop the tags framework entirely. > [2] [3] > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2021-September/024804.html > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025554.html > [3] https://etherpad.opendev.org/p/tc-yoga-ptg > > Kind regards, > > -yoctozepto > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Oct 29 17:50:24 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 29 Oct 2021 19:50:24 +0200 Subject: [tc][operators][all] TC tags framework to be dropped In-Reply-To: References: Message-ID: Yes! I think we are going to discuss how to replace the otherwise-useful parts. -yoctozepto On Fri, 29 Oct 2021 at 18:34, Kendall Nelson wrote: > > ...I still think we should keep the starter-kit tags- if only for the openstack site and basic marketing for folks new to openstack. > > The vmt tag too is good to have I think? The other project maintained ones we should definitely drop. > > -Kendall (diablo_rojo) > > On Wed, Oct 27, 2021 at 11:51 AM Rados?aw Piliszek wrote: >> >> Dear OpenStack Operators, >> >> Due to no response to the original query about usefulness of the TC >> tags framework [1] (which was repeated in each and every following TC >> newsletter), the TC has decided to drop the tags framework entirely. >> [2] [3] >> >> [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-September/024804.html >> [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025554.html >> [3] https://etherpad.opendev.org/p/tc-yoga-ptg >> >> Kind regards, >> >> -yoctozepto >> From kennelson11 at gmail.com Fri Oct 29 18:11:54 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 29 Oct 2021 11:11:54 -0700 Subject: [TC][First Contact][Skyline][OpenDev][Release][OpenStackSDK] Oct 2021 PTG Summaries Message-ID: Hello! I tried to summarize some of the discussions I was in throughout the week for those that could not join! Hopefully some of it is helpful :) Technical Committee[1] Monday- 25 people Monday we focused on the relationship between the Project Team Leads (PTLs) and the Technical Committee (TC). 
We discussed the role of the TC, and what PTLs look to us for, what we
should be doing that we are not, what we are doing that we shouldn't be,
and things we do that might not be the best use of everyone's time, etc.
Some of the main things that came out of discussions were:

- Creating a repo (possibly) to house documents about how certain things
should be implemented if a project is going to implement them, for example
quotas. These technical guidelines will hopefully give more direction to new
projects entering the OpenStack space and help unify and standardize the
implementations of features across already established OpenStack projects.
These documents will not have accompanying deadlines like community goals,
but will instead be used as resources and reference documents to further
enhance OpenStack.

- How to engage with OpenStack Operators and get them to be more vocal about
feedback, and encouraging interaction with the project-specific communities
with regards to bugs and project direction. Ideas included making sure to
use the [ops] tag on emails to the openstack-discuss list and ensuring that
users and operators know the mailing list is for them just as much as it is
for developers. Also, making sure to communicate topics to the twitter
handle for wider engagement to those that might not be watching the mailing
list closely.

Thursday- 18 people

Another full day of discussions. We covered the Docs SIG state of affairs,
Yoga testing runtime, TC tags, a project adoption ladder, and release name
process changes (yes, again). The tl;drs of all of that:

- Some of the Docs SIG repos are being retired, some are going to live with
the First Contact SIG.

- Python 3.9 is now a voting test and 3.10 will be added as non-voting.
Python 3.6 will be kept until CentOS Stream 8 is replaced by Stream 9.

- There is a proposal to drop the tag framework altogether[1.1].

- We do NOT want to add a whole process for incubating new projects; simply
keep the process we have and apply a "tech preview" note to their first
release where applicable.

- We are going to change back to having the community vote for the names,
which will also be proposed by the community, but also pre-vetted by the
community before voting begins. The final vetting will be the Open Infra
Foundation's legal team as it always has been.

Friday- 28 people

Another full day of discussions! We covered operator pain point targeting,
the Skyline project proposal, RBAC, Community Goals, the Xena wrap up, and
were also joined at the very end by several members of the Kubernetes
Steering Committee (YAY cross community collaboration!). Those tl;drs:

- We want to make sure all the pain points are being actively tracked, and
if there are things that are common across many projects those get more
attention as potential community goals down the road.

- Skyline is largely ready to be proposed as a project. There are a few
things that might need to get rearranged and changed so that it can be
released as a part of OpenStack, but that shouldn't stop them from being
approved. They could be included in the Yoga release as a "tech preview".

- I don't even know how to summarize the RBAC stuff lol. That one I can't
tldr for you. Go check out those notes in the etherpad[1].

- The main thing we discussed wrt community goals is that things like RBAC
that we want as a community goal cannot be scoped to a single release, and
we are fine with that.
- In the Xena wrap up, we did a retrospective of the things we accomplished and also agreed to keep monthly video meetings in the upcoming cycle. - We again loved having members of the k8s steering committee join us! We chatted about how the CNCF project ladder works, how the OpenStack?s ?Upstream Investment Opportunities' works, and CI/CD resourcing struggles for both communities. OpenStackSDK / OpenStackClient[2] - 10 people First we kicked off with Artem giving us an update on the r1 branch which is nearing the point that we can merge it! There is only one part missing- compute has not been completely reconfigured to use proxy; there are still a number of sections of the code that rely on the cloud layer and directly touch the APIs. Once that is completed however; we will be able to merge r1 into master and celebrate! We legitimately discussed having a party and publicizing it, so if you have opinions about any of that (or want to help wrap things up), please reach out to us in #openstack-sdks. We talked a little bit about the exceptional cases where we might consider deprecating OSC commands. Basically we concluded that we still don?t want to do it except in very specific circumstances. We are generally okay with doing it for removed APIs such that they are actually being completely removed from the API and not just removed in a microversion and are essentially still reachable in older microversions. Lastly, we chatted with the Korea user group about their mentoring program that is in the process of wrapping up. It sounds like it will be a yearly thing for them and we (the SDK team) want to try to help support those efforts more going forward if the organizers of the mentoring program can give us heads up like they did this last time. We look forward to working with them more in the future! First Contact SIG[3] - 2 people Discussions were pretty brief as not many people attended, but we did follow up on a discussion earlier in the day that the Technical Committee had about having the First Contact SIG become the new owners of a couple of repositories formerly owned by the Docs SIG (a reaction to a general lack of spoons on that team and moving in the direction of moving the status of that SIG inactive). The First Contact SIG agreed to take ownership of the contributor-guide repo and the training-guides repo. Work on that migration/reconfiguration of ACLs will begin soon. If you have any questions or are interested in getting involved, please contact the First Contact SIG or the Technical Committee OpenDev[4] - 4 people The session was mostly just a working one, but we also had a little time to catch up and chat and stay connected as a team :) Not much to say here really. Skyline[5]- 4 people I attended their sessions on Tuesday and while it was largely quiet when I was there we did chat a little bit about the upcoming TC discussions about the project and I invited them to join in those- spoiler; they came! Release Management [6] - 6 People The largest topic we discussed in the release management ptg session was everyone?s favorite topic- release cadence! We talked about how it might finally be time to move to 12 month cadence. Things have slowed down sufficiently with regards to feature development (and this has been consistent throughout the last couple releases). It would reduce the frequent upgrades that distros have to work through, and the process we currently maintain can be easily stretched to accommodate a year. 
All in all, the release team supports the idea and think that if we are to make the transition, when we finish the ?Z? release and start over at the beginning of the alphabet again, that would be a natural time to also switch to a one year release cycle rather than the 6th month cadence we have been keeping up with for quite some time. The decision is ultimately up to the Technical Committee, but the release team supports the idea. -Kendall Nelson (diablo_rojo) [1] https://etherpad.opendev.org/p/tc-yoga-ptg [1.1] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025571.html [2] https://etherpad.opendev.org/p/oct2021-ptg-sdk-cli [3] https://etherpad.opendev.org/p/oct2021-ptg-first-contact [4] https://etherpad.opendev.org/p/oct2021-ptg-opendev [5] https://etherpad.opendev.org/p/oct2021-ptg-skyline [6] https://etherpad.opendev.org/p/relmgmt-yoga-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Oct 29 18:17:16 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 29 Oct 2021 11:17:16 -0700 Subject: [PTL] PTG Summaries Blog In-Reply-To: <1635367931.11755687@apps.rackspace.com> References: <1635367931.11755687@apps.rackspace.com> Message-ID: Here are the ones I see so far: - QA: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025560.html - Masakari: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025592.html - Sahara: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025590.html - Cinder: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025588.html - Kolla http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025583.html - Cyborg: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025581.html - Manila: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025580.html - TC: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025554.html - TripleO: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025550.html - Skyline: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025541.html - Nova/Placement: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025534.html - Neutron: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025528.html - Glance: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025506.html - Various: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025597.html - Kendall (diablo_rojo) On Wed, Oct 27, 2021 at 1:53 PM helena at openstack.org wrote: > Hi PTLs, > > > > I have loved seeing all the PTG summaries that have been rolling in! I am > working to gather all of them and create a blog post for > openstack.org/blog. If you would like for your summary to be included in > the blog please post it to the mailing list and send me the link to the > archived email by Tuesday, November 2nd. > > > > Thank you all for your involvement in PTG and I look forward to seeing all > the summaries! > > > > Cheers, > > Helena > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From Arthur.LuzdeAvila at windriver.com  Fri Oct 29 19:00:58 2021
From: Arthur.LuzdeAvila at windriver.com (Luz de Avila, Arthur)
Date: Fri, 29 Oct 2021 19:00:58 +0000
Subject: [horizon][dev] Handle multiple login sessions from same user in Horizon
Message-ID: 

Hi everyone,

In order to improve the system, my colleagues and I would like to bring up a
new feature to Horizon. We found out that a user is able to log in to Horizon
with the same credentials on multiple devices and/or browsers. This may not be
very secure, as the user can log in on many different devices and/or browsers
with the same credential.

With that in mind, we would like to give the admin of the system more control,
so that the admin can enable or disable multiple login sessions according to
the needs of the system.

To make this proposal easier to follow, a blueprint has been opened with more
details about the idea and concepts, and we would like the opinion of the
community on whether this feature makes sense to implement or not.

The blueprint is open on Launchpad:
https://blueprints.launchpad.net/horizon/+spec/handle-multiple-login-sessions-from-same-user-in-horizon

Kind regards,
Arthur Avila
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com  Fri Oct 29 19:32:47 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 29 Oct 2021 14:32:47 -0500
Subject: [all][tc] What's happening in Technical Committee: summary 29th Oct, 21: Reading: 5 min
Message-ID: <17ccd8a54f1.ef4664d6290463.7486381369579029613@ghanshyammann.com>

Hello Everyone,

Here is this week's summary of the Technical Committee activities.

1. TC Meetings:
============
* The TC IRC meeting for this week was held on Oct 28th, Thursday.
* Most of the meeting discussions are summarized below (Completed or
  in-progress activities section). Meeting full logs are available @
  https://meetings.opendev.org/meetings/tc/2021/tc.2021-10-28-15.00.log.html
* Next week's meeting will be a video call on Google Meet on Nov 4th, Thursday
  15:00 UTC; feel free to add your topics to the agenda[1] by Nov 3rd.

2. What we completed this week:
=========================
* Added the openstack/ci-log-processing project into TaCT[2]
* PTG and Summary: I have written up the TC PTG summary along with the
  TC+community leaders interaction sessions.
  Recordings:
  Day1: https://www.youtube.com/watch?v=uQRKYummsHM
  Day2: https://www.youtube.com/watch?v=IHj7cpBlbYo
  Day3: https://www.youtube.com/watch?v=RT492bi6Xto
  Summary: http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025554.html
  Two topics, 1. RBAC and 2. Pain points, will continue to be discussed in a
  post-PTG call or the TC weekly meeting.

3. Activities In progress:
==================
TC Tracker for Yoga cycle
------------------------------
* I have started the etherpad to collect the Yoga cycle targets for the TC[3].

Open Reviews
-----------------
* Nine open reviews for ongoing activities[4].

RBAC discussion: continuing from PTG
----------------------------------------------
We are continuing the RBAC discussion we left off at the PTG. I have sent out
a doodle poll to select an appropriate time; feel free to add your vote if you
are interested in being part of it[5].
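For anyone who has not followed the RBAC work closely, the change being
discussed is mostly expressed through each service's oslo.policy defaults:
rules gain scope_types and new role-based check strings, while the old
defaults are kept as deprecated fallbacks during the migration. The sketch
below shows the general shape of such a default; the rule name, check strings
and API path are made-up examples, not any project's actual policy.

# Illustrative oslo.policy default showing the secure-RBAC pattern
# (scope_types plus a deprecated fallback rule). The names used here are
# examples only and do not belong to any real OpenStack service.
from oslo_policy import policy

deprecated_get_widget = policy.DeprecatedRule(
    name='example:get_widget',
    check_str='rule:admin_or_owner',
    deprecated_reason='Replaced by a project-scoped, reader-role default.',
    deprecated_since='Yoga',
)

rules = [
    policy.DocumentedRuleDefault(
        name='example:get_widget',
        # New default: any user holding the reader role on the project.
        check_str='role:reader and project_id:%(project_id)s',
        description='Show a widget.',
        operations=[{'path': '/v2/widgets/{widget_id}', 'method': 'GET'}],
        # Only project-scoped tokens may use this rule once scope
        # enforcement is turned on.
        scope_types=['project'],
        deprecated_rule=deprecated_get_widget,
    ),
]


def list_rules():
    return rules

Whether a service actually enforces the new defaults is then governed by the
[oslo_policy] enforce_scope and enforce_new_defaults options, which is much of
what the community-wide goal work below has to coordinate across services.
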
Yoga release community-wide goal
-----------------------------------------
* With the RBAC discussion still ongoing, we are re-working the RBAC goal;
  please wait until we finalize the implementation[6].

Adjutant needs maintainers and PTLs
-------------------------------------------
You might have seen the email[7] from Adrian calling for maintainers for the
Adjutant project. Please reply to that email, or here, or on the #openstack-tc
channel if you are interested in maintaining this project.

New project 'Skyline' proposal
------------------------------------
* We discussed it in the TC PTG and there are a few open points about python
  packaging, repos, and the plugins plan which we are discussing on the ML.

Updating the Yoga testing runtime
----------------------------------------
* We agreed to update the below things in the Yoga testing runtime:
  1. Add Debian 11 as a tested distro
  2. Bump the highest python version to test to 3.9
  The patch is up for review[8]; once that is merged I will update the job
  template.

Stable Core team process change
---------------------------------------
* The current proposal is under review[9]. Feel free to provide early feedback
  if you have any.

Merging 'Technical Writing' SIG Chair/Maintainers to TC
------------------------------------------------------------------
* Work to merge this SIG into the TC is up for review[10]. One repo of this
  SIG, openstack/training-labs, has a call out for maintainers[11] and the
  rest are moved under the TC & FC SIG.

TC tags analysis
-------------------
* The TC agreed to remove the framework and this has been communicated on the
  ML[12].

Project updates
-------------------
* Retiring js-openstack-lib [13]
* Retiring kolla-cli [14]

4. How to contact the TC:
====================
If you would like to discuss or give feedback to the TC, you can reach out to
us in multiple ways:
1. Email: you can send an email with the tag [tc] on the openstack-discuss
   ML[15].
2. Weekly meeting: The Technical Committee conducts a weekly meeting every
   Thursday at 15 UTC[16]
3. Office hours: The Technical Committee offers a weekly office hour every
   Tuesday at 0100 UTC[17]
4. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://review.opendev.org/c/openstack/governance/+/815318
[3] https://etherpad.opendev.org/p/tc-yoga-tracker
[4] https://review.opendev.org/q/projects:openstack/governance+status:open
[5] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025569.html
[6] https://review.opendev.org/c/openstack/governance/+/815158
[7] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025555.html
[8] https://review.opendev.org/c/openstack/governance/+/815851
[9] https://review.opendev.org/c/openstack/governance/+/810721
[10] https://review.opendev.org/c/openstack/governance/+/815869
[11] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025586.html
[12] http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025571.html
[13] https://review.opendev.org/c/openstack/governance/+/807163
[14] https://review.opendev.org/c/openstack/governance/+/814603
[15] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[16] http://eavesdrop.openstack.org/#Technical_Committee_Meeting
[17] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours

-gmann

From mdemaced at redhat.com  Sat Oct 30 12:05:57 2021
From: mdemaced at redhat.com (Maysa De Macedo Souza)
Date: Sat, 30 Oct 2021 14:05:57 +0200
Subject: [kuryr][ptg] Yoga cycle PTG summary
Message-ID: 

Hello,

I would like to thank you for the great discussion and engagement during the
PTG sessions. It was a really productive week. Here follows the summary of the
Kuryr sessions during the PTG:

*Day 1 - Tuesday*
Etherpad: https://etherpad.opendev.org/p/kuryr-yoga-ptg

- Xena retrospective:
  - We celebrated the contributions from many Outreachy applicants and the
    interns.
  - The addition of new features, such as:
    - The reconciliation mechanism that enforces the matching of a Kubernetes
      Service with a load-balancer.
    - The setting of the Octavia Listener timeout based on an annotation of
      the Kubernetes Service.
  - Improvements on the logs to facilitate debuggability and troubleshooting.
  - Usage of Kubeadm to facilitate configuration of the Kubernetes cluster
    with Devstack.
  - Many documentation improvements.
- Stable branches:
  - We have many stable branches which are not highly maintained anymore
    - action items: EOL the branches till Stein
  - As we currently release with OpenStack, it was proposed that we also
    release with Kubernetes
    - action items: Keep following the OpenStack release, but make sure that
      when there is a Kubernetes release it's updated in Kuryr.
- Future of the CI:
  - action items:
    - Move all the gates to use CRI-O and have one with Docker.
    - Attempt to remove the Amphora dependency for the API load-balancer when
      the OVN Octavia driver is available.
    - Move the jobs that use OVN to voting.
- Rate limit requests:
  - Possibility to limit the requests to Octavia during the reconciliation
    between load-balancers and Kubernetes Services.
  - Possibility to limit the amount of requests sent to Neutron.
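No particular implementation was agreed for those two rate-limiting items
during the session; one simple shape the idea could take is a small
token-bucket throttle placed in front of the Octavia and Neutron clients. The
sketch below is only an illustration of that idea and is not existing
kuryr-kubernetes code; the class and the call site are hypothetical.

# Hypothetical token-bucket throttle for outgoing Octavia/Neutron calls.
# This only illustrates the idea discussed above; kuryr-kubernetes does not
# currently ship such a class.
import threading
import time


class TokenBucket:
    """Allow bursts of up to `burst` calls, then at most `rate` calls/second."""

    def __init__(self, rate, burst):
        self.rate = float(rate)
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.timestamp = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        with self.lock:
            now = time.monotonic()
            # Refill tokens proportionally to the time elapsed since last call.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.timestamp) * self.rate)
            self.timestamp = now
            # Consume one token; a negative balance means we must wait it out.
            self.tokens -= 1
            wait = -self.tokens / self.rate if self.tokens < 0 else 0.0
        if wait:
            time.sleep(wait)


# Hypothetical usage: cap load-balancer reconciliation at 5 Octavia calls/sec.
octavia_bucket = TokenBucket(rate=5, burst=10)


def ensure_loadbalancer(lb_client, lb_spec):
    octavia_bucket.acquire()
    # The real Octavia API call would happen here, e.g. through openstacksdk.
    return lb_client.create_load_balancer(**lb_spec)

The same bucket, or a second one with its own rate, could wrap the Neutron
calls made while wiring ports and security groups.
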
*Day 2 - Wednesday*
Etherpad: https://etherpad.opendev.org/p/kuryr-yoga-ptg

- Merge of Kuryr lib into Kuryr-kubernetes:
  - action items: analyze the feasibility of moving the needed pieces into
    Kuryr-kubernetes and sync with other projects that use it.
- Different log levels:
  - action items:
    - clean up some unnecessary debugging messages and move the level of some
      to warning or info.
- Feasibility of reducing the current approach of one Network per Namespace to
  one Network per cluster, to reduce resource usage:
  - action items:
    - check if there is a plan to implement Network cascade delete with the
      Neutron team
    - attempt to create a PoC with one Pod Network per cluster
- Possibility of creating externally reachable services without Amphora.
- Discussion about the open Kubernetes proposal of notifying the user when a
  Network Policy is actually enforced.
- Drop the liveness checks for certain scenarios:
  - The controller and CNI restarts have helped with certain corner cases, but
    they can decrease the availability of Kuryr if restarted many times
  - action items:
    - drop restarts on unavailability of the Neutron/k8s API
    - enforce restarts when all the Kuryr watches, handlers and drivers are
      not running
- Drop usage of Flask debug servers:
  - Currently many servers created by Kuryr, like Health, the Prometheus
    exporter and the Kuryr daemon, are based on Flask and could likely be
    moved to another type of server.
  - action item: Move the kuryr-cni to kuryr-daemon communication to use gRPC

Don't hesitate to contact us either on IRC or email if you have any questions.

Cheers,
Maysa Macedo.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wodel.youchi at gmail.com  Sat Oct 30 16:06:27 2021
From: wodel.youchi at gmail.com (wodel youchi)
Date: Sat, 30 Oct 2021 17:06:27 +0100
Subject: kolla-ansible wallaby manila ceph pacific
In-Reply-To: 
References: 
Message-ID: 

Hi,

Here is the version of ceph :
# docker exec -it ceph-mon-controllera ceph -v
*ceph version 16.2.5* (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

And here is the output of the "ceph auth ls" for client.manila :
client.manila
        key: AQCgR3lhcAZ8JxAFtr5dK+eWtXcgtPd1WUbPjw==
        *caps: [mgr] allow rw
        caps: [mon] allow r*

Regards.

Le jeu. 28 oct. 2021 à 17:10, Victoria Martínez de la Cruz <
victoria at vmartinezdelacruz.com> a écrit :

> Hi Wodel,
>
> A few things, can you share the specific version you are using for Ceph?
> You mentioned Ceph Octopus and Ceph Pacific, we need to make sure that the
> microversions are as follows
>
> Ceph Octopus minimum version 15.2.11
> Ceph Pacific minimum version 16.2.1
>
> Also, can you make sure that the updates on the caps were applied?
>
> Do this by doing "ceph auth ls" and checking the values you have for
> client.manila
>
> The output should be similar to what you see in [0]
>
> If those are ok, you would need to restart the manila-share service and
> the issue should be resolved.
>
> Regards,
>
> Victoria
>
> [0]
> https://docs.ceph.com/en/latest/rados/operations/user-management/#list-users
>
> On Thu, Oct 28, 2021 at 9:19 AM Buddhika Godakuru 
> wrote:
>
>> Hi Wodel,
>> So the issue is beyond my current understanding.
>> I am interested in this because I am planning to deploy one next month.
>> Unfortunately, currently I do not have hardware to deploy one and see.
>> Sorry couldn't be more helpful.
>>
>> On Thu, 28 Oct 2021 at 03:24, wodel youchi 
>> wrote:
>>
>>> Hi,
>>>
>>> To *Buddhika Godakuru*
>>> I took a look into the manila-share docker deployed on my platform and
>>> it contains the patch you mentioned in [1].
>>> The Manila code does integrate the ceph test for version.
>>>
>>> [1] https://review.opendev.org/c/openstack/manila/+/797955
>>>
>>>
>>> Regards.
>>>
>>> Le mar. 26 oct. 2021 à
09:06, wodel youchi a >>> ?crit : >>> >>>> Hi, >>>> My deployment is from source, and I have little experience on how to >>>> rebuild docker images, I can perhaps pull new images (built recently I >>>> mean). >>>> I took a look into docker hub and there are new manila images pushed 2 >>>> days ago. >>>> >>>> Regards. >>>> >>>> Le lun. 25 oct. 2021 ? 19:23, Buddhika Godakuru >>>> a ?crit : >>>> >>>>> Is your deployment type is source or binary? >>>>> If it is binay, I wonder if this patch [1] is built into the repos. >>>>> If source, could you try rebuilding the manila docker images? >>>>> >>>>> [1] https://review.opendev.org/c/openstack/manila/+/797955 >>>>> >>>>> On Mon, 25 Oct 2021 at 20:21, wodel youchi >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I tried with pacific then with octopus, the same problem. >>>>>> The patch was applied to kolla-ansible. >>>>>> >>>>>> Regards. >>>>>> >>>>>> Le ven. 22 oct. 2021 00:34, Goutham Pacha Ravi < >>>>>> gouthampravi at gmail.com> a ?crit : >>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, Oct 21, 2021 at 1:56 AM wodel youchi >>>>>>> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I did that already, I changed the keyring to "*ceph auth >>>>>>>> get-or-create client.manila -o manila.keyring mgr 'allow rw' mon 'allow r'* >>>>>>>> " it didn't work, then I tried with ceph octopus, same error. >>>>>>>> I applied the patch, then I recreated the keyring for manila as >>>>>>>> wallaby documentation, I get the error "*Bad target type 'mon-mgr'* >>>>>>>> " >>>>>>>> >>>>>>> >>>>>>> Thanks, the error seems similar to this issue: >>>>>>> https://tracker.ceph.com/issues/51039 >>>>>>> >>>>>>> Can you confirm the ceph version installed? On the ceph side, some >>>>>>> changes land after GA and get back ported; >>>>>>> >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> Regards. >>>>>>>> >>>>>>>> Le jeu. 21 oct. 2021 ? 05:29, Buddhika Godakuru < >>>>>>>> bsanjeewa at kln.ac.lk> a ?crit : >>>>>>>> >>>>>>>>> Dear Wodel, >>>>>>>>> I think this is because manila has changed the way how to >>>>>>>>> set/create auth ID in Wallaby for native CephFS driver. >>>>>>>>> For the patch to work, you should change the command >>>>>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow r, >>>>>>>>> allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>>>>> to something like, >>>>>>>>> ceph auth get-or-create client.manila -o manila.keyring mgr 'allow >>>>>>>>> rw' mon 'allow r' >>>>>>>>> >>>>>>>>> Please see Manila Wallaby CephFS Driver document [1] >>>>>>>>> >>>>>>>>> Hope this helps. >>>>>>>>> >>>>>>>>> Thank you >>>>>>>>> [1] >>>>>>>>> https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html#authorizing-the-driver-to-communicate-with-ceph >>>>>>>>> >>>>>>>>> On Wed, 20 Oct 2021 at 23:19, wodel youchi >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hi, and thanks >>>>>>>>>> >>>>>>>>>> I tried to apply the patch, but it didn't work, this is the >>>>>>>>>> manila-share.log. >>>>>>>>>> By the way, I did change to caps for the manila client to what is >>>>>>>>>> said in wallaby documentation, that is : >>>>>>>>>> [client.manila] >>>>>>>>>> key = keyyyyyyyy..... >>>>>>>>>> >>>>>>>>>> * caps mgr = "allow rw" caps mon = "allow r"* >>>>>>>>>> >>>>>>>>>> [root at ControllerA manila]# cat manila-share.log >>>>>>>>>> 2021-10-20 10:03:22.286 7 INFO oslo_service.periodic_task [-] >>>>>>>>>> Skipping periodic task update_share_usage_size because it is disabled >>>>>>>>>> ...... 
>>>>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager raise >>>>>>>>>> exception.ShareBackendException(msg) >>>>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>>>>> manila.exception.ShareBackendException: json_command failed - prefix=fs >>>>>>>>>> volume >>>>>>>>>> ls, argdict={'format': 'json'} - exception message: Bad target >>>>>>>>>> type 'mon-mgr'. >>>>>>>>>> 2021-10-20 10:11:50.596 30 ERROR manila.share.manager >>>>>>>>>> >>>>>>>>>> Regards >>>>>>>>>> >>>>>>>>>> Le mer. 20 oct. 2021 ? 00:14, Goutham Pacha Ravi < >>>>>>>>>> gouthampravi at gmail.com> a ?crit : >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, Oct 19, 2021 at 2:35 PM wodel youchi < >>>>>>>>>>> wodel.youchi at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi, >>>>>>>>>>>> Has anyone been successful in deploying Manila wallaby using >>>>>>>>>>>> kolla-ansible with ceph pacific as a backend? >>>>>>>>>>>> >>>>>>>>>>>> I have created the manila client in ceph pacific like this : >>>>>>>>>>>> >>>>>>>>>>>> *ceph auth get-or-create client.manila mon 'allow r' mds 'allow >>>>>>>>>>>> r, allow rw path=/' osd 'allow rw pool=cephfs_data' mgr 'allow rw'* >>>>>>>>>>>> >>>>>>>>>>>> When I deploy, I get this error in manila's log file : >>>>>>>>>>>> Bad target type 'mon-mgr' >>>>>>>>>>>> Any ideas? >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Could you share the full log from the manila-share service? >>>>>>>>>>> There's an open bug related to manila/cephfs deployment: >>>>>>>>>>> https://bugs.launchpad.net/kolla-ansible/+bug/1935784 >>>>>>>>>>> Proposed fix: >>>>>>>>>>> https://review.opendev.org/c/openstack/kolla-ansible/+/802743 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Regards. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> >>>>>>>>> ??????? ????? ???????? >>>>>>>>> Buddhika Sanjeewa Godakuru >>>>>>>>> >>>>>>>>> Systems Analyst/Programmer >>>>>>>>> Deputy Webmaster / University of Kelaniya >>>>>>>>> >>>>>>>>> Information and Communication Technology Centre (ICTC) >>>>>>>>> University of Kelaniya, Sri Lanka, >>>>>>>>> Kelaniya, >>>>>>>>> Sri Lanka. >>>>>>>>> >>>>>>>>> Mobile : (+94) 071 5696981 >>>>>>>>> Office : (+94) 011 2903420 / 2903424 >>>>>>>>> >>>>>>>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>>>>>> University of Kelaniya Sri Lanka, accepts no liability for the >>>>>>>>> content of this email, or for the consequences of any actions taken on the >>>>>>>>> basis of the information provided, unless that information is subsequently >>>>>>>>> confirmed in writing. If you are not the intended recipient, this email >>>>>>>>> and/or any information it contains should not be copied, disclosed, >>>>>>>>> retained or used by you or any other party and the email and all its >>>>>>>>> contents should be promptly deleted fully from our system and the sender >>>>>>>>> informed. >>>>>>>>> >>>>>>>>> E-mail transmission cannot be guaranteed to be secure or >>>>>>>>> error-free as information could be intercepted, corrupted, lost, destroyed, >>>>>>>>> arrive late or incomplete. >>>>>>>>> >>>>>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>>>>>> >>>>>>>> >>>>> >>>>> -- >>>>> >>>>> ??????? ????? ???????? >>>>> Buddhika Sanjeewa Godakuru >>>>> >>>>> Systems Analyst/Programmer >>>>> Deputy Webmaster / University of Kelaniya >>>>> >>>>> Information and Communication Technology Centre (ICTC) >>>>> University of Kelaniya, Sri Lanka, >>>>> Kelaniya, >>>>> Sri Lanka. 
>>>>> >>>>> Mobile : (+94) 071 5696981 >>>>> Office : (+94) 011 2903420 / 2903424 >>>>> >>>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>> University of Kelaniya Sri Lanka, accepts no liability for the content >>>>> of this email, or for the consequences of any actions taken on the basis of >>>>> the information provided, unless that information is subsequently confirmed >>>>> in writing. If you are not the intended recipient, this email and/or any >>>>> information it contains should not be copied, disclosed, retained or used >>>>> by you or any other party and the email and all its contents should be >>>>> promptly deleted fully from our system and the sender informed. >>>>> >>>>> E-mail transmission cannot be guaranteed to be secure or error-free as >>>>> information could be intercepted, corrupted, lost, destroyed, arrive late >>>>> or incomplete. >>>>> >>>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>> >>>> >> >> -- >> >> ??????? ????? ???????? >> Buddhika Sanjeewa Godakuru >> >> Systems Analyst/Programmer >> Deputy Webmaster / University of Kelaniya >> >> Information and Communication Technology Centre (ICTC) >> University of Kelaniya, Sri Lanka, >> Kelaniya, >> Sri Lanka. >> >> Mobile : (+94) 071 5696981 >> Office : (+94) 011 2903420 / 2903424 >> >> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >> University of Kelaniya Sri Lanka, accepts no liability for the content of >> this email, or for the consequences of any actions taken on the basis of >> the information provided, unless that information is subsequently confirmed >> in writing. If you are not the intended recipient, this email and/or any >> information it contains should not be copied, disclosed, retained or used >> by you or any other party and the email and all its contents should be >> promptly deleted fully from our system and the sender informed. >> >> E-mail transmission cannot be guaranteed to be secure or error-free as >> information could be intercepted, corrupted, lost, destroyed, arrive late >> or incomplete. >> >> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amonster369 at gmail.com Sat Oct 30 20:05:47 2021 From: amonster369 at gmail.com (A Monster) Date: Sat, 30 Oct 2021 21:05:47 +0100 Subject: Openstack ansible VS kolla ansible Message-ID: Openstack-ansible uses LXC containers to deploy openstack services , while Kolla uses docker containers instead, which of these two deployment tools should I use for an Openstack deployment, and what are the differences between them. -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Oct 31 04:18:10 2021 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 31 Oct 2021 00:18:10 -0400 Subject: is OVS+DPDK useful for general purpose workload Message-ID: Folks, I have deployed openstack and configured OVS-DPDK on compute nodes for high performance networking. My workload is general purpose workload like running haproxy, mysql, apache and XMPP etc. When I did load testing I found performance was average and after 200kpps packet rate I noticed packet drops. I heard and read that DPDK can handle millions of packets but in my case its not true. I am using virtio-net in guest vm which processes packets in the kernel so I believe my bottleneck is my guest VM. I don't have any guest based DPDK applications like testpmd etc. 
does that mean OVS+DPDK isn't useful for my cloud? How do I take advantage of OVS+DPDK with general purpose workload? Maybe I have the wrong understanding about DPDK so please help me :) Thanks ~S From tobias.urdin at binero.com Sun Oct 31 19:56:20 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Sun, 31 Oct 2021 19:56:20 +0000 Subject: [puppet] Propose retiring old stable branches(Stein, Rocky and Queens) In-Reply-To: References: Message-ID: <493FB665-DAA1-49D0-9BB5-D2CC9CD07D7C@binero.com> Hello, Thanks for proposing this, I think we should retire these branches but let?s see if we get any other feedback. Best regards Tobias On 29 Oct 2021, at 06:29, Takashi Kajinami > wrote: Hello, In puppet repos we have plenty of stable branches still open, and the oldest is now stable/queens. However recently we haven't seen many backports proposed to stein, rocky and queens [1], so I'll propose retiring these three old branches now. Please let me know if anybody is interested in keeping any of these three. Note that currently CI jobs are broken in these three branches and it is likely we need to investigate the required additional pinning of dependent packages. If somebody is still interested in maintaining these old branches then these jobs should be fixed. [1] https://review.opendev.org/q/(project:%255Eopenstack/puppet-.*)AND(NOT+project:openstack/puppet-tripleo)AND(NOT+project:openstack/puppet-pacemaker)AND((branch:stable/queens)OR(branch:stable/rocky)OR(branch:stable/stein)) Thank you, Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sun Oct 31 21:37:15 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sun, 31 Oct 2021 17:37:15 -0400 Subject: is OVS+DPDK useful for general purpose workload In-Reply-To: References: Message-ID: Most of the implementations I have seen for OVS-DPDK mean that the VM side would also use DPDK. Because even from a DPDK perspective at the compute level, the VM will become the bottleneck. 200k PPS with OVS-DPDK + non-DPDK VM is about what you get with OVS + OVSfirewall + non-DPDK VM. On Sun, Oct 31, 2021 at 12:21 AM Satish Patel wrote: > Folks, > > I have deployed openstack and configured OVS-DPDK on compute nodes for > high performance networking. My workload is general purpose workload > like running haproxy, mysql, apache and XMPP etc. > > When I did load testing I found performance was average and after > 200kpps packet rate I noticed packet drops. I heard and read that DPDK > can handle millions of packets but in my case its not true. I am using > virtio-net in guest vm which processes packets in the kernel so I > believe my bottleneck is my guest VM. > > I don't have any guest based DPDK applications like testpmd etc. does > that mean OVS+DPDK isn't useful for my cloud? How do I take advantage > of OVS+DPDK with general purpose workload? > > Maybe I have the wrong understanding about DPDK so please help me :) > > Thanks > ~S > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From amonster369 at gmail.com  Sun Oct 31 21:54:12 2021
From: amonster369 at gmail.com (A Monster)
Date: Sun, 31 Oct 2021 22:54:12 +0100
Subject: Using ceph for openstack storage
Message-ID: 

I'm in the process of deploying an Openstack cloud, and I've been aiming to
maximize the number of compute nodes. In order to do that I've thought of
using Ceph distributed storage with an all-in-one openstack deployment, so
that swift objects and glance images are stored through Ceph. I have a total
of 12 servers, so I'll end up with 1 controller node for openstack services
and use the remaining nodes for the ceph cluster and computing.
Is it worth it to use a ceph storage system, and what would be the minimum
number of nodes required to deploy it?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org  Sun Oct 31 23:43:02 2021
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 1 Nov 2021 00:43:02 +0100
Subject: Using ceph for openstack storage
In-Reply-To: 
References: 
Message-ID: 

On 10/31/21 10:54 PM, A Monster wrote:
> I'm in the process of deploying an Openstack cloud, and I've been aiming to
> maximize the number of compute nodes. In order to do that I've thought of
> using Ceph distributed storage with an all-in-one openstack deployment, so
> that swift objects and glance images are stored through Ceph. I have a total
> of 12 servers, so I'll end up with 1 controller node for openstack services
> and use the remaining nodes for the ceph cluster and computing.
> Is it worth it to use a ceph storage system, and what would be the minimum
> number of nodes required to deploy it?

Well, are we talking about serious production, or just playing?

Since you're talking about a single controller, without any sort of
redundancy, I am guessing that we're not talking about a serious deployment
here. If you were, I would strongly suggest that your Ceph is made of:
- 3 ceph mons (that do not share any other role and do only that)
- at least 10 ceph OSD nodes, so that losing one of them doesn't make the
  whole of your cluster super slow (as Ceph would automatically rebalance
  the data to the 9 remaining nodes).

and also that your control plane is made of at least 3 nodes, and doesn't mix
Ceph roles with anything else... This would be a serious deployment.

But since you seem super limited in terms of hardware, I would suggest that
you:
- Deploy 3 controllers on which you also install a Ceph MON
- Deploy Nova and Ceph OSD on the remaining 9 nodes.

This way, you still keep some kind of redundancy at least, if one of the
servers fails. Remember: servers do fail, it's a question of when, rather
than if. :)

I hope this helps,
Cheers,

Thomas Goirand (zigo)
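To put some rough numbers behind the "at least 10 OSD nodes" advice: with 3x
replication and host-level failure domains, losing one of N equal OSD hosts
means roughly 1/N of the stored data has to be re-created on the survivors,
and each survivor's fill level rises by about a factor of N/(N-1). The quick
back-of-the-envelope check below ignores CRUSH weighting and pool details, and
the node counts, per-host capacity and fill level are example figures only,
not taken from this thread.

# Back-of-the-envelope impact of losing one OSD host, assuming equal hosts,
# 3x replication with host-level failure domains, and example figures only.
def host_failure_impact(num_hosts, raw_tb_per_host=10, replicas=3, fill=0.6):
    raw_total = num_hosts * raw_tb_per_host
    usable_tb = raw_total / replicas          # usable capacity before failure
    rebalanced_tb = fill * raw_tb_per_host    # raw data to re-create (~1/N of total)
    survivor_fill = fill * num_hosts / (num_hosts - 1)
    return usable_tb, rebalanced_tb, survivor_fill


for hosts in (4, 9, 10):
    usable, moved, new_fill = host_failure_impact(hosts)
    print(f"{hosts} hosts: ~{usable:.0f} TB usable, "
          f"~{moved:.0f} TB to rebalance after one host failure, "
          f"survivors then ~{new_fill:.0%} full")

The fewer the OSD hosts, the closer the survivors get to full during recovery
(and with only 3 hosts and 3 replicas there is nowhere left to re-replicate at
all), which is the reasoning behind keeping the OSD node count comfortably
above the replica count.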