From gmann at ghanshyammann.com Tue Jun 1 00:56:19 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 31 May 2021 19:56:19 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 3rd at 1500 UTC Message-ID: <179c5124664.d7d11855244381.7893037772801020341@ghanshyammann.com> Hello Everyone, NOTE: FROM THIS WEEK ONWARDS, TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE) Technical Committee's next weekly meeting is scheduled for June 3rd at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, June 2nd, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From yasufum.o at gmail.com Tue Jun 1 01:31:28 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 1 Jun 2021 10:31:28 +0900 Subject: [tacker] Next meeting moving on OFTC Message-ID: <4ba17cf3-5446-4a86-aea1-fa621b12b2a3@gmail.com> Hi team, The next tacker team meeting will be held on OFTC #openstack-meeting channel. Please refer [1] and [2] for joining to OFTC if you are not ready. [1] https://www.oftc.net/ [2] https://www.oftc.net/Services/#register-your-account Thanks, Yasufumi From kklimonda at syntaxhighlighted.com Tue Jun 1 06:56:02 2021 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Tue, 01 Jun 2021 08:56:02 +0200 Subject: [magnum] docker_volume_size considered harmful? Message-ID: <57d9666b-75c0-47b8-9998-8f3394cd6f83@www.fastmail.com> Hi, Reading through recent magnum reviews on gerrit, I've noticed Spyros' comment that he hopes not many people use this option. That lead me to look into issues related to this, but the only related piece of information I could find was that this option has been observed (or proven, depending on whether we read release notes or commit message) to become a bottleneck for scaling larger clusters. Firstly, if this option is considered problematic for scaling wouldn't it make sense to somehow deprecated it, and put a warning in the documentation describing the reasoning for that? Right now it seems to be a preferred way of deploying clusters based on the documentation - it is used in the cluster template creation example here: https://docs.openstack.org/magnum/latest/user/ Secondly, if docker volume is considered problematic, does this mean that volume-based instances have the same problem in general, and image-based instances should be used instead? When does this become a problem? For clusters with 20 nodes? 100? 200? 500? -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com From pierre at stackhpc.com Tue Jun 1 07:46:55 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 1 Jun 2021 09:46:55 +0200 Subject: [blazar] Project channel and weekly meeting moving to OFTC Message-ID: Hello, Like the rest of the OpenStack community, Blazar is moving to the OFTC network, still using the #openstack-blazar channel. Please join us over there. The bi-weekly meeting will also be on the OFTC network, still in #openstack-meeting-alt for now. For more information about the change of IRC network, read [1]. 
Best wishes, Pierre Riteau (priteau) [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html From manchandavishal143 at gmail.com Tue Jun 1 08:18:59 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Tue, 1 Jun 2021 13:48:59 +0530 Subject: [horizon] Project channel and weekly meeting moving to OFTC n/w Message-ID: Hello Everyone, As you may already know Openstack IRC has moved from Freenode[1] n/w to OFTC n/w. So from tomorrow onwards, our weekly team meetings will be on OFTC n/w on the same channel (#openstack-meeting-alt) as previous one at Freenode n/w. Also, please try to discuss any topics on the same channel (openstack-horizon) on OFTC n/w. Kindly register yourself on OFTC n/w[ 2] if you have not done yet. Thanks & Regards, Vishal Manchanda [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html [2] https://www.oftc.net/Services/#register-your-account -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Tue Jun 1 08:26:22 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 1 Jun 2021 10:26:22 +0200 Subject: [qa] Weekly meeting moving to OFTC starting June 1st Message-ID: Hello, as you are probably aware, OpenStack IRC has moved from FreeNode to OFTC network during the weekend [1]. Starting this week our weekly Office Hour which is held on #openstack-qa channel *will be at OFTC* network. [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html Regards, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Jun 1 12:01:58 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 01 Jun 2021 13:01:58 +0100 Subject: [docs] Request to clean up reviewers on openstack-doc-core and openstack-contributor-guide-core In-Reply-To: <58D9BB93-0264-4A7F-89D6-E91CFD4914FD@demarco.com> References: <58D9BB93-0264-4A7F-89D6-E91CFD4914FD@demarco.com> Message-ID: <224e39f37e7780c87ce6bf666985d069d084ec4c.camel@redhat.com> On Mon, 2021-05-31 at 12:57 -0500, Amy wrote: > I can help > > Amy > > > On May 31, 2021, at 11:58 AM, Radosław Piliszek wrote: > > > > On Mon, May 31, 2021 at 1:19 PM Stephen Finucane wrote: > > > > > > > On Wed, 2021-05-26 at 18:24 -0500, Ghanshyam Mann wrote: > > > > ---- On Wed, 26 May 2021 12:22:46 -0500 Julia Kreger wrote ---- > > > > I am happy to help in the openstack/contributor-guide repo (as doing as part of the upstream institute training activity). > > > > > > Added you, gmann. > > > > > > > I can help too. > > > > -yoctozepto > > > Hurrah. Added you both. Thanks :) It's very low activity now, but increasing bus factor is always a good thing. Stephen From patryk.jakuszew at gmail.com Tue Jun 1 12:11:16 2021 From: patryk.jakuszew at gmail.com (Patryk Jakuszew) Date: Tue, 1 Jun 2021 14:11:16 +0200 Subject: [nova] Proper way to regenerate request_specs of existing instances? Message-ID: Hi! I have a Rocky deployment and I want to enable AggregateInstanceExtraSpecsFilter on it. There is one slight problem I'm trying to solve in a proper way: fixing the request_specs of instances that are already running. 
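For context, AggregateInstanceExtraSpecsFilter matches scoped flavor extra specs against host aggregate metadata. A minimal sketch of the setup being discussed (aggregate, host, key and value names are illustrative):

    # tag an aggregate and its hosts with a CPU-generation property
    openstack aggregate create --property cpu_gen=v4 agg-cpu-v4
    openstack aggregate add host agg-cpu-v4 compute-01
    # scope the matching extra spec onto the flavor
    openstack flavor set --property aggregate_instance_extra_specs:cpu_gen=v4 m1.cpu-v4

Instances booted before the flavor change keep the flavor that was embedded in their request_spec at boot time, which is exactly the gap described next.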
After enabling the filter, I want to add necessary metadata keys to flavors, but this won't be propagated into request_specs of running instances, and this will cause issues later on (like scheduler selecting wrong destination hosts for migration, for example) Few years ago I encountered a similar problem on Mitaka: that deployment already had the filter enabled, but some flavors were misconfigured and lacked the metadata keys. I ended up writing a crude Python script which connected directly into the Nova database, searched for bad request_specs and manually appended the necessary extra_specs keys into request_specs JSON blob. Now, my question is: has anyone encountered a similar scenario before? Is there a more clean method for regeneration of instance request_specs, or do I have to modify the JSON blobs manually by writing directly into the database? -- Regards, Patryk Jakuszew -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Jun 1 12:34:50 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 1 Jun 2021 14:34:50 +0200 Subject: [nova] Proper way to regenerate request_specs of existing instances? In-Reply-To: References: Message-ID: On Tue, Jun 1, 2021 at 2:17 PM Patryk Jakuszew wrote: > Hi! > > I have a Rocky deployment and I want to enable > AggregateInstanceExtraSpecsFilter on it. There is one slight problem I'm > trying to solve in a proper way: fixing the request_specs of instances that > are already running. > > After enabling the filter, I want to add necessary metadata keys to > flavors, but this won't be propagated into request_specs of running > instances, and this will cause issues later on (like scheduler selecting > wrong destination hosts for migration, for example) > > Few years ago I encountered a similar problem on Mitaka: that deployment > already had the filter enabled, but some flavors were misconfigured and > lacked the metadata keys. I ended up writing a crude Python script which > connected directly into the Nova database, searched for bad request_specs > and manually appended the necessary extra_specs keys into request_specs > JSON blob. > > Now, my question is: has anyone encountered a similar scenario before? Is > there a more clean method for regeneration of instance request_specs, or do > I have to modify the JSON blobs manually by writing directly into the > database? > > As Nova looks at the RequestSpec records for knowing what the user was asking when creating the instance, and as the instance values can be modified when for example you move an instance, that's why we don't support to modify the RequestSpec directly. In general, this question is about AZs : as in general some operators want to modify the AZ value of a specific RequestSpec, this would also mean that the users using the related instance would not understand why now this instance would be on another AZ if the host is within another one. As you said, if you really want to modify the RequestSpec object, please then write a Python script that would use the objects class by getting the RequestSpec object directly and then persisting it again. -Sylvain -- > Regards, > Patryk Jakuszew > -------------- next part -------------- An HTML attachment was scrubbed... 
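A minimal sketch of the kind of script Sylvain describes, using nova's versioned objects rather than raw SQL (run on a controller with /etc/nova/nova.conf available; the instance UUID and extra-spec key are placeholders, and object behaviour should be verified against the exact Rocky code before touching production data):

    #!/usr/bin/env python
    # Re-persist one instance's RequestSpec through the objects layer.
    from nova import config
    from nova import context
    from nova import objects

    config.parse_args([], default_config_files=['/etc/nova/nova.conf'])
    objects.register_all()

    ctxt = context.get_admin_context()
    spec = objects.RequestSpec.get_by_instance_uuid(ctxt, 'INSTANCE-UUID')
    # Add the key the new filter will match; the embedded flavor is
    # serialized back into the request_specs row on save().
    spec.flavor.extra_specs['aggregate_instance_extra_specs:cpu_gen'] = 'v4'
    spec.save()

Testing it first on a throwaway instance, then hard-rebooting or migrating it to confirm scheduling behaves as expected, is the safer path.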
URL: From rafaelweingartner at gmail.com Tue Jun 1 13:55:19 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 1 Jun 2021 10:55:19 -0300 Subject: [CLOUDKITTY] Fix tests cases broken by flask >=2.0.1 Message-ID: Hello guys, I was reviewing the patch https://review.opendev.org/c/openstack/cloudkitty/+/793790, and decided to propose an alternative patch ( https://review.opendev.org/c/openstack/cloudkitty/+/793973). Could you guys review it? The idea I am proposing is that, instead of mocking the root object ("flask.request"), we address the issue by mocking only the needed methods and attributes. This facilitates the understanding of the unit test, and also helps people to pin-point problems right away as the mocked attributes/methods are clearly seen in the unit test. -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Jun 1 14:00:36 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 1 Jun 2021 10:00:36 -0400 Subject: [ops] trial ops meetups team meeting on oftc now Message-ID: let's try to kick the tires of the new IRC location (irc.oftc.net) for a ops meetups team reunion https://etherpad.opendev.org/p/ops-meetups-team -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Jun 1 14:33:51 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 1 Jun 2021 10:33:51 -0400 Subject: [ops] ops meetups team meeting restarted succesfully on OFTC! Message-ID: We had a quick rehearsal meeting of the OpenStack Ops Meetups team on IRC on the new IRC host this morning. Minutes are linked below, however the key points : new team member amorin OFTC IRC worked fine openstack's meetbot instance is correctly connected and the expected commands worked (big shoutout to Jeremy Stanley and the opendev infra team for a seamless switch!) No strong preference to move away from IRC amongst those present We will try to re-animate the #openstack-operators as a channel for technical discussions between openstack operators Meeting ended Tue Jun 1 14:28:21 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:28 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2021/ops_meetup_team.2021-06-01-14.02.html 10:28 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2021/ops_meetup_team.2021-06-01-14.02.txt 10:28 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2021/ops_meetup_team.2021-06-01-14.02.log.html Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Jun 1 16:02:24 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 1 Jun 2021 09:02:24 -0700 Subject: [requirements][docs] sphinx and docutils major version update In-Reply-To: <20210528020128.earj2i2v5nxjnlu3@mthode.org> References: <20210528020128.earj2i2v5nxjnlu3@mthode.org> Message-ID: FYI, some projects may need an additional LaTeX font package for PDF rendering after the update: https://review.opendev.org/c/openstack/designate-dashboard/+/791558/3/bindep.txt Michael On Thu, May 27, 2021 at 7:05 PM Matthew Thode wrote: > > Looks like a major version update came along and broke things. 
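Regarding the extra LaTeX font package mentioned above: the exact package comes from the linked designate-dashboard change and varies by distro, so the following bindep.txt line is only an illustrative guess, not a quote from that patch:

    # bindep.txt - fonts for PDF docs builds; verify the exact package
    # your distro needs before copying this
    texlive-fonts-recommended [doc platform:dpkg]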
I'd > appreciate if some docs people could take a look at > https://review.opendev.org/793022 > > Thanks, > > -- > Matthew Thode From patryk.jakuszew at gmail.com Tue Jun 1 16:06:01 2021 From: patryk.jakuszew at gmail.com (Patryk Jakuszew) Date: Tue, 1 Jun 2021 18:06:01 +0200 Subject: [nova] Proper way to regenerate request_specs of existing instances? In-Reply-To: References: Message-ID: On Tue, 1 Jun 2021 at 14:35, Sylvain Bauza wrote: > In general, this question is about AZs : as in general some operators want to modify the AZ value of a specific RequestSpec, this would also mean that the users using the related instance would not understand why now this instance would be on another AZ if the host is within another one. To be more specific: we do have AZs already, but we also want to add AggregateInstanceExtraSpecsFilter in order to prepare for a scenario with having multiple CPU generations in each AZ. > As you said, if you really want to modify the RequestSpec object, please then write a Python script that would use the objects class by getting the RequestSpec object directly and then persisting it again. Alright, I will try that again, but using the Nova objects class as you suggest. Thanks for the answer! -- Regards, Patryk From swamycnn at gmail.com Tue Jun 1 11:47:49 2021 From: swamycnn at gmail.com (Swamy C.N.N) Date: Tue, 1 Jun 2021 17:17:49 +0530 Subject: Is there s way, i can boot nova instance using external DHCP server ? Message-ID: Hi, I know, this does not fit in the cloud operator model nor the openstack-neutron way of doing things. However, i have this use case: - vmware is booting guest vm with underlying DHCP server for IPAM integrated with DNS for host registration as the guest vms power on. So, vm gets ip with its hostname DNS registered as it boots. - I started a few vms on openstack, I can attach guest vms on these provider n/w and I can see they are reachable just like any other vm on an enterprise network similar to vmware guest vms and this is great. However, I cannot find any docs, option where these guest vms can cross over to external DHCP for ip address management because the provider network operates at Layer3. Found this blueprint, review discussion https://blueprints.launchpad.net/neutron/+spec/dhcp-relay Which is close to what i'm looking for and not updated for sometime. Wanted to know if anyone has come across this scenario and how to get over this. As one of the options, validated openstack-designate service which solves the hostname registration not the IPAM. Also it needs site DNS server domain management . Something, I can look at if the external DHCP server is not an option for nova boot. Thanks, Swamy -------------- next part -------------- An HTML attachment was scrubbed... URL: From swamycnn at gmail.com Tue Jun 1 11:58:02 2021 From: swamycnn at gmail.com (Swamy C.N.N) Date: Tue, 1 Jun 2021 17:28:02 +0530 Subject: [ops] Is there s way, i can boot nova instance using external DHCP Message-ID: Hi I know, this does not fit in the cloud operator model nor the openstack-neutron way of doing things. However, i have this use case: - vmware is booting guest vm with underlying DHCP server for IPAM integrated with DNS for host registration as the guest vms power on. So, vm gets ip with its hostname DNS registered as it boots. - I started a few vms on openstack, I can attach guest vms on these provider n/w and I can see they are reachable just like any other vm on an enterprise network similar to vmware guest vms and this is great. 
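One approach that comes up for this use case, sketched here as a possibility rather than a validated recipe: disable neutron's DHCP on the provider subnet so the guest's DHCP broadcast reaches the physical network's server, and relax port security if the external IPAM hands out addresses neutron did not allocate (all names below are illustrative):

    # flat provider network bridged to the datacenter segment
    openstack network create --provider-network-type flat \
        --provider-physical-network physnet1 --share corp-net
    # no neutron DHCP on this subnet; the external server answers instead
    openstack subnet create --network corp-net --subnet-range 10.10.0.0/16 \
        --no-dhcp corp-subnet
    # anti-spoofing drops traffic if the leased IP differs from the port's
    # fixed IP; security groups must be cleared before disabling port security
    openstack port set --no-security-group --disable-port-security <port-uuid>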
However, I cannot find any docs, option where these guest vms can cross over to external DHCP for ip address management because the provider network operates at Layer3. Found this blueprint, review discussion https://blueprints.launchpad.net/neutron/+spec/dhcp-relay Which is close to what i'm looking for and not updated for sometime. Wanted to know if anyone has come across this scenario and how to get over this. As one of the options, validated openstack-designate service which solves the hostname registration not the IPAM. Also it needs site DNS server domain management . Something, I can look at if the external DHCP server is not an option for nova boot. Thanks, Swamy -------------- next part -------------- An HTML attachment was scrubbed... URL: From levonmelikbekjan at yahoo.de Tue Jun 1 12:12:45 2021 From: levonmelikbekjan at yahoo.de (levonmelikbekjan at yahoo.de) Date: Tue, 1 Jun 2021 14:12:45 +0200 Subject: AW: AW: Customization of nova-scheduler In-Reply-To: <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> Message-ID: <000001d756df$6e7ab430$4b701c90$@yahoo.de> Hello Stephen, thank you for your quick reply and your valuable information. I am well aware that this task will not be easy to accomplish. However, I like challenging tasks because you can learn a lot from them. Thanks for the warning, but I think you misunderstood me. It is not my intention to reserve ressources for anyone. Let me explain you my aim more detailed. Hosts will exist in our infrastructure that will belong to a user (owner). Each user will have an aggregate that will be assigned to his user object as an id in the "extra" attribute field. All of the compute nodes that will be owned by the owner are located within this host aggregate. If hosts from an aggregate that belong to someone are not in use, everyone else is allowed to use them (for example a user who does not have any servers in his possession). When the owner decides to create a VM and our cloud doesn't have enough resources, all servers will be deleted from his compute node based on the aggregate id which is located in his user object. Then the function "Launch instance" tries again to create his VM. You're right, the API requests will take some time. The only requests I will send are one-off: - Get user by name/id - Get server list - Get aggregate by id ... and maybe a few times the server delete call. Maybe it is possible to store aggregate information locally and access it with python?!?! Or maybe it is better to store all the host information directly in the user object without having always to call the aggregate API. Alternatively, hypervisors could be used to minimize the number of calls for deletion of servers. This would only be a onetime call at the very beginning of my python script to determine the amount of free and used resources. With hypervisors and the information of the required resources by the owner of an aggregate, I could delete specific servers without having to delete all of them. I like the feature with the aggregates very much, especially because it is possible to add new compute nodes at any time. Kind regards Levon -----Ursprüngliche Nachricht----- Von: Stephen Finucane Gesendet: Montag, 31. 
Mai 2021 18:21 An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org Betreff: Re: AW: Customization of nova-scheduler On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > Hello Stephen, > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. > > Two cases should be solved! > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. What you're describing is commonly referred to as "preemptible" or "spot" instances. This topic has a long, complicated history in nova and has yet to be implemented. Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova. > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art. As noted above, there is lots of information on the internet about this. With this research done, you'll need to decide whether this is something you want to approach within nova itself, via out-of-tree extensions or via a third party project. If you're opting for integration with nova, then you'll need to think long and hard about how you would design such a system and start working on a spec (a design document) outlining your proposed solution. Details on how to write a spec are discussed at [1]. The only extension points nova offers today are scheduler filters and weighers so your options for an out-of-tree extension approach will be limited. A third party project will arguably be the easiest approach but you will be restricted to talking to nova's REST APIs which may limit the design somewhat. This Blazar spec [2] could give you some ideas on this approach (assuming it was never actually implemented, though it may well have been). > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. 
I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? This could potentially work, but I suspect there will be serious performance implications with this, particularly at scale. Scheduler filters are historically used for simple things like "find me a group of hosts that have this metadata attribute I set on my image". Making API calls sounds like something that would take significant time and therefore slow down the schedule process. You'd also have to decide what your heuristic for deciding which VM(s) to delete would be, since there's nothing obvious in nova that you could use. You could use something as simple as filter extra specs or something as complicated as an external service. This should be lots to get you started. Once again, do make sure you're aware of what you're getting yourself into before you start. This could get complicated very quickly :) Cheers, Stephen > I'm very happy to have found you!!! > > Thank you really much for your time! [1] https://specs.openstack.org/openstack/nova-specs/readme.html [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > Best regards > Levon > > -----Ursprüngliche Nachricht----- > Von: Stephen Finucane > Gesendet: Montag, 31. Mai 2021 12:34 > An: Levon Melikbekjan ; > openstack at lists.openstack.org > Betreff: Re: Customization of nova-scheduler > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > Hello Openstack team, > > > > is it possible to customize the nova-scheduler via Python? If yes, how? > > Yes, you can provide your own filters and weighers. This is documented at [1]. > > Hope this helps, > Stephen > > [1] > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > our-own-filter > > > > > Best regards > > Levon > > > > From Fadi.Badine at enghouse.com Tue Jun 1 13:45:01 2021 From: Fadi.Badine at enghouse.com (Fadi Badine) Date: Tue, 1 Jun 2021 13:45:01 +0000 Subject: Tacker Auto Scale Support Message-ID: Hello, I would like to know if VNF auto scaling is supported by Tacker and if so in which release. I tried looking at release notes but couldn't find anything. Thanks! Best regards, Fadi Badine Product Manager Office: +961 (1) 900 818 Mobile: +961 (3) 822 966 W: www.enghousenetworks.com E: fadi.badine at enghouse.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at est.tech Tue Jun 1 17:14:19 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 01 Jun 2021 19:14:19 +0200 Subject: =?UTF-8?B?562U5aSNOg==?= [Nova] Meeting time poll In-Reply-To: References: <56c234f49e7c41bb86a86af824f41187@inspur.com> Message-ID: Hi, So today we decided to have an extra meeting timeslot first Thursday of every month at 8:00 UTC on #openstack-nova on the OFTC IRC server. So the first such meeting will happen on Thursday 2021.06.03. I've update the meeting wiki with the new timings and pushed a patch to add the meeting to the IRC meeting schedule[2]. See you on the meeting! Cheers, gibi [1] https://wiki.openstack.org/wiki/Meetings/Nova [2] https://review.opendev.org/c/opendev/irc-meetings/+/794010 On Wed, May 26, 2021 at 09:50, Balazs Gibizer wrote: > Hi, > > On Tue, May 25, 2021 at 07:10, Sam Su (苏正伟) > wrote: >> Hi, gibi: >> I'm very sorry for respone later. >> A meeting around 8:00 UTC seems very appropriate to us. It is >> afternoon work time in East Asian when 8:00 UTC. >> Now my colleague, have some work on Cyborg across with Nova, >> passthroug device, TPM and so on. If they can join the irc meeting >> talking with the community , it will be much helpful. >> > > @Sam: We discussed your request yesterday[1] and it seems that the > team is not objecting against a monthly office hour in > #openstack-nova around UTC 8 or UTC 9. But we did not agreed which > day we should have it so I set up a poll[2]. > > @Team: As we discussed yesterday I opened a poll to agree on the day > of the week and the exact start time for the Asia friendly office > hours slot. Please vote in the poll[2] before next Tuesday. > > Cheers, > gibi > > [1] > http://eavesdrop.openstack.org/meetings/nova/2021/nova.2021-05-25-16.00.log.html#l-100 > [2] https://doodle.com/poll/svrnmrtn6nnknzqp > >> >> -----邮件原件----- >> 发件人: Balazs Gibizer [mailto:balazs.gibizer at est.tech] >> 发送时间: 2021年5月14日 14:12 >> 收件人: Sam Su (苏正伟) >> 抄送: alifshit at redhat.com; openstack-discuss at lists.openstack.org >> 主题: Re: [Nova] Meeting time poll >> >> >> >> On Fri, May 14, 2021 at 01:23, Sam Su (苏正伟) >> wrote: >>> From: Sam Su (苏正伟) >>> Sent: Friday, May 14, 2021 03:23 >>> To: alifshit at redhat.com >>> Cc: openstack-discuss at lists.openstack.org >>> Subject: Re: [Nova] Meeting time poll >>> >>> Hi, Nova team: >> >> Hi Sam! >> >>> There are many asian developers for Openstack community. I >>> found the current IRC time of Nova is not friendly to them, >>> especially >>> to East Asian. >>> If they >>> can take part in the IRC meeting, the Nova may have more >>> developers. >>> Of >>> cource, Central Europe and NA West Coast is firstly considerable. >>> If >>> the team could schedule the meeting once per month, time suitable >>> for >>> asians, more people would participate in the meeting discussion. >> >> You have a point. In the past Nova had alternating meeting time >> slots one for EU+NA and one for the NA+Asia timezones. Our >> experience was that the NA+Asia meeting time slot was mostly >> lacking participants. So we merged the two slots. But I can imagine >> that the situation has changed since and there might be need for an >> alternating meeting again. >> >> We can try what you suggest and do an Asia friendly meeting once a >> month. The next question is what time you would like to have that >> meeting. Or more specifically which part of the nova team you would >> like to meet more? 
>> >> * Do a meeting around 8:00 UTC to meet Nova devs from the EU >> >> * Do a meeting around 0:00 UTC to meet Nova devs from North America >> >> If we go for the 0:00 UTC time slot then I need somebody to chair >> that meeting as I'm from the EU. >> >> Alternatively to having a formal meeting I can offer to hold a free >> style office hour each Thursday 8:00 UTC in #openstack-nova. I made >> the same offer when we moved the nova meeting to be a non >> alternating one. >> But honestly I don't remember ever having discussion happening >> specifically due to that office hour in #openstack-nova. >> >> Cheers, >> gibi >> >> p.s.: the smime in your mail is not really mailing list friendly. >> Your mail does not appear properly in the archive. >> >> >> >> > > > From Albert.Shih at obspm.fr Tue Jun 1 18:44:35 2021 From: Albert.Shih at obspm.fr (Albert Shih) Date: Tue, 1 Jun 2021 20:44:35 +0200 Subject: [victoria][cinder ?] Dell Unity + Iscsi Message-ID: Hi everyone I've a small openstack configuration with 4 computes nodes, a Dell Unity 480F for the storage. I'm using cinder with iscsi. Everything work when I create a instance. But some instance after few time are not reponsive. When I check on the hypervisor I can see [888240.310461] sd 14:0:0:2: [sdb] tag#120 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [888240.310493] sd 14:0:0:2: [sdb] tag#120 Sense Key : Illegal Request [current] [888240.310502] sd 14:0:0:2: [sdb] tag#120 Add. Sense: Logical unit not supported [888240.310510] sd 14:0:0:2: [sdb] tag#120 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 [888240.310519] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 [888240.311045] sd 14:0:0:2: [sdb] tag#121 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [888240.311050] sd 14:0:0:2: [sdb] tag#121 Sense Key : Illegal Request [current] [888240.311065] sd 14:0:0:2: [sdb] tag#121 Add. Sense: Logical unit not supported [888240.311070] sd 14:0:0:2: [sdb] tag#121 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 [888240.311074] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 [888240.342482] sd 14:0:0:2: [sdb] tag#70 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [888240.342490] sd 14:0:0:2: [sdb] tag#70 Sense Key : Illegal Request [current] [888240.342496] sd 14:0:0:2: [sdb] tag#70 Add. Sense: Logical unit not supported I check on the hypervisor, no error at all on the ethernet interface. I check on the switch, no error at all on the interface on the switch. No sure but it's seem the problem appear more often when the instance are doing nothing during some time. Every firmware, software on the Unity are uptodate. The 4 computes are exactly same, they run the same version of the nova-compute & OS & firmware on the hardware. Any clue ? Or place to search the problem ? 
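The "Logical unit not supported" sense key typically means the host still holds a SCSI device for a LUN the array no longer exports to it, for example after an unrelated volume detach changed the host's mappings on the Unity. A first diagnostic pass, sketched with standard open-iscsi/multipath tooling (an assumption, not a confirmed diagnosis for this case):

    iscsiadm -m session -P 3     # sessions and the SCSI devices behind them
    multipath -ll                # device-mapper maps and per-path states
    lsscsi                       # what the kernel currently sees per HCTL
    # rescan a session after mappings changed on the array side
    iscsiadm -m session -r <session-id> --rescan

Comparing that output against the LUNs the Unity actually presents to the compute node should show whether sdb is a stale device left behind by an earlier detach.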
Regards -- Albert SHIH Observatoire de Paris xmpp: jas at obspm.fr Heure local/Local time: Tue Jun 1 08:27:42 PM CEST 2021 From smooney at redhat.com Tue Jun 1 20:55:43 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 01 Jun 2021 21:55:43 +0100 Subject: AW: Customization of nova-scheduler In-Reply-To: <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> Message-ID: <0fbc1e49a3f87aadc82fa12a53454bc76a3dae4a.camel@redhat.com> On Mon, 2021-05-31 at 17:21 +0100, Stephen Finucane wrote: > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > Hello Stephen, > > > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. > > > > Two cases should be solved! > > > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > one thing to keep in mind is that end users are not allow to know the capstity of the cloud in terms of number of host, the resouces on a host or what host there vm is placeed on. so as a user the conceph of "a low priority uses a VM from Openstack with half performance of the available host" is not something that you can express arctecurally in nova. flavor define the size of vms in absolute term i.e. 4GB of ram not relitve "50% of the host". we have a 3 laryer schuldeing prcoess that start with a query to the placment service for a set of quantitative resouce class and qualitative traits. that produces a set fo allcoation candiate against a serise of host that could fit the instance, we then filter those host useing python filters wich are boolean fucntion that either pass the host or reject it finally after filtering we weight the remaining hosts and selecet one to boot the vm. once you have completed a steph in this processs you can nolonger go to a previous step and you can never readd a host afteer it has been elimiated by placemnt or a filter to be considered again. as a result if you get the end of the avaiable hosts and there are none that can fix your vm we cannot delete a vm and start again without redoing all the work and possible facing with concurrent api requests. this is why this is a hard problem with out an external service that can rebalance exiting workloads and free up capsity. > > These cases should work for unlimited users. 
In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. > > What you're describing is commonly referred to as "preemptible" or "spot" > instances. This topic has a long, complicated history in nova and has yet to be > implemented. Searching for "preemptible instances openstack" should yield you > lots of discussion on the topic along with a few proof-of-concept approaches > using external services or out-of-tree modifications to nova. > > > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? > > As hinted above, this is likely to be a very difficult project given the fraught > history of the idea. I don't want to dissuade you from this work but you should > be aware of what you're getting into from the start. If you're serious about > pursuing this, I suggest you first do some research on prior art. As noted > above, there is lots of information on the internet about this. With this > research done, you'll need to decide whether this is something you want to > approach within nova itself, via out-of-tree extensions or via a third party > project. If you're opting for integration with nova, then you'll need to think > long and hard about how you would design such a system and start working on a > spec (a design document) outlining your proposed solution. Details on how to > write a spec are discussed at [1]. The only extension points nova offers today > are scheduler filters and weighers so your options for an out-of-tree extension > approach will be limited. A third party project will arguably be the easiest > approach but you will be restricted to talking to nova's REST APIs which may > limit the design somewhat. This Blazar spec [2] could give you some ideas on > this approach (assuming it was never actually implemented, though it may well > have been). > > > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? > > This could potentially work, but I suspect there will be serious performance > implications with this, particularly at scale. Scheduler filters are > historically used for simple things like "find me a group of hosts that have > this metadata attribute I set on my image". 
Making API calls sounds like > something that would take significant time and therefore slow down the schedule > process. You'd also have to decide what your heuristic for deciding which VM(s) > to delete would be, since there's nothing obvious in nova that you could use. > You could use something as simple as filter extra specs or something as > complicated as an external service. yes implementing preemption in the scheduler as filet was disccused in the passed and discounted for the performance implication stephen hinted at. in tree we currentlyt do not allow filter to make any api or db queires. that approach also will not work toady since you would have to rexecute the query to the placment service after deleting an instance when you run out of capacity and restart the filtering which a filter cannot do as i noted above. the most recent spec in this area was https://review.opendev.org/c/openstack/nova-specs/+/438640 for the integrated approch and https://review.opendev.org/c/openstack/nova-specs/+/554212/12 which proposed adding a pending state for use with a standalone service https://gitlab.cern.ch/ttsiouts/ReaperServicePrototype ther are a number of presentation on this form cern/stackhapc https://www.stackhpc.com/scientific-sig-at-the-dublin-ptg.html http://openstack-in-production.blogspot.com/2018/02/maximizing-resource-utilization-with.html https://openlab.cern/sites/openlab.web.cern.ch/files/2018-07/Containers_on_Baremetal_and_Preemptible_VMs_at_CERN_and_SKA.pdf https://indico.cern.ch/event/739089/sessions/282073/attachments/1689073/2717151/ASDF_preemptible.pdf the current state is rebuilding from cell0 is not support but the pending state was never added and the reaper service was not upstream. work in this are has now move the blazar project as stphen noted in [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html but is dont think it has made much progress. https://review.opendev.org/q/topic:%22preemptibles%22+(status:open%20OR%20status:merged) nova previously had a pluggable scheduler that would have allowed you to reimplent the scudler entirely from scratch but we removed that capability in the last year or two. at this point the only viable approach that will not take multiple upstream cycles to this is really to use an external service. > > This should be lots to get you started. Once again, do make sure you're aware of > what you're getting yourself into before you start. This could get complicated > very quickly :) yes anything other then adding the pending state to nova will be very complex due to placement interaction. you would really need to implement a fallback query mechanism in the scudler iteself. anything after the call to placement is already too late. you might be able to reuse consumer types to make some allocation preemtiblae and have a prefilter decide if an allocation should be a normal nova consumer or premtable consumer based on a flavor extra spec.https://docs.openstack.org/placement/train/specs/train/approved/2005473-support-consumer-types.html this would still require the pending state and an external reaper service to free the capsity to be clean but its a possible direction. > > Cheers, > Stephen > > > I'm very happy to have found you!!! > > > > Thank you really much for your time! 
> > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > > Best regards > > Levon > > > > -----Ursprüngliche Nachricht----- > > Von: Stephen Finucane > > Gesendet: Montag, 31. Mai 2021 12:34 > > An: Levon Melikbekjan ; openstack at lists.openstack.org > > Betreff: Re: Customization of nova-scheduler > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > Hello Openstack team, > > > > > > is it possible to customize the nova-scheduler via Python? If yes, how? > > > > Yes, you can provide your own filters and weighers. This is documented at [1]. > > > > Hope this helps, > > Stephen > > > > [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > > > > > > > > Best regards > > > Levon > > > > > > > > > > From smooney at redhat.com Tue Jun 1 21:14:45 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 01 Jun 2021 22:14:45 +0100 Subject: [nova] Proper way to regenerate request_specs of existing instances? In-Reply-To: References: Message-ID: On Tue, 2021-06-01 at 18:06 +0200, Patryk Jakuszew wrote: > On Tue, 1 Jun 2021 at 14:35, Sylvain Bauza wrote: > > In general, this question is about AZs : as in general some operators want to modify the AZ value of a specific RequestSpec, this would also mean that the users using the related instance would not understand why now this instance would be on another AZ if the host is within another one. > > To be more specific: we do have AZs already, but we also want to add > AggregateInstanceExtraSpecsFilter in order to prepare for a scenario > with having multiple CPU generations in each AZ. the supported way to do that woudl be to resize the instance. nova currently does not suppout updating the embedded flavor any other way. that said this is yet another usecase for a recreate api that would allow updating the embedded flavor and image metadta. nova expect flavours to be effectively immutable once an instace start to use them. the same is true of image properties so partly be design this has not been easy to support in nova because it was a usgage model we have declard out of scope. the solution that is vaiable today is rebuidl ro resize but a recreate api is really want you need. > > > As you said, if you really want to modify the RequestSpec object, please then write a Python script that would use the objects class by getting the RequestSpec object directly and then persisting it again. > > Alright, I will try that again, but using the Nova objects class as you suggest. this has come up often enough that we __might__ (im stressing might since im not sure we really want to do this) consider adding a nova manage command to do this. e.g. nova-mange instance flavor-regenerate and nova-mange instance image-regenerate those command woudl just recrate the embeded flavor and image metadta without moving the vm or otherwise restarting it. you would then have to hard reboot it or migrate it sepereatlly. im not convicned this is a capablity we should provide to operators in tree however via nova-manage. with my downstream hat on im not sure how supportable it woudl for example since like nova reset-state it woudl be very easy to render vms unbootable in there current localthouh if a tenatn did a hard reboot and cause all kinds of stange issues that are hard to debug an fix. > > Thanks for the answer! 
> > -- > Regards, > Patryk > From Arkady.Kanevsky at dell.com Tue Jun 1 21:24:22 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 1 Jun 2021 21:24:22 +0000 Subject: [interop] something strange Message-ID: Team, Once we merged https://review.opendev.org/c/osf/interop/+/786116 I expect that all old guidelines will move into directory "previous". I just sync my master to latest and still see old guidelines on top level directory. Any idea why? Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Tue Jun 1 21:39:20 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 1 Jun 2021 23:39:20 +0200 Subject: [interop] something strange In-Reply-To: References: Message-ID: Hi Arkady, I had to revert it (see the latest comment or https://review.opendev.org/c/osf/interop/+/792883) as it caused troubles with the refstack server - it wasn't able to retrieve the guidelines. Reason for revert: refstack server gives 404 on the guidelines: https://refstack.openstack.org/#/guidelines .. seems like https://review.opendev.org/c/osf/refstack/+/790940 didn't handle the update of the guidelines location everywhere - I suspect that some changes in refstack-ui are needed as well, ah I'm sorry for inconvenience, On Tue, 1 Jun 2021 at 23:24, Kanevsky, Arkady wrote: > Team, > > Once we merged https://review.opendev.org/c/osf/interop/+/786116 > > I expect that all old guidelines will move into directory “previous”. > > I just sync my master to latest and still see old guidelines on top level > directory. > > Any idea why? > > > > Thanks, > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Jun 1 22:56:28 2021 From: amy at demarco.com (Amy Marrich) Date: Tue, 1 Jun 2021 17:56:28 -0500 Subject: [Meeting] RDO Community Meeting Message-ID: Just a reminder that this week's meeting will be our video meeting[0][1] as it is the first meeting of the month. Our IRC meetings will be on OFTC in the #rdo channel beginning next week. Thanks, Amy (spotz) 0 - https://meet.google.com/uzo-tfkt-top. 1 - https://etherpad.opendev.org/p/RDO-Meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From suzhengwei at inspur.com Wed Jun 2 01:56:54 2021 From: suzhengwei at inspur.com (=?utf-8?B?U2FtIFN1ICjoi4/mraPkvJ8p?=) Date: Wed, 2 Jun 2021 01:56:54 +0000 Subject: =?utf-8?B?562U5aSNOiDnrZTlpI06IFtOb3ZhXSBNZWV0aW5nIHRpbWUgcG9sbA==?= In-Reply-To: References: <56c234f49e7c41bb86a86af824f41187@inspur.com> Message-ID: <3c8c17a23ed84d8a9f6f05eb9a52b0db@inspur.com> Hi, gibi I am glad to introduce the extra meeting to my colleagues and other developers in China. I will be on the meeting. See you. -----邮件原件----- 发件人: Balazs Gibizer [mailto:balazs.gibizer at est.tech] 发送时间: 2021年6月2日 1:14 收件人: Sam Su (苏正伟) 抄送: alifshit at redhat.com; openstack-discuss at lists.openstack.org 主题: Re: 答复: [Nova] Meeting time poll Hi, So today we decided to have an extra meeting timeslot first Thursday of every month at 8:00 UTC on #openstack-nova on the OFTC IRC server. 
So the first such meeting will happen on Thursday 2021.06.03. I've update the meeting wiki with the new timings and pushed a patch to add the meeting to the IRC meeting schedule[2]. See you on the meeting! Cheers, gibi [1] https://wiki.openstack.org/wiki/Meetings/Nova [2] https://review.opendev.org/c/opendev/irc-meetings/+/794010 On Wed, May 26, 2021 at 09:50, Balazs Gibizer wrote: > Hi, > > On Tue, May 25, 2021 at 07:10, Sam Su (苏正伟) > wrote: >> Hi, gibi: >> I'm very sorry for respone later. >> A meeting around 8:00 UTC seems very appropriate to us. It is >> afternoon work time in East Asian when 8:00 UTC. >> Now my colleague, have some work on Cyborg across with Nova, >> passthroug device, TPM and so on. If they can join the irc meeting >> talking with the community , it will be much helpful. >> > > @Sam: We discussed your request yesterday[1] and it seems that the > team is not objecting against a monthly office hour in #openstack-nova > around UTC 8 or UTC 9. But we did not agreed which day we should have > it so I set up a poll[2]. > > @Team: As we discussed yesterday I opened a poll to agree on the day > of the week and the exact start time for the Asia friendly office > hours slot. Please vote in the poll[2] before next Tuesday. > > Cheers, > gibi > > [1] > http://eavesdrop.openstack.org/meetings/nova/2021/nova.2021-05-25-16.0 > 0.log.html#l-100 [2] https://doodle.com/poll/svrnmrtn6nnknzqp > >> >> -----邮件原件----- >> 发件人: Balazs Gibizer [mailto:balazs.gibizer at est.tech] >> 发送时间: 2021年5月14日 14:12 >> 收件人: Sam Su (苏正伟) >> 抄送: alifshit at redhat.com; openstack-discuss at lists.openstack.org >> 主题: Re: [Nova] Meeting time poll >> >> >> >> On Fri, May 14, 2021 at 01:23, Sam Su (苏正伟) >> wrote: >>> From: Sam Su (苏正伟) >>> Sent: Friday, May 14, 2021 03:23 >>> To: alifshit at redhat.com >>> Cc: openstack-discuss at lists.openstack.org >>> Subject: Re: [Nova] Meeting time poll >>> >>> Hi, Nova team: >> >> Hi Sam! >> >>> There are many asian developers for Openstack community. I >>> found the current IRC time of Nova is not friendly to them, >>> especially to East Asian. >>> If they >>> can take part in the IRC meeting, the Nova may have more >>> developers. >>> Of >>> cource, Central Europe and NA West Coast is firstly considerable. >>> If >>> the team could schedule the meeting once per month, time suitable >>> for asians, more people would participate in the meeting >>> discussion. >> >> You have a point. In the past Nova had alternating meeting time slots >> one for EU+NA and one for the NA+Asia timezones. Our experience was >> that the NA+Asia meeting time slot was mostly lacking participants. >> So we merged the two slots. But I can imagine that the situation has >> changed since and there might be need for an alternating meeting >> again. >> >> We can try what you suggest and do an Asia friendly meeting once a >> month. The next question is what time you would like to have that >> meeting. Or more specifically which part of the nova team you would >> like to meet more? >> >> * Do a meeting around 8:00 UTC to meet Nova devs from the EU >> >> * Do a meeting around 0:00 UTC to meet Nova devs from North America >> >> If we go for the 0:00 UTC time slot then I need somebody to chair >> that meeting as I'm from the EU. >> >> Alternatively to having a formal meeting I can offer to hold a free >> style office hour each Thursday 8:00 UTC in #openstack-nova. I made >> the same offer when we moved the nova meeting to be a non >> alternating one. 
>> But honestly I don't remember a discussion ever happening
>> specifically because of that office hour in #openstack-nova.
>>
>> Cheers,
>> gibi
>>
>> p.s.: the smime in your mail is not really mailing-list friendly.
>> Your mail does not appear properly in the archive.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3606 bytes
Desc: not available
URL:

From zakhar at gmail.com Wed Jun 2 03:24:55 2021
From: zakhar at gmail.com (Zakhar Kirpichenko)
Date: Wed, 2 Jun 2021 06:24:55 +0300
Subject: [telemetry] Wallaby ceilometer.compute.discovery fails to get domain metadata
Message-ID:

Hi!

I'm facing a weird situation where the Ceilometer compute agent fails to get libvirt domain metadata on Ubuntu 20.04 LTS with the latest updates, kernel 5.4.0-65-generic, and OpenStack Wallaby Nova compute services installed from the official Wallaby repo for Ubuntu 20.04. All components have been deployed manually.

The Ceilometer agent is configured with instance_discovery_method = libvirt_metadata. The agent is unable to fetch the domain metadata, and the following error messages appear in /var/log/ceilometer/ceilometer-agent-compute.log on agent start and on periodic polling attempts:

2021-06-01 16:01:18.297 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid baf06f57-ac5b-4661-928c-7adaeaea0311 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present
2021-06-01 16:01:18.298 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid 208c0d7a-41a3-4fa6-bf72-2f9594ac6b8d metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present
2021-06-01 16:01:18.300 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid d979a527-c1ba-4b29-8e30-322d4d2efcf7 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present
2021-06-01 16:01:18.301 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid a41f21b6-766d-4979-bbe1-84f421b0c3f2 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present
2021-06-01 16:01:18.302 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid fd5ffe32-c6d6-4898-9ba2-2af1ffebd502 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present
2021-06-01 16:01:18.302 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid aff042c9-c311-4944-bc42-09ccd5a90eb7 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present
2021-06-01 16:01:18.303 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid 9510bc46-e4e2-490c-9cbe-c9eb5e349b8d metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present
2021-06-01 16:01:18.304 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid 4d2c2c9b-4eff-460a-a00b-19fdbe33f5d4 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present
2021-06-01 16:01:18.305 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster cpu_l3_cache, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.305 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.305 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.305 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.307 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177
2021-06-01 16:01:18.307 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177

All domains exist and their metadata is readily available using virsh or a simple Python script. The Nova compute service is fully functional; the Ceilometer agent is partially functional, as it is able to export compute.node.cpu.* metrics but nothing related to libvirt domains.

I already filed a bug report, https://bugs.launchpad.net/ceilometer/+bug/1930446, but would appreciate feedback and/or advice.
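For completeness, here is a minimal sketch of such a check (assuming the python3-libvirt bindings listed below), which mirrors what the libvirt_metadata discovery method does, i.e. it requests the nova metadata element from each domain by its namespace URI:

#!/usr/bin/env python3
# Minimal sketch: ask each libvirt domain for the nova metadata element,
# the same lookup that produces the "Requested metadata element is not
# present" errors above. Assumes a readable qemu:///system socket.
import libvirt

NOVA_METADATA_URI = "http://openstack.org/xmlns/libvirt/nova/1.0"

conn = libvirt.openReadOnly("qemu:///system")
for dom in conn.listAllDomains():
    try:
        dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_METADATA_URI)
        print(dom.UUIDString(), "nova metadata present")
    except libvirt.libvirtError as exc:
        print(dom.UUIDString(), "FAILED:", exc)

If a script like this reports the metadata as present for every domain while the agent still logs the errors above, the problem would seem to be on the agent's side (connection URI or permissions) rather than in the domain XML itself.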
Best regards,
Zakhar

---

More deployment information:

Installed Ceilometer-related packages:
ceilometer-agent-compute 2:16.0.0-0ubuntu1~cloud0
ceilometer-common 2:16.0.0-0ubuntu1~cloud0
python3-ceilometer 2:16.0.0-0ubuntu1~cloud0

Installed Nova-related packages:
nova-common 3:23.0.0-0ubuntu1~cloud0
nova-compute 3:23.0.0-0ubuntu1~cloud0
nova-compute-kvm 3:23.0.0-0ubuntu1~cloud0
nova-compute-libvirt 3:23.0.0-0ubuntu1~cloud0
python3-nova 3:23.0.0-0ubuntu1~cloud0
python3-novaclient 2:17.4.0-0ubuntu1~cloud0

Installed Libvirt-related packages:
libvirt-clients 6.0.0-0ubuntu8.9
libvirt-daemon 6.0.0-0ubuntu8.9
libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.9
libvirt-daemon-driver-storage-rbd 6.0.0-0ubuntu8.9
libvirt-daemon-system 6.0.0-0ubuntu8.9
libvirt-daemon-system-systemd 6.0.0-0ubuntu8.9
libvirt0:amd64 6.0.0-0ubuntu8.9
python3-libvirt 6.1.0-1

Installed Qemu-related packages:
libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.9
qemu-block-extra:amd64 1:4.2-3ubuntu6.16
qemu-kvm 1:4.2-3ubuntu6.16
qemu-system-common 1:4.2-3ubuntu6.16
qemu-system-data 1:4.2-3ubuntu6.16
qemu-system-gui:amd64 1:4.2-3ubuntu6.16
qemu-system-x86 1:4.2-3ubuntu6.16
qemu-utils 1:4.2-3ubuntu6.16

Apparmor is enabled and running the default configuration, no messages related to apparmor and libvirt, qemu, nova-compute, ceilometer-agent, etc are visible in the logs.

I am also attaching the relevant Ceilometer agent and Nova configuration files:

ceilometer.conf:

[DEFAULT]
transport_url = rabbit://WORKING-TRANSPORT-URL
verbose = true
debug = true
auth_strategy = keystone
log_dir = /var/log/ceilometer

[compute]
instance_discovery_method = libvirt_metadata

[keystone_authtoken]
www_authenticate_uri = http://CONTROLLER.VIP:5000
auth_url = http://CONTROLLER.VIP:5000
memcached_servers = LIST-OF-WORKING-MEMCACHED-SERVERS
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = ceilometer
password = WORKING_PASSWORD

[service_credentials]
auth_type = password
auth_url = http://CONTROLLER.VIP:5000/v3
memcached_servers = LIST-OF-WORKING-MEMCACHED-SERVERS
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = WORKING_PASSWORD
interface = internalURL
region_name = RegionOne

[oslo_messaging_notifications]
driver = messagingv2
transport_url = rabbit://WORKING-TRANSPORT-URL

polling.yaml:

---
sources:
    - name: some_pollsters
      interval: 300
      meters:
        - cpu
        - cpu_l3_cache
        - memory.usage
        - network.incoming.bytes
        - network.incoming.packets
        - network.outgoing.bytes
        - network.outgoing.packets
        - disk.device.read.bytes
        - disk.device.read.requests
        - disk.device.write.bytes
        - disk.device.write.requests
        - hardware.cpu.util
        - hardware.cpu.user
        - hardware.cpu.nice
        - hardware.cpu.system
        - hardware.cpu.idle
        - hardware.cpu.wait
        - hardware.cpu.kernel
        - hardware.cpu.interrupt
        - hardware.memory.used
        - hardware.memory.total
        - hardware.memory.buffer
        - hardware.memory.cached
        - hardware.memory.swap.avail
        - hardware.memory.swap.total
        - hardware.system_stats.io.outgoing.blocks
        - hardware.system_stats.io.incoming.blocks
        - hardware.network.ip.incoming.datagrams
        - hardware.network.ip.outgoing.datagrams

nova.conf:

[DEFAULT]
log_dir = /var/log/nova
lock_path = /var/lock/nova
state_path = /var/lib/nova
instance_usage_audit_period = hour
compute_monitors = cpu.virt_driver,numa_mem_bw.virt_driver
reserved_host_memory_mb = 2048
instance_usage_audit = True
resume_guests_state_on_host_boot = true
my_ip = COMPUTE.HOST.IP.ADDR
report_interval = 30
transport_url = rabbit://WORKING-TRANSPORT-URL
[api]
[api_database]
[barbican]
[cache]
expiration_time = 600
backend = oslo_cache.memcache_pool
backend_argument = memcached_expire_time:660
enabled = true
memcache_servers = LIST-OF-WORKING-MEMCACHED-SERVERS
[cinder]
catalog_info = volumev3::internalURL
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[cyborg]
[database]
connection = mysql+pymysql://WORKING-CONNECTION-STRING
connection_recycle_time = 280
max_pool_size = 5
max_retries = -1
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://CONTROLLER.VIP:9292
[guestfs]
[healthcheck]
[hyperv]
[image_cache]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
www_authenticate_uri = http://CONTROLLER.VIP:5000
auth_url = http://CONTROLLER.VIP:5000
region_name = RegionOne
memcached_servers = LIST-OF-WORKING-MEMCACHED-SERVERS
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = WORKING-PASSWORD
[libvirt]
live_migration_scheme = ssh
live_migration_permit_post_copy = true
disk_cachemodes="network=writeback,block=writeback"
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = SECRET-UUID
[metrics]
[mks]
[neutron]
auth_url = http://CONTROLLER.VIP:5000
region_name = RegionOne
memcached_servers = LIST-OF-WORKING-MEMCACHED-SERVERS
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = WORKING-PASSWORD
[notifications]
notify_on_state_change = vm_and_task_state
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver = messagingv2
[oslo_messaging_rabbit]
amqp_auto_delete = false
rabbit_ha_queues = true
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://CONTROLLER.VIP:5000/v3
username = placement
password = WORKING-PASSWORD
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = https://WORKING-URL:6080/vnc_auto.html
[workarounds]
[wsgi]
[zvm]
[cells]
enable = False
[os_region_name]
openstack =

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kaurmanpreet2620 at gmail.com Wed Jun 2 03:44:14 2021
From: kaurmanpreet2620 at gmail.com (manpreet kaur)
Date: Wed, 2 Jun 2021 09:14:14 +0530
Subject: Tacker Auto Scale Support
In-Reply-To:
References:
Message-ID:

Hi Fadi Badine,

In the OpenStack Newton release, VNF auto-scaling and manual-scaling features were introduced in Tacker. Please check the release notes for the Newton release:
https://docs.openstack.org/releasenotes/tacker/newton.html

Feel free to get back to us in case of any concerns.

Thanks & Regards,
Manpreet Kaur

On Tue, Jun 1, 2021 at 10:11 PM Fadi Badine wrote:

> Hello,
>
> I would like to know if VNF auto scaling is supported by Tacker and if so
> in which release.
>
> I tried looking at the release notes but couldn't find anything.
>
> Thanks!
> Best regards,
>
> Fadi Badine
> Product Manager
> Office: +961 (1) 900 818
> Mobile: +961 (3) 822 966
> W: www.enghousenetworks.com
> E: fadi.badine at enghouse.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gthiemonge at redhat.com Wed Jun 2 08:18:38 2021
From: gthiemonge at redhat.com (Gregory Thiemonge)
Date: Wed, 2 Jun 2021 10:18:38 +0200
Subject: [Octavia] Weekly meeting moving to OFTC
Message-ID:

Hi team,

The next Octavia team meeting (today at 16:00 UTC) will be on the OFTC network on #openstack-lbaas.

Thanks,

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mark at stackhpc.com Wed Jun 2 10:14:34 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Wed, 2 Jun 2021 11:14:34 +0100
Subject: [kolla] IRC channel -> OFTC
Message-ID:

Hi Koalas,

As you may already know, the OpenStack IRC channels have moved from Freenode [1] to OFTC. So from today onwards, our weekly team meetings and general project discussion will be on OFTC, on the same channel (#openstack-kolla). Kindly register yourself on the OFTC network [2] if you have not done so yet.

Thanks,
Mark

[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html
[2] https://www.oftc.net/Services/#register-your-account

From marios at redhat.com Wed Jun 2 11:17:32 2021
From: marios at redhat.com (Marios Andreou)
Date: Wed, 2 Jun 2021 14:17:32 +0300
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
Message-ID:

Hello all

Having discussed this with some members of the tripleo ci team (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: ysandeep) for core on the tripleo-ci repos (tripleo-ci, tripleo-quickstart and tripleo-quickstart-extras).

Sandeep joined the team about 1.5 years ago and has from the start demonstrated his eagerness to learn and an excellent work ethic, having made many useful code submissions [1] and code reviews [2] to the CI repos and beyond. Thanks Sandeep and keep up the good work!

Please reply to this mail with a +1 or -1 for objections in the usual manner. If there are no objections we can declare it official in a few days.

regards, marios

[1] https://review.opendev.org/q/owner:sandeepyadav93
[2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180

From sshnaidm at redhat.com Wed Jun 2 11:20:34 2021
From: sshnaidm at redhat.com (Sagi Shnaidman)
Date: Wed, 2 Jun 2021 14:20:34 +0300
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
In-Reply-To:
References:
Message-ID:

+1!

On Wed, Jun 2, 2021 at 2:19 PM Marios Andreou wrote:

> Hello all
>
> Having discussed this with some members of the tripleo ci team
> (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc:
> ysandeep) for core on the tripleo-ci repos (tripleo-ci,
> tripleo-quickstart and tripleo-quickstart-extras).
>
> Sandeep joined the team about 1.5 years ago and has from the start
> demonstrated his eagerness to learn and an excellent work ethic,
> having made many useful code submissions [1] and code reviews [2] to
> the CI repos and beyond. Thanks Sandeep and keep up the good work!
>
> Please reply to this mail with a +1 or -1 for objections in the usual
> manner.
If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Wed Jun 2 11:28:29 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Wed, 2 Jun 2021 16:58:29 +0530 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: On Wed, Jun 2, 2021 at 4:55 PM Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > +1 -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssbarnea at redhat.com Wed Jun 2 12:14:23 2021 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Wed, 2 Jun 2021 12:14:23 +0000 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: +1 -- /zbr On 2 Jun 2021 at 12:17:32, Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Wed Jun 2 12:16:50 2021 From: mkopec at redhat.com (Martin Kopec) Date: Wed, 2 Jun 2021 14:16:50 +0200 Subject: [neutron][interop][refstack] New tests and capabilities to track in interop In-Reply-To: <6595086.PSTg7GmUaj@p1> References: <6595086.PSTg7GmUaj@p1> Message-ID: Hi Slawek, thanks for getting back to us and sharing new potential tests and capabilities from neutron-tempest-plugin. Let's first discuss tests which are in tempest directly please. 
We have done an analysis in which we cross-checked the tests we have in our guidelines against the ones (API and non-admin ones) present in tempest at the tempest checkout we currently use, and here are the results:
https://etherpad.opendev.org/p/refstack-test-analysis

There are 110 tempest.api.network tests which we don't have in any guideline yet. Could you please have a look at the list of the tests? Would it make sense to include them in a guideline? Would they extend any network capabilities we have in the OpenStack Powered Platform program, or would we need to create new one(s)?
https://opendev.org/osf/interop/src/branch/master/next.json

Thank you,

On Mon, 24 May 2021 at 16:33, Slawek Kaplonski wrote:
> Hi,
>
> On Monday, 26 April 2021 at 17:48:08 CEST, Martin Kopec wrote:
> > Hi everyone,
> >
> > I would like to further discuss the topics we covered with the neutron team
> > during the PTG [1].
> >
> > * adding address_group API capability
> > It's tested by tests in neutron-tempest-plugin. The first question is
> > whether tests which are not directly in tempest can be a part of a
> > non-add-on marketing program? It's possible to move them to tempest
> > though; until we do so, could they be marked as advisory?
> >
> > * Shall we include QoS tempest tests, since we don't know what share of
> > vendors enable QoS? Could it be an add-on?
> > These tests are also in neutron-tempest-plugin; I assume we're talking
> > about the neutron_tempest_plugin.api.test_qos tests.
> > If we want to include these tests, which program should they belong to?
> > Do we want to create a new one?
> >
> > [1] https://etherpad.opendev.org/p/neutron-xena-ptg
> >
> > Thanks,
> > --
> > Martin Kopec
> > Senior Software Quality Engineer
> > Red Hat EMEA
>
> First of all, sorry that it took so long, but I finally looked into the
> Neutron-related tests and capabilities and I think we can possibly add a
> few things there:
>
> - For "networks-security-groups-CRUD" we can add the "address_groups" API.
> It is now supported by the ML2 plugin [1]. In the neutron-tempest-plugin we
> just have a scenario test [2], but we would probably also need API tests
> for that, correct?
>
> - For networks-l3-CRUD we can optionally add the port_forwarding API. This
> can be added by a service plugin [3], so it may not be enabled in all
> deployments. But maybe there is some "optional feature" category in
> RefStack, and if so, this could be included there. Tests for that are in
> neutron-tempest-plugin [4] and [5].
>
> - There are also 2 other service plugins which I think could be included
> as "optional features" in RefStack, but IMO they don't fit exactly in any
> of the existing groups. Those are QoS [6] and Trunks [7]. Tests for both
> are in the neutron-tempest-plugin as well: QoS [8] and [9], Trunk [10],
> [11] and [12].
>
> Please let me know what you think about it, whether that would be OK, and
> whether you want me to propose some patches for it or you will propose
> them.
> > [1] https://review.opendev.org/c/openstack/neutron-lib/+/741784 > > [2] https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/777833 > > [3] > https://github.com/openstack/neutron/blob/master/neutron/services/portforwarding/pf_plugin.py > > [4] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_port_forwardings.py > > [5] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_port_forwarding_negative.py > > [6] > https://github.com/openstack/neutron/blob/master/neutron/services/qos/qos_plugin.py > > [7] > https://github.com/openstack/neutron/blob/master/neutron/services/trunk/plugin.py > > [8] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_qos.py > > [9] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_qos_negative.py > > [10] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_trunk.py > > [11] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_trunk_details.py > > [12] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_trunk_negative.py > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Wed Jun 2 12:25:14 2021 From: james.slagle at gmail.com (James Slagle) Date: Wed, 2 Jun 2021 08:25:14 -0400 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: On Wed, Jun 2, 2021 at 7:26 AM Marios Andreou wrote: > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > +1! -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Wed Jun 2 12:25:19 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 2 Jun 2021 14:25:19 +0200 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: <422d8bfc-e258-7eee-8d11-82ce15242485@redhat.com> What, Sandeep wasn't core already? +42 :) On 6/2/21 1:17 PM, Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature
Type: application/pgp-signature
Size: 840 bytes
Desc: OpenPGP digital signature
URL:

From vikash.kumarprasad at siemens.com Wed Jun 2 12:34:15 2021
From: vikash.kumarprasad at siemens.com (Kumar Prasad, Vikash)
Date: Wed, 2 Jun 2021 12:34:15 +0000
Subject: PNDriver on openstack VM is not able to communicate to ET200SP device connected to my physical router
Message-ID:

Dear Community,

I have installed OpenStack on CentOS 7 in a VirtualBox VM. Now I am running an application, PNDriver, on an OpenStack VM (VNF), which is supposed to communicate with an ET200SP hardware device connected to my physical home router. My PNDriver is not able to communicate with the ET200SP hardware device.

PNDriver's minimum requirement for the interface it runs on is that ethtool should list the speed, duplex, and port properties; but by default the speed, duplex, and port values show as "unknown" on the OpenStack VM (VNF). I tried setting these values using ethtool, and was able to set the duplex and speed values, but setting the port value throws an error.

My first question is: how can we set the port value of an OpenStack VM (VNF) using ethtool?

My second question: if we create a VM on VirtualBox, VirtualBox provides a bridged type in its network settings. Can I not configure an OpenStack VM (VNF) like a VirtualBox VM, so that my VNF can also receive the broadcast messages broadcast by the hardware devices connected to my home router?

Thanks
Vikash kumar prasad

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ykarel at redhat.com Wed Jun 2 12:40:14 2021
From: ykarel at redhat.com (Yatin Karel)
Date: Wed, 2 Jun 2021 18:10:14 +0530
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
In-Reply-To:
References:
Message-ID:

On Wed, Jun 2, 2021 at 4:53 PM Marios Andreou wrote:
>
> Hello all
>
> Having discussed this with some members of the tripleo ci team
> (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc:
> ysandeep) for core on the tripleo-ci repos (tripleo-ci,
> tripleo-quickstart and tripleo-quickstart-extras).
>
> Sandeep joined the team about 1.5 years ago and has from the start
> demonstrated his eagerness to learn and an excellent work ethic,
> having made many useful code submissions [1] and code reviews [2] to
> the CI repos and beyond.
Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Jun 2 13:10:10 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 2 Jun 2021 07:10:10 -0600 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: +1 WOOOT! On Wed, Jun 2, 2021 at 6:57 AM Alex Schultz wrote: > +1 > > On Wed, Jun 2, 2021 at 5:27 AM Marios Andreou wrote: > >> Hello all >> >> Having discussed this with some members of the tripleo ci team >> (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: >> ysandeep) for core on the tripleo-ci repos (tripleo-ci, >> tripleo-quickstart and tripleo-quickstart-extras). >> >> Sandeep joined the team about 1.5 years ago and has from the start >> demonstrated his eagerness to learn and an excellent work ethic, >> having made many useful code submissions [1] and code reviews [2] to >> the CI repos and beyond. Thanks Sandeep and keep up the good work! >> >> Please reply to this mail with a +1 or -1 for objections in the usual >> manner. If there are no objections we can declare it official in a few >> days >> >> regards, marios >> >> [1] https://review.opendev.org/q/owner:sandeepyadav93 >> [2] >> https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshewale at redhat.com Wed Jun 2 14:06:17 2021 From: bshewale at redhat.com (Bhagyashri Shewale) Date: Wed, 2 Jun 2021 19:36:17 +0530 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: +1 :) Thanks and Regards Bhagyashri Shewale On Wed, Jun 2, 2021 at 4:48 PM Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From moreira.belmiro.email.lists at gmail.com Wed Jun 2 14:12:57 2021
From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira)
Date: Wed, 2 Jun 2021 16:12:57 +0200
Subject: AW: Customization of nova-scheduler
In-Reply-To: <0fbc1e49a3f87aadc82fa12a53454bc76a3dae4a.camel@redhat.com>
References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <0fbc1e49a3f87aadc82fa12a53454bc76a3dae4a.camel@redhat.com>
Message-ID:

Hi Sean,

maybe this is the time to bring up again the discussion regarding preemptible instances support in Nova.

Preemptible/Spot instances are available in all of the major public clouds to allow better resource utilization. OpenStack private clouds suffer from exactly the same issue.

There was a lot of work done in this area during the last 3 years. Most of the work is summarized by the blogs/presentations/cern-gitlab that you mentioned.

CERN has been running this code in production for about a year now. It allows us to use the spare capacity in the compute nodes dedicated to specific services to run batch workloads.

I heard that the "ARDC Nectar Research Cloud" is also running it.

I believe the work that was done is an excellent PoC. Also, to me this looks like it should be a Nova feature. Having an external project to support this functionality is a huge overhead.

cheers,
Belmiro

On Tue, Jun 1, 2021 at 11:03 PM Sean Mooney wrote:

> On Mon, 2021-05-31 at 17:21 +0100, Stephen Finucane wrote:
> > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote:
> > > Hello Stephen,
> > >
> > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with OpenStack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example.
> > >
> > > Two cases should be solved!
> > >
> > > Case 1: A user A with a low priority uses a VM from OpenStack with half the performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created.
> > >
> > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users.
>
> One thing to keep in mind is that end users are not allowed to know the capacity of the cloud in terms of the number of hosts, the resources on a host, or what host their VM is placed on. So, as a user, the concept of "a low priority user uses a VM from OpenStack with half the performance of the available host" is not something that you can express architecturally in nova. Flavors define the size of VMs in absolute terms, i.e. 4GB of RAM, not relative ones ("50% of the host").
> We have a 3-layer scheduling process that starts with a query to the placement service for a set of quantitative resource classes and qualitative traits. That produces a set of allocation candidates against a series of hosts that could fit the instance; we then filter those hosts using Python filters, which are boolean functions that either pass a host or reject it; finally, after filtering, we weigh the remaining hosts and select one to boot the VM.
>
> Once you have completed a step in this process you can no longer go back to a previous step, and you can never re-add a host after it has been eliminated by placement or a filter. As a result, if you get to the end of the available hosts and there are none that can fit your VM, we cannot delete a VM and start again without redoing all the work and possibly racing with concurrent API requests. This is why this is a hard problem without an external service that can rebalance existing workloads and free up capacity.
>
> > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user.
> >
> > What you're describing is commonly referred to as "preemptible" or "spot" instances. This topic has a long, complicated history in nova and has yet to be implemented. Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova.
> >
> > > I'm new to OpenStack, but I've already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start?
> >
> > As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art. As noted above, there is lots of information on the internet about this. With this research done, you'll need to decide whether this is something you want to approach within nova itself, via out-of-tree extensions or via a third party project. If you're opting for integration with nova, then you'll need to think long and hard about how you would design such a system and start working on a spec (a design document) outlining your proposed solution. Details on how to write a spec are discussed at [1]. The only extension points nova offers today are scheduler filters and weighers, so your options for an out-of-tree extension approach will be limited. A third party project will arguably be the easiest approach but you will be restricted to talking to nova's REST APIs, which may limit the design somewhat. This Blazar spec [2] could give you some ideas on this approach (assuming it was never actually implemented, though it may well have been).
>
> > > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I'm currently setting up OpenStack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train.
> > >
> > > My idea is to work with nova schedulers, because they seem to be interesting for my case.
> > > I've found a whole infrastructure description of the provisioning of an instance in OpenStack: https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png.
> > >
> > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python, and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made.
> > >
> > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach?
> >
> > This could potentially work, but I suspect there will be serious performance implications with this, particularly at scale. Scheduler filters are historically used for simple things like "find me a group of hosts that have this metadata attribute I set on my image". Making API calls sounds like something that would take significant time and therefore slow down the scheduling process. You'd also have to decide what your heuristic for deciding which VM(s) to delete would be, since there's nothing obvious in nova that you could use. You could use something as simple as filter extra specs or something as complicated as an external service.
>
> Yes, implementing preemption in the scheduler as a filter was discussed in the past and discounted for the performance implications Stephen hinted at. In-tree we currently do not allow filters to make any API or DB queries. That approach also will not work today since you would have to re-execute the query to the placement service after deleting an instance when you run out of capacity and restart the filtering, which a filter cannot do, as I noted above.
>
> The most recent spec in this area was https://review.opendev.org/c/openstack/nova-specs/+/438640 for the integrated approach and https://review.opendev.org/c/openstack/nova-specs/+/554212/12 which proposed adding a pending state for use with a standalone service:
>
> https://gitlab.cern.ch/ttsiouts/ReaperServicePrototype
>
> There are a number of presentations on this from CERN/StackHPC:
> https://www.stackhpc.com/scientific-sig-at-the-dublin-ptg.html
> http://openstack-in-production.blogspot.com/2018/02/maximizing-resource-utilization-with.html
> https://openlab.cern/sites/openlab.web.cern.ch/files/2018-07/Containers_on_Baremetal_and_Preemptible_VMs_at_CERN_and_SKA.pdf
> https://indico.cern.ch/event/739089/sessions/282073/attachments/1689073/2717151/ASDF_preemptible.pdf
>
> The current state is that rebuilding from cell0 is now supported, but the pending state was never added and the reaper service was not upstreamed.
>
> Work in this area has now moved to the Blazar project, as Stephen noted in [2]:
> https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html
> but I don't think it has made much progress.
> https://review.opendev.org/q/topic:%22preemptibles%22+(status:open%20OR%20status:merged)
>
> Nova previously had a pluggable scheduler that would have allowed you to reimplement the scheduler entirely from scratch, but we removed that capability in the last year or two. At this point the only viable approach that will not take multiple upstream cycles is really to use an external service.
>
> > This should be lots to get you started.
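As a concrete aside on the filter/weigher extension point mentioned above: a minimal out-of-tree filter, as a sketch only (the class and attribute names here are illustrative; the interface is the BaseHostFilter one from the writing-your-own-filter documentation linked at the end of this thread), looks roughly like this:

# Sketch of a custom nova scheduler filter; illustrative names only.
# This is not the preemption mechanism itself, just the extension
# point being discussed in this thread.
from nova.scheduler import filters


class HasEnoughRamFilter(filters.BaseHostFilter):
    """Pass only hosts with enough free RAM for the incoming request."""

    # Assumption: this filter does not need to re-run on rebuild.
    RUN_ON_REBUILD = False

    def host_passes(self, host_state, spec_obj):
        # host_state is the scheduler's per-host resource view; spec_obj
        # is the RequestSpec of the instance being scheduled.
        return host_state.free_ram_mb >= spec_obj.flavor.memory_mb

Such a filter is enabled through the [filter_scheduler] enabled_filters option in nova.conf. Note that, as explained above, a filter can only pass or reject hosts; it cannot delete a VM and re-run the placement query, which is why preemption cannot be implemented this way.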
> > Once again, do make sure you're aware of what you're getting yourself into before you start. This could get complicated very quickly :)
>
> Yes, anything other than adding the pending state to nova will be very complex due to placement interaction. You would really need to implement a fallback query mechanism in the scheduler itself; anything after the call to placement is already too late. You might be able to reuse consumer types to make some allocations preemptible and have a prefilter decide if an allocation should be a normal nova consumer or a preemptible consumer based on a flavor extra spec:
> https://docs.openstack.org/placement/train/specs/train/approved/2005473-support-consumer-types.html
> This would still require the pending state and an external reaper service to free the capacity to be clean, but it's a possible direction.
>
> > Cheers,
> > Stephen
> >
> > > I'm very happy to have found you!!!
> > >
> > > Thank you very much for your time!
> >
> > [1] https://specs.openstack.org/openstack/nova-specs/readme.html
> > [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html
> >
> > > Best regards
> > > Levon
> > >
> > > -----Original Message-----
> > > From: Stephen Finucane
> > > Sent: Monday, 31 May 2021 12:34
> > > To: Levon Melikbekjan ; openstack at lists.openstack.org
> > > Subject: Re: Customization of nova-scheduler
> > >
> > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote:
> > > > Hello OpenStack team,
> > > >
> > > > is it possible to customize the nova-scheduler via Python? If yes, how?
> > >
> > > Yes, you can provide your own filters and weighers. This is documented at [1].
> > >
> > > Hope this helps,
> > > Stephen
> > >
> > > [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter
> > >
> > > > Best regards
> > > > Levon

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Arkady.Kanevsky at dell.com Wed Jun 2 14:40:36 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Wed, 2 Jun 2021 14:40:36 +0000
Subject: [interop] something strange
In-Reply-To:
References:
Message-ID:

OK. Let's discuss it on next week's Interop call. I will postpone updating the wiki page that points to these guidelines for now.

Thanks,
Arkady

From: Martin Kopec
Sent: Tuesday, June 1, 2021 4:39 PM
To: Kanevsky, Arkady
Cc: openstack-discuss; Goutham Pacha Ravi; Ghanshyam Mann; Vida Haririan
Subject: Re: [interop] something strange

Hi Arkady,

I had to revert it (see the latest comment or https://review.opendev.org/c/osf/interop/+/792883) as it caused trouble with the refstack server - it wasn't able to retrieve the guidelines.

Reason for revert: the refstack server gives a 404 on the guidelines: https://refstack.openstack.org/#/guidelines .. it seems like https://review.opendev.org/c/osf/refstack/+/790940 didn't handle the update of the guidelines location everywhere - I suspect that some changes in refstack-ui are needed as well. Ah, I'm sorry for the inconvenience.

On Tue, 1 Jun 2021 at 23:24, Kanevsky, Arkady wrote:

Team,
Once we merged https://review.opendev.org/c/osf/interop/+/786116 I expect that all old guidelines will move into the "previous" directory. I just synced my master to the latest and still see the old guidelines in the top-level directory. Any idea why?
Thanks,
Arkady Kanevsky, Ph.D.
SP Chief Technologist & DE
Dell Technologies office of CTO
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 7204955

--
Martin Kopec
Senior Software Quality Engineer
Red Hat EMEA

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From smooney at redhat.com Wed Jun 2 15:04:16 2021
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 02 Jun 2021 16:04:16 +0100
Subject: AW: Customization of nova-scheduler
In-Reply-To:
References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <0fbc1e49a3f87aadc82fa12a53454bc76a3dae4a.camel@redhat.com>
Message-ID:

On Wed, 2021-06-02 at 16:12 +0200, Belmiro Moreira wrote:
> Hi Sean,
>
> maybe this is the time to bring up again the discussion regarding
> preemptible instances support in Nova.

Maybe. Realistically I'm not sure we have the capacity to do the detailed design required this cycle, but we could discuss it with an aim to having something ready for next cycle. I still think this is a valuable capability, which is partly why I brought this topic up with gibi this morning:
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/latest.log.html#t2021-06-02T10:26:24
His reply is here:
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/latest.log.html#t2021-06-02T12:00:03

I was exploring the question of whether the soon-to-be-introduced consumer types impact the design in any way. If unified limits were aware of consumer types, and we had a placement:consumer_type=preemptible extra spec for example, and we enhanced nova to use that, we could address some of the awkwardness in the current design where you have to have two projects to do quota properly. Effectively, I think unified limits + consumer types should probably be a prerequisite. We might want to revive the pending state too, although we now have rebuild from cell0, I believe, so that may not be required.
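To make that concrete, and to be clear this is purely hypothetical since no placement:consumer_type extra spec exists in nova today, tagging a flavor for such a prefilter might look roughly like this (sketch assumes python-novaclient, an already-built keystoneauth1 session named "session", and an illustrative flavor name):

# Hypothetical sketch: "placement:consumer_type" is only an idea from
# this thread, not an existing nova extra spec.
from novaclient import client

nova = client.Client("2.72", session=session)  # "session" assumed to exist

flavor = nova.flavors.find(name="m1.small.preemptible")  # illustrative name
flavor.set_keys({"placement:consumer_type": "preemptible"})

A prefilter could then map that key to a placement consumer type when making the allocation-candidates query, which is where the unified-limits awareness would have to come in.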
the other logical approch would be to incoperate it into the nova conductor but im still not sold that it shoudl be in the nova tree. im not againt that either but perhaps a better apprcoh would be to create seperate repo that is a deliverable of nova based on the poc code and incubate it there. im really conviced that an external process is a huge overhead but also haveing to maintain the project release it ectra probably is. with that said i have always been a fan of the idea of having a common agent on a node that ran multiple services. e.g. a way to deploy nova api, nova conductor and nova scheduler as a singel binary to reduce the number of service you need to manage but i think that is a seperate topic. > > cheers, > > Belmiro > > > On Tue, Jun 1, 2021 at 11:03 PM Sean Mooney wrote: > > > On Mon, 2021-05-31 at 17:21 +0100, Stephen Finucane wrote: > > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > > > Hello Stephen, > > > > > > > > I am a student from Germany who is currently working on his bachelor > > thesis. My job is to build a cloud solution for my university with > > Openstack. The functionality should include the prioritization of users. So > > that you can imagine exactly how the whole thing should work, I would like > > to give you an example. > > > > > > > > Two cases should be solved! > > > > > > > > Case 1: A user A with a low priority uses a VM from Openstack with > > half performance of the available host. Then user B comes in with a high > > priority and needs the full performance of the host for his VM. When > > creating the VM of user B, the VM of user A should be deleted because there > > is not enough compute power for user B. The VM of user B is successfully > > created. > > > > > > > > Case 2: A user A with a low priority uses a VM with half the > > performance of the available host, then user B comes in with a high > > priority and needs half of the performance of the host for his VM. When > > creating the VM of user B, user A should not be deleted, since enough > > computing power is available for both users. > > > > > > one thing to keep in mind is that end users are not allow to know the > > capstity of the cloud in terms of number of host, the resouces on a host or > > what > > host there vm is placeed on. so as a user the conceph of "a low priority > > uses a VM from Openstack with half performance of the available host" is not > > something that you can express arctecurally in nova. > > flavor define the size of vms in absolute term i.e. 4GB of ram not relitve > > "50% of the host". > > we have a 3 laryer schuldeing prcoess that start with a query to the > > placment service for a set of quantitative resouce class and qualitative > > traits. > > that produces a set fo allcoation candiate against a serise of host that > > could fit the instance, we then filter those host useing python filters > > wich are boolean fucntion that either pass the host or reject it finally > > after filtering we weight the remaining hosts and selecet one to boot the > > vm. > > > > once you have completed a steph in this processs you can nolonger go to a > > previous step and you can never readd a host afteer it has been elimiated by > > placemnt or a filter to be considered again. as a result if you get the > > end of the avaiable hosts and there are none that can fix your vm we cannot > > delete a vm and start again without redoing all the work and possible > > facing with concurrent api requests. 
> > this is why this is a hard problem with out an external service that can > > rebalance exiting workloads and free up capsity. > > > > > > > > > > These cases should work for unlimited users. In order to optimize the > > whole thing, I would like to write a function that precisely calculates all > > performance components to determine whether enough resources are available > > for the VM of the high priority user. > > > > > > What you're describing is commonly referred to as "preemptible" or "spot" > > > instances. This topic has a long, complicated history in nova and has > > yet to be > > > implemented. Searching for "preemptible instances openstack" should > > yield you > > > lots of discussion on the topic along with a few proof-of-concept > > approaches > > > using external services or out-of-tree modifications to nova. > > > > > > > I’m new to Openstack, but I’ve already implemented cloud projects with > > Microsoft Azure and have solid programming skills. Can you give me a hint > > where and how I can start? > > > > > > As hinted above, this is likely to be a very difficult project given the > > fraught > > > history of the idea. I don't want to dissuade you from this work but you > > should > > > be aware of what you're getting into from the start. If you're serious > > about > > > pursuing this, I suggest you first do some research on prior art. As > > noted > > > above, there is lots of information on the internet about this. With this > > > research done, you'll need to decide whether this is something you want > > to > > > approach within nova itself, via out-of-tree extensions or via a third > > party > > > project. If you're opting for integration with nova, then you'll need to > > think > > > long and hard about how you would design such a system and start working > > on a > > > spec (a design document) outlining your proposed solution. Details on > > how to > > > write a spec are discussed at [1]. The only extension points nova offers > > today > > > are scheduler filters and weighers so your options for an out-of-tree > > extension > > > approach will be limited. A third party project will arguably be the > > easiest > > > approach but you will be restricted to talking to nova's REST APIs which > > may > > > limit the design somewhat. This Blazar spec [2] could give you some > > ideas on > > > this approach (assuming it was never actually implemented, though it may > > well > > > have been). > > > > > > > My university gave me three compute hosts and one control host to > > implement this solution for the bachelor thesis. I’m currently setting up > > Openstack and all the services on the control host all by myself to > > understand all the functionality (sorry for not using Packstack) 😉. All my > > hosts have CentOS 7 and the minimum deployment which I configure is Train. > > > > > > > > My idea is to work with nova schedulers, because they seem to be > > interesting for my case. I've found a whole infrastructure description of > > the provisioning of an instance in Openstack > > https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. 
> > > > > > > > > > The nova scheduler > > https://docs.openstack.org/operations-guide/ops-customize-compute.html is > > the first component, where it is possible to implement functions via Python > > and the Compute API > > https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail > > to check for active VMs and probably delete them if needed before a > > successful request for an instantiation can be made. > > > > > > > > What do you guys think about it? Does it seem like a good starting > > point for you or is it the wrong approach? > > > > > > This could potentially work, but I suspect there will be serious > > performance > > > implications with this, particularly at scale. Scheduler filters are > > > historically used for simple things like "find me a group of hosts that > > have > > > this metadata attribute I set on my image". Making API calls sounds like > > > something that would take significant time and therefore slow down the > > schedule > > > process. You'd also have to decide what your heuristic for deciding > > which VM(s) > > > to delete would be, since there's nothing obvious in nova that you could > > use. > > > You could use something as simple as filter extra specs or something as > > > complicated as an external service. > > yes implementing preemption in the scheduler as filet was disccused in > > the passed and discounted for the performance implication stephen hinted at. > > in tree we currentlyt do not allow filter to make any api or db queires. > > that approach also will not work toady since you would have to rexecute the > > query to the placment service after deleting an instance when you run out > > of capacity and restart the filtering which a filter cannot do as i noted > > above. > > > > the most recent spec in this area was > > https://review.opendev.org/c/openstack/nova-specs/+/438640 for the > > integrated approch and > > https://review.opendev.org/c/openstack/nova-specs/+/554212/12 which > > proposed adding a pending state for use with a standalone service > > > > https://gitlab.cern.ch/ttsiouts/ReaperServicePrototype > > > > ther are a number of presentation on this form cern/stackhapc > > https://www.stackhpc.com/scientific-sig-at-the-dublin-ptg.html > > > > http://openstack-in-production.blogspot.com/2018/02/maximizing-resource-utilization-with.html > > > > https://openlab.cern/sites/openlab.web.cern.ch/files/2018-07/Containers_on_Baremetal_and_Preemptible_VMs_at_CERN_and_SKA.pdf > > > > https://indico.cern.ch/event/739089/sessions/282073/attachments/1689073/2717151/ASDF_preemptible.pdf > > > > > > the current state is rebuilding from cell0 is not support but the pending > > state was never added and the reaper service was not upstream. > > > > work in this are has now move the blazar project as stphen noted in [2] > > > > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > but is dont think it has made much progress. > > https://review.opendev.org/q/topic:%22preemptibles%22+(status:open%20OR%20status:merged) > > > > nova previously had a pluggable scheduler that would have allowed you to > > reimplent the scudler entirely from scratch but we removed that > > capability in the last year or two. at this point the only viable approach > > that will not take multiple upstream cycles to this is really to use an > > external service. > > > > > > > > This should be lots to get you started. 
> > >
> > > This should be lots to get you started. Once again, do make sure
> > > you're aware of what you're getting yourself into before you start.
> > > This could get complicated very quickly :)
> > Yes, anything other than adding the pending state to nova will be very
> > complex due to the placement interaction. You would really need to
> > implement a fallback query mechanism in the scheduler itself; anything
> > after the call to placement is already too late. You might be able to
> > reuse consumer types to make some allocations preemptible and have a
> > prefilter decide if an allocation should be a normal nova consumer or a
> > preemptible consumer based on a flavor extra spec:
> > https://docs.openstack.org/placement/train/specs/train/approved/2005473-support-consumer-types.html
> > This would still require the pending state and an external reaper
> > service to free the capacity to be clean, but it's a possible direction.
> > >
> > > Cheers,
> > > Stephen
> > >
> > > > I'm very happy to have found you!!!
> > > >
> > > > Thank you really much for your time!
> > >
> > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html
> > > [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html
> > > >
> > > > Best regards
> > > > Levon
> > > >
> > > > -----Original Message-----
> > > > From: Stephen Finucane
> > > > Sent: Monday, 31 May 2021 12:34
> > > > To: Levon Melikbekjan; openstack at lists.openstack.org
> > > > Subject: Re: Customization of nova-scheduler
> > > >
> > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote:
> > > > > Hello OpenStack team,
> > > > >
> > > > > is it possible to customize the nova-scheduler via Python? If yes,
> > > > > how?
> > > >
> > > > Yes, you can provide your own filters and weighers. This is documented
> > > > at [1].
> > > >
> > > > Hope this helps,
> > > > Stephen
> > > >
> > > > [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter
> > > >
> > > > > Best regards
> > > > > Levon

From patryk.jakuszew at gmail.com  Wed Jun 2 15:08:37 2021
From: patryk.jakuszew at gmail.com (Patryk Jakuszew)
Date: Wed, 2 Jun 2021 17:08:37 +0200
Subject: [nova] Proper way to regenerate request_specs of existing
 instances?
In-Reply-To:
References:
Message-ID:

On Tue, 1 Jun 2021 at 23:14, Sean Mooney wrote:
> This has come up often enough that we __might__ (I'm stressing might,
> since I'm not sure we really want to do this) consider adding a
> nova-manage command to do this.
>
> e.g. nova-manage instance flavor-regenerate and nova-manage instance
> image-regenerate
>
> Those commands would just recreate the embedded flavor and image
> metadata without moving the VM or otherwise restarting it. You would
> then have to hard reboot it or migrate it separately.
>
> I'm not convinced this is a capability we should provide to operators
> in-tree via nova-manage, however.
>
> With my downstream hat on, I'm not sure how supportable it would be.
> For example, like nova reset-state, it would be very easy to render VMs
> unbootable in their current location, though: if a tenant did a hard
> reboot, it could cause all kinds of strange issues that are hard to
> debug and fix.

I have the same thoughts - initially I wanted to figure out whether such
a feature could be added to the nova-manage toolset, but I'm not sure it
would be a welcome contribution due to the risks it creates.

*Maybe* it would help to add some warnings around it and add an
obligatory '--yes-i-really-really-mean-it' switch, but still - it may
cause undesired long-term consequences if used improperly.

On the other hand, other projects do have options that one can consider
to be similar in nature ('cinder-manage volume update_host' comes to
mind), and I think nova-manage is considered to be a low-level utility
that shouldn't be used in day-to-day operations anyway...
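For now I'm considering a small out-of-band script along these lines (a
rough, untested sketch against our nova version's objects; attribute
names may differ between releases, so test against a database copy
first):

    from nova import config, context, objects

    config.parse_args([])          # load /etc/nova/nova.conf
    objects.register_all()
    ctxt = context.get_admin_context()

    uuid = '...'  # instance whose request_spec needs regenerating
    instance = objects.Instance.get_by_uuid(ctxt, uuid)
    spec = objects.RequestSpec.get_by_instance_uuid(ctxt, uuid)
    spec.flavor = instance.flavor  # re-embed the current flavor
    spec.save()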
From jmlineb at sandia.gov  Wed Jun 2 15:31:23 2021
From: jmlineb at sandia.gov (Linebarger, John)
Date: Wed, 2 Jun 2021 15:31:23 +0000
Subject: Is the server Action Log immutable?
Message-ID:

Hello! Is the server Action Log absolutely immutable? Meaning, if you
make a mistake in handling a server (VM) and it shows up in the Action
Log, is there any way to remove that entry? I understand that the Action
Log is kept in a database but am searching in vain for an API call that
will allow such entries to be modified or deleted as opposed to merely
displayed. What workarounds might exist? Thanks! Enjoy!

John M. Linebarger, PhD, MBA
Principal Member of Technical Staff
Sandia National Laboratories
(Office) 505-845-8282
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From smooney at redhat.com  Wed Jun 2 17:10:03 2021
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 02 Jun 2021 18:10:03 +0100
Subject: Is the server Action Log immutable?
In-Reply-To:
References:
Message-ID: <7cbf6a8cba49ca8b7653ccc7d4e6511f42c18ba3.camel at redhat.com>

On Wed, 2021-06-02 at 15:31 +0000, Linebarger, John wrote:
> Hello! Is the server Action Log absolutely immutable? Meaning, if you
> make a mistake in handling a server (VM) and it shows up in the Action
> Log, is there any way to remove that entry? I understand that the
> Action Log is kept in a database but am searching in vain for an API
> call that will allow such entries to be modified or deleted as opposed
> to merely displayed. What workarounds might exist?
>
It is intended to be immutable, yes, as a form of audit log. It is
provided by the instance actions API:
https://docs.openstack.org/api-ref/compute/#servers-actions-servers-os-instance-actions
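You can read the entries with the standard client, e.g.:

    $ openstack server event list <server>
    $ openstack server event show <server> <request-id>

but there is no supported API for modifying or deleting them.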
> Thanks!  Enjoy!
>
> John M. Linebarger, PhD, MBA
> Principal Member of Technical Staff
> Sandia National Laboratories
> (Office) 505-845-8282

From rlandy at redhat.com  Wed Jun 2 21:19:57 2021
From: rlandy at redhat.com (Ronelle Landy)
Date: Wed, 2 Jun 2021 17:19:57 -0400
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
In-Reply-To:
References:
Message-ID:

absolutely +1

On Wed, Jun 2, 2021 at 10:08 AM Bhagyashri Shewale wrote:

> +1 :)
>
> Thanks and Regards
> Bhagyashri Shewale
>
> On Wed, Jun 2, 2021 at 4:48 PM Marios Andreou wrote:
>
>> Hello all
>>
>> Having discussed this with some members of the tripleo ci team
>> (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc:
>> ysandeep) for core on the tripleo-ci repos (tripleo-ci,
>> tripleo-quickstart and tripleo-quickstart-extras).
>>
>> Sandeep joined the team about 1.5 years ago and has from the start
>> demonstrated his eagerness to learn and an excellent work ethic,
>> having made many useful code submissions [1] and code reviews [2] to
>> the CI repos and beyond. Thanks Sandeep and keep up the good work!
>>
>> Please reply to this mail with a +1 or -1 for objections in the usual
>> manner. If there are no objections we can declare it official in a few
>> days.
>>
>> regards, marios
>>
>> [1] https://review.opendev.org/q/owner:sandeepyadav93
>> [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180
>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dvd at redhat.com  Wed Jun 2 22:24:39 2021
From: dvd at redhat.com (David Vallee Delisle)
Date: Wed, 2 Jun 2021 18:24:39 -0400
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
In-Reply-To:
References:
Message-ID:

+1 indeed

DVD

On Wed, Jun 2, 2021 at 5:26 PM Ronelle Landy wrote:

> absolutely +1
>
> On Wed, Jun 2, 2021 at 10:08 AM Bhagyashri Shewale wrote:
>
>> +1 :)
>>
>> Thanks and Regards
>> Bhagyashri Shewale
>>
>> On Wed, Jun 2, 2021 at 4:48 PM Marios Andreou wrote:
>>
>>> Hello all
>>>
>>> Having discussed this with some members of the tripleo ci team
>>> (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc:
>>> ysandeep) for core on the tripleo-ci repos (tripleo-ci,
>>> tripleo-quickstart and tripleo-quickstart-extras).
>>>
>>> Sandeep joined the team about 1.5 years ago and has from the start
>>> demonstrated his eagerness to learn and an excellent work ethic,
>>> having made many useful code submissions [1] and code reviews [2] to
>>> the CI repos and beyond. Thanks Sandeep and keep up the good work!
>>>
>>> Please reply to this mail with a +1 or -1 for objections in the usual
>>> manner. If there are no objections we can declare it official in a few
>>> days.
>>>
>>> regards, marios
>>>
>>> [1] https://review.opendev.org/q/owner:sandeepyadav93
>>> [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180
>>>
>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From atikoo at bloomberg.net  Wed Jun 2 22:39:54 2021
From: atikoo at bloomberg.net (Ajay Tikoo (BLOOMBERG/ 120 PARK))
Date: Wed, 2 Jun 2021 22:39:54 -0000
Subject: [ops] rabbitmq queues for nova versioned notifications queues
 keep filling up
Message-ID: <60B808BA00D0068401D80001_0_3025859@msclnypmsgsv04>

I am not sure if this is the right channel/format to post this question,
so my apologies in advance if this is not the right place.

We are using OpenStack Rocky. Watcher needs versioned notifications to be
enabled. However, after enabling versioned notifications, the queues for
versioned_notifications (info and error) keep filling up.
Based on the updates to the Watcher cluster data model, it appears that
Watcher is consuming messages, but they still linger in these queues. So
with nova versioned notifications disabled, Watcher is unable to update
the cluster data model (between rebuild intervals), and with them
enabled, it keeps filling up the MQ queues. What is the best way to
resolve this?

Thank you,
Ajay Tikoo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From forums at mossakowski.ch  Wed Jun 2 22:05:52 2021
From: forums at mossakowski.ch (forums at mossakowski.ch)
Date: Wed, 02 Jun 2021 22:05:52 +0000
Subject: [Neutron] sriov network setup for victoria - clarification needed
In-Reply-To:
References:
Message-ID:

Thank you very much, Alonso, for your help!
I've commented out the decorator line; a new exception popped out. I've
updated my gist:

https://gist.github.com/8e6272cbe7748b2c5210fab291360e0b

BR,
Piotr Mossakowski

Sent from ProtonMail mobile

-------- Original Message --------
On 31 May 2021, 18:08, Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote:

> Hello Piotr:
>
> Maybe you should update the pyroute2 library, but this is a blind shot.
>
> What I recommend you do is to find the error you have when retrieving
> the interface VFs. In the same compute node, use this method [1] but
> remove the decorator [2]. Then, in a root shell, run python again:
> >>> from neutron.privileged.agent.linux import ip_lib
> >>> ip_lib.get_link_vfs('ens2f0', '')
>
> That will execute the pyroute2 code without the privsep decorator.
> You'll see what error is returning the method.
>
> Regards.
>
> [1] https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L396-L410
> [2] https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L395
>
> On Mon, May 31, 2021 at 5:50 PM <forums at mossakowski.ch> wrote:
>
> > Hello,
> >
> > I have two victoria environments:
> >
> > 1) a working one, standard setup with separate dedicated interface for
> > sriov (pt0 and pt1)
> >
> > 2) a broken one, where I'm trying to reuse one of already used
> > interfaces (ens2f0 or ens2f1) for sriov. ens2f0 is used for several
> > VLANs (mgmt and storage) and ens2f1 is a neutron external interface
> > which I bridged for VLAN tenant networks. On both I have enabled 63
> > VFs, it's a standard intel 10Gb x540 adapter.
> >
> > On the broken environment, when I'm trying to boot a VM with an sriov
> > port that I created before, I see this error shown on below gist:
> > https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b
> >
> > I'm investigating this for a couple of days now but I'm out of ideas,
> > so I'd like to ask for your support. Is it possible to achieve what
> > I'm trying to do on the 2nd environment? To use the PF as a normal
> > interface and use its VFs for the sriov-agent at the same time?
> >
> > Regards,
> > Piotr Mossakowski
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: publickey - EmailAddress(s=forums at mossakowski.ch) - 0xDC035524.asc
Type: application/pgp-keys
Size: 648 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 294 bytes
Desc: OpenPGP digital signature
URL:

From gmann at ghanshyammann.com  Thu Jun 3 02:20:09 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 02 Jun 2021 21:20:09 -0500
Subject: [all][tc] Technical Committee next weekly meeting on June 3rd at
 1500 UTC
In-Reply-To: <179c5124664.d7d11855244381.7893037772801020341@ghanshyammann.com>
References: <179c5124664.d7d11855244381.7893037772801020341@ghanshyammann.com>
Message-ID: <179cfabbca3.10336c8f3100883.9067913380315024841@ghanshyammann.com>

Hello Everyone,

Below is the agenda for tomorrow's TC meeting, scheduled for June 3rd at
1500 UTC in the #openstack-tc IRC channel on OFTC.

- https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

== Agenda for tomorrow's TC meeting ==

* Roll call

* Follow up on past action items

* Gate health check (dansmith/yoctozepto)
** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/

* Xena cycle tracker status check
** https://etherpad.opendev.org/p/tc-xena-tracker

* Migration from 'Freenode' to 'OFTC' (gmann)
** https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc
** TC resolution
*** https://review.opendev.org/c/openstack/governance/+/793260

* OpenStack Newsletters
** https://etherpad.opendev.org/p/newsletter-openstack-news

* Open Reviews
** https://review.opendev.org/q/project:openstack/governance+is:open

-gmann

---- On Mon, 31 May 2021 19:56:19 -0500 Ghanshyam Mann wrote ----
 > Hello Everyone,
 >
 > NOTE: FROM THIS WEEK ONWARDS, TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE)
 >
 > Technical Committee's next weekly meeting is scheduled for June 3rd at 1500 UTC.
 >
 > If you would like to add topics for discussion, please add them to the below wiki page by
 > Wednesday, June 2nd, at 2100 UTC.
 >
 > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
 >
 >
 > -gmann
 >
 >

From zaitcev at redhat.com  Thu Jun 3 06:22:30 2021
From: zaitcev at redhat.com (Pete Zaitcev)
Date: Thu, 3 Jun 2021 01:22:30 -0500
Subject: [Swift] Object replication failures on newly upgraded servers
In-Reply-To:
References:
Message-ID: <20210603012230.65f2bc33@suzdal.zaitcev.lan>

On Fri, 28 May 2021 16:58:10 +1200
Mark Kirkwood wrote:

> Examining the logs (/var/log/swift/object.log and /var/log/syslog) these
> are not throwing up any red flags (i.e. no failing rsyncs noted).

You should be seeing tracebacks and "Error syncing partition",
"Error syncing handoff partition", or "Exception in top-level
replication loop".

-- Pete

From skaplons at redhat.com  Thu Jun 3 07:10:44 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 03 Jun 2021 09:10:44 +0200
Subject: [neutron] Drivers meeting 04.06.2021 - agenda
Message-ID: <12880716.CG0u9eRpRN@p1>

Hi,

We have one new RFE to discuss at tomorrow's drivers meeting:
https://bugs.launchpad.net/neutron/+bug/1930200 - [RFE] Add support for
Node-Local virtual IP[1]

Please check it, ask any questions you have regarding this proposal, and
see you at the meeting tomorrow.

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat

--------
[1] https://bugs.launchpad.net/neutron/+bug/1930200 - [RFE] Add support for Node-Local virtual IP
-------------- next part --------------
An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ralonsoh at redhat.com Thu Jun 3 07:12:10 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Thu, 3 Jun 2021 09:12:10 +0200 Subject: [Neutron] sriov network setup for victoria - clarification needed In-Reply-To: References: Message-ID: Hi Piotr: I think you are hitting [1]. As you said, each PF has 63 VFs configured. Your error looks very similar to this one reported. Try updating pyroute2 to version 0.6.2. That should contain the fix for this error. Regards. [1]https://github.com/svinota/pyroute2/issues/751 On Thu, Jun 3, 2021 at 12:06 AM wrote: > Muchas gracias Alonso para tu ayuda! > > > I've commented out the decorator line, new exception popped out, I've > updated my gist: > > https://gist.github.com/8e6272cbe7748b2c5210fab291360e0b > > > BR, > > Piotr Mossakowski > > Sent from ProtonMail mobile > > > -------- Original Message -------- > On 31 May 2021, 18:08, Rodolfo Alonso Hernandez < ralonsoh at redhat.com> > wrote: > > > Hello Piotr: > > Maybe you should update the pyroute2 library, but this is a blind shot. > > What I recommend you do is to find the error you have when retrieving the > interface VFs. In the same compute node, use this method [1] but remove the > decorator [2]. Then, in a root shell, run python again: > >>> from neutron.privileged.agent.linux import ip_lib > >>> ip_lib.get_link_vfs('ens2f0', '') > > That will execute the pyroute2 code without the privsep decorator. You'll > see what error is returning the method. > > Regards. > > [1] > https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L396-L410 > [2] > https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L395 > > > On Mon, May 31, 2021 at 5:50 PM wrote: > >> Hello, >> I have two victoria environments: >> 1) a working one, standard setup with separate dedicated interface for >> sriov (pt0 and pt1) >> 2) a broken one, where I'm trying to reuse one of already used interfaces >> (ens2f0 or ens2f1) for sriov. ens2f0 is used for several VLANs (mgmt and >> storage) and ens2f1 is a neutron external interface which I bridged for >> VLAN tenant networks. On both I have enabled 63 VFs, it's a standard intetl >> 10Gb x540 adapter. >> >> On broken environment, when I'm trying to boot a VM with sriov port that >> I created before, I see this error shown on below gist: >> https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b >> >> I'm investigating this for couple days now but I'm out of ideas so I'd >> like to ask for your support. Is this possible to achieve what I'm trying >> to do on 2nd environment? To use PF as normal interface and use its VFs for >> sriov-agent at the same time? >> >> Regards, >> Piotr Mossakowski >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Jun 3 08:05:35 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 3 Jun 2021 10:05:35 +0200 Subject: [largescale-sig] Next meeting: June 2, 15utc In-Reply-To: <33a1e2d5-88fe-826c-47b9-2b01f06163a7@openstack.org> References: <33a1e2d5-88fe-826c-47b9-2b01f06163a7@openstack.org> Message-ID: <941e3204-e7a1-abbc-1632-4d8c0dda91f5@openstack.org> We held our meeting yesterday. 
We agreed to hold a continuation of the "upgrades in large scale
openstack infra" show on OpenInfra.Live on June 10. We also plan to do
another episode around running the OpenStack control plane on OpenStack,
tentatively scheduled for July 15.

Meeting logs at:
http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-06-02-15.00.html

Our next IRC meeting will be June 23, at 1500 UTC on
#openstack-operators on OFTC.

Regards,

-- 
Thierry Carrez (ttx)

From mark at stackhpc.com  Thu Jun 3 08:24:53 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Thu, 3 Jun 2021 09:24:53 +0100
Subject: [kolla] [kolla-ansible] fluentd doesn't forward OpenStack logs
 to Elasticsearch
In-Reply-To:
References:
Message-ID:

On Sat, 29 May 2021 at 11:24, Bernd Bausch wrote:
>
> I might have found a bug in Kolla-Ansible (Victoria version) but don't know where to file it.

Hi Bernd, you can file kolla-ansible bugs on Launchpad [1].

[1] https://bugs.launchpad.net/kolla-ansible/+filebug

>
> This is about central logging. In my installation, none of the interesting logs (Nova, Cinder, Neutron...) are sent to Elasticsearch. I confirmed that using tcpdump.
>
> I found that fluentd's config file /etc/kolla/fluentd/td-agent.conf tags these logs with "kolla.*". But later in the file, one finds filters like this:
>
> # Included from conf/filter/01-rewrite-0.14.conf.j2:
> <match ...>
>   @type rewrite_tag_filter
>   capitalize_regex_backreference yes
>   ...
>   <rule>
>     key programname
>     pattern ^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$
>     tag openstack_python
>   </rule>
> </match>
>
> If I understand this right, this basically re-tags all nova logs with "openstack_python".
>
> The same config file has an output rule at the very end. I think the intention is to make this a catch-all rule (or "match anything else"):
>
> # Included from conf/output/01-es.conf.j2:
> <match *.**>
>   @type copy
>   <store>
>     @type elasticsearch
>     host 192.168.122.209
>     port 9200
>     scheme http
>
> etc.
>
> Unfortunately, the openstack_python tag doesn't match *.**, since it contains no dot. I fixed this with <match *.** openstack_python>. Now I receive all logs, but I am not sure if this is the right way to fix it.

I have seen log aggregation working, although possibly haven't tried
it with Victoria. I can't see any obviously relevant changes, so
please file a bug.

>
> The error, if it is one, is in https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/roles/common/templates/conf/output/01-es.conf.j2.
>
> If you want me to file a bug, please let me know how.
>
> Bernd.
>
>

From soumplis at admin.grnet.gr  Thu Jun 3 08:47:50 2021
From: soumplis at admin.grnet.gr (Alexandros Soumplis)
Date: Thu, 3 Jun 2021 11:47:50 +0300
Subject: [kolla] [kolla-ansible] Magnum UI
Message-ID:

Hi all,

Before submitting a bug on Launchpad, I would like to ask if anyone else
can confirm this issue. I deploy Magnum on the Victoria release using
the Ubuntu binary containers and I do not have the UI installed.
Changing to the source binaries, the UI is installed and works as
expected. Is this a config error, a bug, or maybe a feature? :)

Thank you,
a.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 3620 bytes Desc: S/MIME Cryptographic Signature URL: From mark at stackhpc.com Thu Jun 3 08:53:28 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 3 Jun 2021 09:53:28 +0100 Subject: [kolla] [kolla-ansible] fluentd doesn't forward OpenStack logs to Elasticsearch In-Reply-To: References: Message-ID: On Thu, 3 Jun 2021 at 09:24, Mark Goddard wrote: > > On Sat, 29 May 2021 at 11:24, Bernd Bausch wrote: > > > > I might have found a bug in Kolla-Ansible (Victoria version) but don't know where to file it. > > Hi Bernd, you can file kolla-ansible bugs on Launchpad [1]. > > [1] https://bugs.launchpad.net/kolla-ansible/+filebug > > > > > This is about central logging. In my installation, none of the interesting logs (Nova, Cinder, Neutron...) are sent to Elasticsearch. I confirmed that using tcpdump. > > > > I found that fluentd's config file /etc/kolla/fluentd/td-agent.conf tags these logs with "kolla.*". But later in the file, one finds filters like this: > > > > # Included from conf/filter/01-rewrite-0.14.conf.j2: > > > > @type rewrite_tag_filter > > capitalize_regex_backreference yes > > ... > > > > key programname > > pattern ^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$ > > tag openstack_python > > > > > > If I understand this right, this basically re-tags all nova logs with "openstack_python". > > > > The same config file has an output rule at the very end. I think the intention is to make this a catch-all rule (or "match anything else"): > > > > # Included from conf/output/01-es.conf.j2: > > > > @type copy > > > > @type elasticsearch > > host 192.168.122.209 > > port 9200 > > scheme http > > > > etc. > > > > Unfortunately, the openstack_python tag doesn't match *.**, since it contains no dot. I fixed this with . Now I receive all logs, but I am not sure if this is the right way to fix it. > > I have seen log aggregation working, although possibly haven't tried > it with Victoria. I can't see any obviously relevant changes, so > please file a bug. I tried this out on a recent (CentOS) Victoria deployment. I couldn't reproduce the issue. My test case was nova-scheduler. I restarted it and verified that shutdown/startup logs appear in Elastic. Could you verify whether that case also works for you, and if so, provide a broken case. > > > > > The error, if it is one, is in https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/roles/common/templates/conf/output/01-es.conf.j2. > > > > If you want me to file a bug, please let me know how. > > > > Bernd. > > > > From mark at stackhpc.com Thu Jun 3 09:03:51 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 3 Jun 2021 10:03:51 +0100 Subject: [kolla] [kolla-ansible] fluentd doesn't forward OpenStack logs to Elasticsearch In-Reply-To: References: Message-ID: On Thu, 3 Jun 2021 at 09:53, Mark Goddard wrote: > > On Thu, 3 Jun 2021 at 09:24, Mark Goddard wrote: > > > > On Sat, 29 May 2021 at 11:24, Bernd Bausch wrote: > > > > > > I might have found a bug in Kolla-Ansible (Victoria version) but don't know where to file it. > > > > Hi Bernd, you can file kolla-ansible bugs on Launchpad [1]. > > > > [1] https://bugs.launchpad.net/kolla-ansible/+filebug > > > > > > > > This is about central logging. In my installation, none of the interesting logs (Nova, Cinder, Neutron...) are sent to Elasticsearch. I confirmed that using tcpdump. 
> > I found that fluentd's config file /etc/kolla/fluentd/td-agent.conf tags these logs with "kolla.*". But later in the file, one finds filters like this:
> >
> > # Included from conf/filter/01-rewrite-0.14.conf.j2:
> > <match ...>
> >   @type rewrite_tag_filter
> >   capitalize_regex_backreference yes
> >   ...
> >   <rule>
> >     key programname
> >     pattern ^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$
> >     tag openstack_python
> >   </rule>
> > </match>
> >
> > If I understand this right, this basically re-tags all nova logs with "openstack_python".
> >
> > The same config file has an output rule at the very end. I think the intention is to make this a catch-all rule (or "match anything else"):
> >
> > # Included from conf/output/01-es.conf.j2:
> > <match *.**>
> >   @type copy
> >   <store>
> >     @type elasticsearch
> >     host 192.168.122.209
> >     port 9200
> >     scheme http
> >
> > etc.
> >
> > Unfortunately, the openstack_python tag doesn't match *.**, since it contains no dot. I fixed this with <match *.** openstack_python>. Now I receive all logs, but I am not sure if this is the right way to fix it.

> I have seen log aggregation working, although possibly haven't tried
> it with Victoria. I can't see any obviously relevant changes, so
> please file a bug.

I tried this out on a recent (CentOS) Victoria deployment. I couldn't
reproduce the issue. My test case was nova-scheduler. I restarted it
and verified that shutdown/startup logs appear in Elastic. Could you
verify whether that case also works for you, and if so, provide a
broken case.

> >
> > The error, if it is one, is in https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/roles/common/templates/conf/output/01-es.conf.j2.
> >
> > If you want me to file a bug, please let me know how.
> >
> > Bernd.
> >
> >

From mark at stackhpc.com  Thu Jun 3 09:03:51 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Thu, 3 Jun 2021 10:03:51 +0100
Subject: [kolla] [kolla-ansible] fluentd doesn't forward OpenStack logs
 to Elasticsearch
In-Reply-To:
References:
Message-ID:

On Thu, 3 Jun 2021 at 09:53, Mark Goddard wrote:
>
> On Thu, 3 Jun 2021 at 09:24, Mark Goddard wrote:
> >
> > On Sat, 29 May 2021 at 11:24, Bernd Bausch wrote:
> > >
> > > I might have found a bug in Kolla-Ansible (Victoria version) but don't know where to file it.
> >
> > Hi Bernd, you can file kolla-ansible bugs on Launchpad [1].
> >
> > [1] https://bugs.launchpad.net/kolla-ansible/+filebug
> >
> > >
> > > This is about central logging. In my installation, none of the interesting logs (Nova, Cinder, Neutron...) are sent to Elasticsearch. I confirmed that using tcpdump.
> > >
> > > I found that fluentd's config file /etc/kolla/fluentd/td-agent.conf tags these logs with "kolla.*". But later in the file, one finds filters like this:
> > >
> > > # Included from conf/filter/01-rewrite-0.14.conf.j2:
> > > <match ...>
> > >   @type rewrite_tag_filter
> > >   capitalize_regex_backreference yes
> > >   ...
> > >   <rule>
> > >     key programname
> > >     pattern ^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$
> > >     tag openstack_python
> > >   </rule>
> > > </match>
> > >
> > > If I understand this right, this basically re-tags all nova logs with "openstack_python".
> > >
> > > The same config file has an output rule at the very end. I think the intention is to make this a catch-all rule (or "match anything else"):
> > >
> > > # Included from conf/output/01-es.conf.j2:
> > > <match *.**>
> > >   @type copy
> > >   <store>
> > >     @type elasticsearch
> > >     host 192.168.122.209
> > >     port 9200
> > >     scheme http
> > >
> > > etc.
> > >
> > > Unfortunately, the openstack_python tag doesn't match *.**, since it contains no dot. I fixed this with <match *.** openstack_python>. Now I receive all logs, but I am not sure if this is the right way to fix it.
> >
> > I have seen log aggregation working, although possibly haven't tried
> > it with Victoria. I can't see any obviously relevant changes, so
> > please file a bug.
>
> I tried this out on a recent (CentOS) Victoria deployment. I couldn't
> reproduce the issue. My test case was nova-scheduler. I restarted it
> and verified that shutdown/startup logs appear in Elastic. Could you
> verify whether that case also works for you, and if so, provide a
> broken case.

Could you provide your version of fluentd/td-agent?

docker exec -it fluentd td-agent --version

I have 1.11.2, although we have just confirmed a broken case with
1.12.1. John Garbutt is planning to develop a patch based on your
suggested fix.

> > >
> > > The error, if it is one, is in https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/roles/common/templates/conf/output/01-es.conf.j2.
> > >
> > > If you want me to file a bug, please let me know how.
> > >
> > > Bernd.
> > >
> > >

From malikobaidadil at gmail.com  Thu Jun 3 12:01:35 2021
From: malikobaidadil at gmail.com (Malik Obaid)
Date: Thu, 3 Jun 2021 17:01:35 +0500
Subject: [wallaby][nova] Change Time Zone
Message-ID:

Hi,

I am using the OpenStack Wallaby release on Ubuntu 20.04.

When I try to list openstack compute services, the time zone in the
'Updated At' column is in UTC.

root at controller-khi01 ~(keystone)# openstack compute service list
+----+----------------+------------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host             | Zone     | Status  | State | Updated At                 |
+----+----------------+------------------+----------+---------+-------+----------------------------+
| 4  | nova-conductor | controller-khi01 | internal | enabled | up    | 2021-06-03T11:59:59.000000 |
| 5  | nova-scheduler | controller-khi01 | internal | enabled | up    | 2021-06-03T12:00:08.000000 |
| 8  | nova-compute   | kvm03-a1-khi01   | nova     | enabled | up    | 2021-06-03T12:00:02.000000 |
| 9  | nova-compute   | kvm01-a1-khi01   | nova     | enabled | up    | 2021-06-03T12:00:02.000000 |
+----+----------------+------------------+----------+---------+-------+----------------------------+

I want to change it to some other time zone. Is there a way to do it?

I would really appreciate any input in this regard.

Thank you.

Regards,
Malik Obaid
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marios at redhat.com  Thu Jun 3 12:57:46 2021
From: marios at redhat.com (Marios Andreou)
Date: Thu, 3 Jun 2021 15:57:46 +0300
Subject: [TripleO] tripleo repos going Extended Maintenance stable/train
 OK? (not yet IMO)
In-Reply-To:
References:
Message-ID:

Hello all,

as discussed at [1] the train branch across all tripleo repos will be
moving to extended maintenance at the end of this week. Before we make
the train-em tag I have pushed one last release on train with [2] and
rebased [1] onto that.

As discussed in [1], train is still an active branch for tripleo and we
can and will continue to merge fixes there. It just means that we will
no longer be making tagged releases for train branches.

If you have any questions or concerns about any of this, please reach
out.

regards, marios

[1] https://review.opendev.org/c/openstack/releases/+/790778/2#message-e8ee1f6febb4780ccbb703bf378bcfc08776a49a
[2] https://review.opendev.org/c/openstack/releases/+/794583

On Thu, May 13, 2021 at 3:47 PM Marios Andreou wrote:
>
> Hello TripleO o/
>
> per [1] and the proposal at [2] the stable/train branch for all
> tripleo repos [3] is going to transition to extended maintenance [4].
>
> Once [2] merges, we can still merge things to stable/train but it
> means we can no longer make official openstack tagged releases for
> stable/train.
>
> TripleO is a trailing project so if we want to hold on this for a
> while longer I think that is OK and that would also be my personal
> preference.
>
> From a quick check just now e.g. tripleo-heat-templates @ [5] and at
> the current time there are 87 commits since last September, which isn't
> a tiny amount. So I don't think TripleO is ready to declare stable/train
> as extended maintenance, but perhaps I am wrong, what do you think?
>
> Please comment here or directly at [2] if you prefer
>
> regards, marios
>
> [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022287.html
> [2] https://review.opendev.org/c/openstack/releases/+/790778/2#message-e981f749aeca64ea971f4e697dd16ba5100ca4a4
> [3] https://releases.openstack.org/teams/tripleo.html#train
> [4] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases
> [5] https://github.com/openstack/tripleo-heat-templates/compare/11.5.0...stable/train

From smooney at redhat.com  Thu Jun 3 13:02:03 2021
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 03 Jun 2021 14:02:03 +0100
Subject: [wallaby][nova] Change Time Zone
In-Reply-To:
References:
Message-ID: <9c5db04b47bfe0fe51d343121273847731dd0179.camel@redhat.com>

On Thu, 2021-06-03 at 17:01 +0500, Malik Obaid wrote:
> Hi,
>
> I am using the OpenStack Wallaby release on Ubuntu 20.04.
>
> When I try to list openstack compute services, the time zone in the
> 'Updated At' column is in UTC.

I suspect that is based on the default timezone of the server where the
conductor is running, or perhaps of the individual services.

Nova does not have any configuration for this, and the updated_at time
field is generally provided by the NovaPersistentObject mixin:
https://github.com/openstack/nova/blob/master/nova/objects/base.py#L134-L149

For the compute nodes table it's set by
https://github.com/openstack/nova/blob/da57eebc9e1ab7e48d4c4ef6ec1eeba80d867d81/nova/db/sqlalchemy/api.py#L737-L738
and we explicitly convert to UTC later here:
https://github.com/openstack/nova/blob/da57eebc9e1ab7e48d4c4ef6ec1eeba80d867d81/nova/db/sqlalchemy/api.py#L302-L318

While I have not explicitly found where the compute service record's
updated_at is set, I would guess it is a deliberate design decision to
only store date information in UTC format and leave it to the clients to
convert to local timezones if desired. I think that is probably the
correct approach to take, although I guess you could potentially convert
it at the API, though I'm not sure that would generally be a good
improvement to make.
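For example, on the client side something like this does the conversion
(a quick sketch; zoneinfo needs Python 3.9+, on older versions pytz does
the same job):

    >>> from datetime import datetime, timezone
    >>> from zoneinfo import ZoneInfo
    >>> utc = datetime.fromisoformat('2021-06-03T12:00:08').replace(tzinfo=timezone.utc)
    >>> utc.astimezone(ZoneInfo('Asia/Karachi')).isoformat()
    '2021-06-03T17:00:08+05:00'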
> root at controller-khi01 ~(keystone)# openstack compute service list
> +----+----------------+------------------+----------+---------+-------+----------------------------+
> | ID | Binary         | Host             | Zone     | Status  | State | Updated At                 |
> +----+----------------+------------------+----------+---------+-------+----------------------------+
> | 4  | nova-conductor | controller-khi01 | internal | enabled | up    | 2021-06-03T11:59:59.000000 |
> | 5  | nova-scheduler | controller-khi01 | internal | enabled | up    | 2021-06-03T12:00:08.000000 |
> | 8  | nova-compute   | kvm03-a1-khi01   | nova     | enabled | up    | 2021-06-03T12:00:02.000000 |
> | 9  | nova-compute   | kvm01-a1-khi01   | nova     | enabled | up    | 2021-06-03T12:00:02.000000 |
> +----+----------------+------------------+----------+---------+-------+----------------------------+
>
> I want to change it to some other time zone. Is there a way to do it?

Not that I am aware of, no.

> I would really appreciate any input in this regard.
>
> Thank you.
>
> Regards,
> Malik Obaid

From katonalala at gmail.com  Thu Jun 3 13:05:43 2021
From: katonalala at gmail.com (Lajos Katona)
Date: Thu, 3 Jun 2021 15:05:43 +0200
Subject: [Neutron] sriov network setup for victoria - clarification needed
In-Reply-To:
References:
Message-ID:

Hi,

0.6.3 has another increase for the DEFAULT_RCVBUF:
https://github.com/svinota/pyroute2/issues/813

Regards
Lajos Katona (lajoskatona)

Rodolfo Alonso Hernandez wrote (on Thu, 3 Jun 2021 at 9:16):

> Hi Piotr:
>
> I think you are hitting [1]. As you said, each PF has 63 VFs configured.
> Your error looks very similar to this one reported.
>
> Try updating pyroute2 to version 0.6.2. That should contain the fix for
> this error.
>
> Regards.
>
> [1] https://github.com/svinota/pyroute2/issues/751
>
> On Thu, Jun 3, 2021 at 12:06 AM <forums at mossakowski.ch> wrote:
>
>> Thank you very much, Alonso, for your help!
>>
>> I've commented out the decorator line; a new exception popped out. I've
>> updated my gist:
>>
>> https://gist.github.com/8e6272cbe7748b2c5210fab291360e0b
>>
>> BR,
>>
>> Piotr Mossakowski
>>
>> Sent from ProtonMail mobile
>>
>> -------- Original Message --------
>> On 31 May 2021, 18:08, Rodolfo Alonso Hernandez < ralonsoh at redhat.com>
>> wrote:

Hello Piotr:

Maybe you should update the pyroute2 library, but this is a blind shot.

What I recommend you do is to find the error you have when retrieving the
interface VFs. In the same compute node, use this method [1] but remove the
decorator [2].
Then, in a root shell, run python again: >> >>> from neutron.privileged.agent.linux import ip_lib >> >>> ip_lib.get_link_vfs('ens2f0', '') >> >> That will execute the pyroute2 code without the privsep decorator. You'll >> see what error is returning the method. >> >> Regards. >> >> [1] >> https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L396-L410 >> [2] >> https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L395 >> >> >> On Mon, May 31, 2021 at 5:50 PM wrote: >> >>> Hello, >>> I have two victoria environments: >>> 1) a working one, standard setup with separate dedicated interface for >>> sriov (pt0 and pt1) >>> 2) a broken one, where I'm trying to reuse one of already used >>> interfaces (ens2f0 or ens2f1) for sriov. ens2f0 is used for several VLANs >>> (mgmt and storage) and ens2f1 is a neutron external interface which I >>> bridged for VLAN tenant networks. On both I have enabled 63 VFs, it's a >>> standard intetl 10Gb x540 adapter. >>> >>> On broken environment, when I'm trying to boot a VM with sriov port that >>> I created before, I see this error shown on below gist: >>> https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b >>> >>> I'm investigating this for couple days now but I'm out of ideas so I'd >>> like to ask for your support. Is this possible to achieve what I'm trying >>> to do on 2nd environment? To use PF as normal interface and use its VFs for >>> sriov-agent at the same time? >>> >>> Regards, >>> Piotr Mossakowski >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Jun 3 14:51:09 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 3 Jun 2021 17:51:09 +0300 Subject: [TripleO] tripleo repos going Extended Maintenance stable/train OK? (not yet IMO) In-Reply-To: <2F244667-94A4-4048-A6B1-96D5DC692B39@redhat.com> References: <2F244667-94A4-4048-A6B1-96D5DC692B39@redhat.com> Message-ID: On Thursday, June 3, 2021, Jesse Pretorius wrote: > > > > On 3 Jun 2021, at 13:57, Marios Andreou wrote: > > > > Hello all, > > > > as discussed at [1] the train branch across all tripleo repos will be > > moving to extended maintenance at the end of this week. Before we make > > the train-em tag I have pushed one last release on train with [2] and > > rebased [1] onto that. > > > > As discussed in [1] train is still an active branch for tripleo and we > > can and will continue to merge fixes there. It just means that we will > > no longer be making tagged releases for train branches. > > > > If you have any questions or concerns about any of this please reach out, > > I think this would be problematic for us. We’re still actively submitting > changes to stable/train for tripleo and will likely be for some time. > > yes agree but this does not stop us from continuing to merge whatever we need across train tripleo repos. It only affects tagged releases. > I don’t know what the effect is to us downstream for not being able to tag > upstream. I think the RDO folks (who do the packaging) would need to > respond to that for us to make a suitable final call. As far as I know there is no direct correlation between upstream git repo tags and downstream packaging. I believe the import point used is a particular commit hash for a given repo. I'll reach out to rhos delivery folks and point at this email though to confirm. 
If there is a problem I am not sure how we can resolve it as it sounds like this is a mandatory move for us per the discussion at https://review.opendev.org/c/openstack/releases/+/790778/2#message-e8ee1f6febb4780ccbb703bf378bcfc08776a49a But let's see what packaging folks think thanks for the suggestion regards, marios -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu Jun 3 15:40:48 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 3 Jun 2021 16:40:48 +0100 Subject: [kolla] Reorganization of kolla-ansible documentation In-Reply-To: References: Message-ID: On Mon, 24 May 2021 at 09:30, Mark Goddard wrote: > > On Fri, 14 May 2021 at 21:27, Klemen Pogacnik wrote: > > > > Hello! > > Hi Klemen, > > Thank you for your evaluation of the documentation. I think a lot of > it aligns with the discussions we had in the Kolla Kalls [1] some time > ago. I'll add notes inline. > > It's worth looking at other similar projects for inspiration, e.g. OSA > [2] and TripleO [3]. > > [1] https://etherpad.opendev.org/p/kollakall > [2] https://docs.openstack.org/openstack-ansible/latest/ > [3] https://docs.openstack.org/tripleo-docs/latest/ > > Mark > > > > > I promised to prepare my view as a user of kolla-ansible on its documentation. In my opinion the division between admin guides and user guides is artificial, as the user of kolla-ansible is actually the cloud administrator. > > Absolutely agreed. > > > > > Maybe it would be good to think about reorganizing the structure of documentation. Many good chapters are already written, they only have to be positioned in the right place to be found more easily. > > Agreed also. We now have redirect support [4] in place to keep old > links working, assuming only whole pages are moved. > > [4] doc/source/_extra/.htaccess > > > > > So here is my proposal of kolla-ansible doc's structure: > > > > 1. Introduction > > 1.1. mission > > 1.2. benefits > > 1.3. support matrix > > How about a 'getting started' page, similar to [5]? > > [5] https://docs.openstack.org/kayobe/latest/getting-started.html > > > 2. Architecture > > 2.1. basic architecture > > 2.2. HA architecture > > 2.3. network architecture > > 2.4. storage architecture > > 3. Workflows > > 3.1. preparing the surroundings (networking, docker registry, ...) > > 3.2. preparing servers (packages installation) > > Installation of kolla-ansible should go here. > > > 3.3. configuration (of kolla-ansible and description of basic logic for configuration of Openstack modules) > > 3.4. 1st day procedures (bootstrap, deploy, destroy) > > 3.5. 2nd day procedures (reconfigure, upgrade, add, remove nodes ...) > > 3.6. multiple regions > > 3.7. multiple cloud > > 3.8. security > > 3.9. troubleshooting (how to check, if cloud works, what to do, if it doesn't) > > > 4. Use Cases > > 4.1. all-in-one > > 4.2. basic vm multinode > > 4.3. some production use cases > > What do these pages contain? Something like the current quickstart? > > > 5. Reference guide > > Mostly the same structure as already is. Except it would be desirable that description of each module has: > > - purpose of the module > > - configuration of the module > > - how to use it with links to module docs > > - basic troubleshooting > > 6. Contributor guide > > > > > > The documentation also needs figures, pictures, diagrams to be more understandable. So at least in the first chapters some of them shall be added. 
> This is a common request from users. We have lots of reference
> documentation, but need more high-level architectural information and
> diagrams. Unfortunately this type of documentation is quite hard to
> create, but we would welcome improvements.
>
> >
> > I'm also thinking about convergence of documentation of the kayobe,
> > kolla and kolla-ansible projects. It's true that there's no strict
> > connection between kayobe and the other two, and kolla containers can
> > be used without kolla-ansible playbooks. But the real benefit the user
> > can get is to use all three projects together. But let's leave that for
> > the second phase.
>
> I'm not so sure about converging them into one set of docs. They are
> each fairly separate tools. We added a short section [6] to each
> covering related projects. Perhaps we should make this a dedicated
> page, and provide more information about the Kolla ecosystem?
>
> [6] https://docs.openstack.org/kolla/latest/#related-projects
>
> >
> > So please comment on this proposal. Do you think it's going in the
> > right direction? If yes, I can refine it.

Following up on this, we discussed it in this week's IRC meeting [1]. We
agreed that a good first step would be a simple refactor to remove the
artificial user/admin split. Some more challenging additions and
reworking could follow that, starting with the intro/architecture
sections.

[1] http://eavesdrop.openstack.org/meetings/kolla/2021/kolla.2021-06-02-15.02.log.html#l-136

From dangerzonen at gmail.com  Thu Jun 3 01:55:05 2021
From: dangerzonen at gmail.com (dangerzone ar)
Date: Thu, 3 Jun 2021 09:55:05 +0800
Subject: [Tacker] Tacker Not able to create VIM
Message-ID:

Hi all,

I just deployed Tacker and tried to add my first VIM, but I'm getting
errors as per the attached file. Please advise how to resolve this
problem. Thanks.

1. Error: Failed to register VIM: {"error": {"message":
"(http://192.168.0.121:5000/v3/tokens): The resource could not be
found.", "code": 404, "title": "Not Found"}}

2. Error as below -> WARNING keystonemiddleware.auth_token [-]
Authorization failed for token: InvalidToken

{"vim": {"vim_project": {"name": "admin", "project_domain_name":
"Default"}, "description": "d", "is_default": false, "auth_cred":
{"username": "admin", "user_domain_name": "Default", "password":
"c81e0c7a842f40c6"}, "auth_url": "http://192.168.0.121:5000/v3", "type":
"openstack", "name": "d"}} process_request
/usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43
2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-]
Authorization failed for token: InvalidToken
2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - -
[04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720

Below is my tacker.conf:

[DEFAULT]
auth_strategy = keystone
policy_file = /etc/tacker/policy.json
debug = True
use_syslog = False
bind_host = 192.168.0.121
bind_port = 9890
service_plugins = nfvo,vnfm
state_path = /var/lib/tacker

[nfvo]
vim_drivers = openstack

[keystone_authtoken]
region_name = RegionOne
auth_type = password
project_domain_name = Default
user_domain_name = Default
username = tacker
password = password
auth_url = http://192.168.0.121:35357
auth_uri = http://192.168.0.121:5000

[agent]
root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf

[database]
connection = mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8
-------------- next part --------------
An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: er1.jpg Type: image/jpeg Size: 61973 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: er2.jpg Type: image/jpeg Size: 214252 bytes Desc: not available URL: From jpretori at redhat.com Thu Jun 3 13:02:57 2021 From: jpretori at redhat.com (Jesse Pretorius) Date: Thu, 3 Jun 2021 14:02:57 +0100 Subject: [rhos-dev] [TripleO] tripleo repos going Extended Maintenance stable/train OK? (not yet IMO) In-Reply-To: References: Message-ID: <2F244667-94A4-4048-A6B1-96D5DC692B39@redhat.com> > On 3 Jun 2021, at 13:57, Marios Andreou wrote: > > Hello all, > > as discussed at [1] the train branch across all tripleo repos will be > moving to extended maintenance at the end of this week. Before we make > the train-em tag I have pushed one last release on train with [2] and > rebased [1] onto that. > > As discussed in [1] train is still an active branch for tripleo and we > can and will continue to merge fixes there. It just means that we will > no longer be making tagged releases for train branches. > > If you have any questions or concerns about any of this please reach out, I think this would be problematic for us. We’re still actively submitting changes to stable/train for tripleo and will likely be for some time. I don’t know what the effect is to us downstream for not being able to tag upstream. I think the RDO folks (who do the packaging) would need to respond to that for us to make a suitable final call. From hjensas at redhat.com Thu Jun 3 18:07:07 2021 From: hjensas at redhat.com (Harald Jensas) Date: Thu, 3 Jun 2021 20:07:07 +0200 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: <731c69a4-bbe8-fd5b-22b4-c8c52686a021@redhat.com> On 6/2/21 1:17 PM, Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > +1 From victoria at vmartinezdelacruz.com Thu Jun 3 18:19:43 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Thu, 3 Jun 2021 20:19:43 +0200 Subject: [Manila ] Upcoming Bug Squash starting June 7th through June 11th 2021 In-Reply-To: References: Message-ID: Hi all, Just dropping you a line to remind you about this event and also to let you know that we doubled down and we will have two calls for this. * Monday June 7th, 2021 at 15:00 UTC * in the aforementioned Jitsi bridge [1] to go over the list of bugs we have for this bug squash [2] And join us again on * Thursday June 10th, 2021 at 15:00 UTC * (instead of the weekly meeting) to do a live review session with some of the core reviewers. The goal of this second session is to show live how a bug review is done: what we look at when doing a bug review, coding best practices, commit messages, release notes, and more. 
We will use same Jitsi bridge we use for the session on Monday [1] We will remind you about this again on our IRC channel (#openstack-manila in OFTC) when we are closer to the event :) Everybody is invited to join us. Cheers, V [1] https://meetpad.opendev.org/ManilaX-ReleaseBugSquash [2] https://ethercalc.openstack.org/i3vwocrkk776 On Wed, May 26, 2021 at 9:14 PM Vida Haririan wrote: > Hi everyone, > > > As discussed, a new Bug Squash event is around the corner! > > > The event will be held from 7th to 11th June, 2021, providing an extended > contribution window. There will be a synchronous call held simultaneously on > IRC, Thursday June 10th, 2021 at 15:00 UTC and we will use this Jitsi > bridge [1]. > > > A list of selected bugs will be shared here [2]. Please feel free to add > any additional bugs you would like to address during the event. > > Thanks for your participation in advance. > > > Vida > > > [1] https://meetpad.opendev.org/ManilaX-ReleaseBugSquash > > [2] https://ethercalc.openstack.org/i3vwocrkk776 > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Jun 3 20:52:20 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 3 Jun 2021 22:52:20 +0200 Subject: [tc] Moving IRC meetings to project channels Message-ID: Hello, In the guidance to PTLs for the freenode to OFTC migration, there was this guideline: > The TC is asking that projects take advantage of this time of change to consider moving project meetings from the #openstack-meeting* channels to their project channel. I was surprised since it was the first time I heard about this suggested change. The project team guide [1] actually still states the following: > The OpenStack infrastructure team maintains a limited number of channels dedicated to meetings. While teams can hold meetings on their own team IRC channels, they are encouraged to use those common meeting channels to give their meeting some external exposure. The limited number of meeting channels encourages teams to spread their meetings around and reduce conflicts. Is there any background regarding this proposed change? Not that I am against it in any way: I have participated in meetings in both kinds of channels and haven't really seen any difference. Thanks, Pierre Riteau (priteau) [1] https://docs.openstack.org/project-team-guide/open-community.html#public-meetings-on-irc From gmann at ghanshyammann.com Thu Jun 3 22:29:36 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 03 Jun 2021 17:29:36 -0500 Subject: [tc] Moving IRC meetings to project channels In-Reply-To: References: Message-ID: <179d3ff053d.c7797314160734.5796180177007608281@ghanshyammann.com> ---- On Thu, 03 Jun 2021 15:52:20 -0500 Pierre Riteau wrote ---- > Hello, > > In the guidance to PTLs for the freenode to OFTC migration, there was > this guideline: > > > The TC is asking that projects take advantage of this time of change to consider moving project meetings from the #openstack-meeting* channels to their project channel. > > I was surprised since it was the first time I heard about this > suggested change. The project team guide [1] actually still states the > following: > > > The OpenStack infrastructure team maintains a limited number of channels dedicated to meetings. While teams can hold meetings on their own team IRC channels, they are encouraged to use those common meeting channels to give their meeting some external exposure. 
The limited number of meeting channels encourages teams to spread their meetings around and reduce conflicts.

Is there any background regarding this proposed change? Not that I am
against it in any way: I have participated in meetings in both kinds
of channels and haven't really seen any difference.

Thanks,
Pierre Riteau (priteau)

[1] https://docs.openstack.org/project-team-guide/open-community.html#public-meetings-on-irc

From gmann at ghanshyammann.com  Thu Jun 3 22:29:36 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 03 Jun 2021 17:29:36 -0500
Subject: [tc] Moving IRC meetings to project channels
In-Reply-To:
References:
Message-ID: <179d3ff053d.c7797314160734.5796180177007608281@ghanshyammann.com>

 ---- On Thu, 03 Jun 2021 15:52:20 -0500 Pierre Riteau wrote ----
 > Hello,
 >
 > In the guidance to PTLs for the freenode to OFTC migration, there was
 > this guideline:
 >
 > > The TC is asking that projects take advantage of this time of change to consider moving project meetings from the #openstack-meeting* channels to their project channel.
 >
 > I was surprised since it was the first time I heard about this
 > suggested change. The project team guide [1] actually still states the
 > following:
 >
 > > The OpenStack infrastructure team maintains a limited number of channels dedicated to meetings. While teams can hold meetings on their own team IRC channels, they are encouraged to use those common meeting channels to give their meeting some external exposure. The limited number of meeting channels encourages teams to spread their meetings around and reduce conflicts.
 >
 > Is there any background regarding this proposed change? Not that I am
 > against it in any way: I have participated in meetings in both kinds
 > of channels and haven't really seen any difference.

The idea behind this is to avoid confusion over which channel hosts
which project meeting. There are multiple meeting channels
(#openstack-meeting-3, #openstack-meeting-4, #openstack-meeting-5,
#openstack-meeting-alt, #openstack-meeting), and sometimes it is
difficult to remember which channel hosts which project's meeting until
you go and check the project doc/wiki page.

Holding the meeting in the project channel itself avoids such confusion.
We have been doing this for QA and the TC for many years and it works
perfectly. But this is the project's choice; the TC is only suggesting
this option.

I will make project-team-guide changes to add this suggestion.

-gmann

 >
 > Thanks,
 > Pierre Riteau (priteau)
 >
 > [1] https://docs.openstack.org/project-team-guide/open-community.html#public-meetings-on-irc
 >
 >

From gmann at ghanshyammann.com  Thu Jun 3 22:39:23 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 03 Jun 2021 17:39:23 -0500
Subject: [all] CRITICAL: Upcoming changes to the OpenStack Community IRC
 this weekend
In-Reply-To: <179c2bf0d45.e29da542226792.4648722316244189913@ghanshyammann.com>
References: <179a9b02f78.112177f7423117.4125651508104406943@ghanshyammann.com>
 <179c2bf0d45.e29da542226792.4648722316244189913@ghanshyammann.com>
Message-ID: <179d407fa6f.f6101b54160799.6570320596784902701@ghanshyammann.com>

 ---- On Mon, 31 May 2021 09:06:11 -0500 Ghanshyam Mann wrote ----
 > Hello Everyone,
 >
 > Updates:
 >
 > As you might have seen in the Fungi email reply on the service-discuss ML, all the bot and logging migration is complete now.
 >
 > * From now onwards, every discussion or meeting needs to happen on OFTC, not on Freenode. As you can see, many project PTLs have started sending email about their next meeting on OFTC; please do so if you have not done it yet.
 >
 > * I have started a new etherpad for tracking all the migration tasks (all action items we collected from Wednesday's TC meeting). Please plan the work needed from the project team side and mark the progress.
 >
 > - https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc

Hello Everyone,

There were two questions in #openstack-tc this morning, which we
discussed in today's TC meeting and agreed on the points below:

1. Backporting OFTC reference changes
* Agreed to backport the changes as much as possible.
* On keeping doc/source/contributor/contributing.rst on stable branches:
** We do not need to maintain this on stable as such, because the master
version of it can be referenced from the docs or the top-level
CONTRIBUTING.rst.
** Fungi will add a global redirect link to the master/latest version in
openstack-manuals; projects do not need to do this explicitly.
** Projects can remove doc/source/contributor/contributing.rst from
stable branches at their convenience.

2. Topic change on Freenode channels
* We decided to do this on June 11th, and until then will continue
redirecting people from the old channels to OFTC.

-gmann

 >
 > -gmann
 >
 >
 > ---- On Wed, 26 May 2021 12:19:26 -0500 Ghanshyam Mann wrote ----
 > > Greetings contributors & community members!
 > >
 > > With recent events, the Technical Committee held an emergency meeting today (Wednesday, May 26th, 2021)
 > > regarding Freenode IRC and what our decision would be [1].
 >  > Earlier in the week, the consensus amongst the TC
 >  > was to gather more information from the individual projects, and make a decision from there [2]. With #rdo,
 >  > #ubuntu, and #wikipedia having been hijacked, the consensus amongst the TC and the community members
 >  > who were able to attend the meeting was to move away from Freenode as soon as possible. The TC agreed
 >  > that this move away from Freenode needs to be a community-wide move to the same, new IRC network for
 >  > all projects to avoid splintering of the community. As has been long-planned in the event of a contingency, we
 >  > will be moving to OFTC.
 >  >
 >  > We recognize this is a contentious topic, and ultimately we seek to ensure community continuity before evolution
 >  > to something beyond IRC, as many have expressed interest in doing via Mailing List discussions. At this point, we
 >  > had to make a decision to solve the immediate problem in the simplest and most expedient way possible, so this is
 >  > that announcement. We welcome continued discussion about future alternatives on the other threads.
 >  >
 >  > With this in mind, we suggest the following steps.
 >  >
 >  > Everyone:
 >  > =======
 >  > 1. Do NOT change any channel topics to represent this change. This is likely to result in the channel being taken
 >  > over by Freenode and will disrupt communications within our community.
 >  > 2. Register your nicknames on OFTC [3][4]
 >  > 3. Be *prepared* to join your channels on OFTC [4]. The OpenStack community channels have already been
 >  > registered on OFTC and await you.
 >  > 4. Continue to use Freenode for OpenStack discussions until the bots have been moved and the official cut-over
 >  > takes place this coming weekend. We anticipate using OFTC starting Monday, May 31st.
 >  >
 >  > Projects/Project Leaders:
 >  > ====================
 >  > 1. Projects should work to get a few volunteers to staff their project channels on Freenode for the near future, to help
 >  > redirect people to OFTC. This should occur via private messages to avoid a ban.
 >  > 2. Continue to hold project meetings on Freenode until the bots are enabled on OFTC.
 >  > 3. Update project wikis/documentation with the new IRC network information. We ask that you consider referring to
 >  > the central contributor guide [5].
 >  > 4. The TC is asking that projects take advantage of this time of change to consider moving project meetings from
 >  > the #openstack-meeting* channels to their project channel.
 >  > 5. Please avoid discussing the move to OFTC in Freenode channels as this may also trigger a takeover of the channel.
 >  >
 >  > We are working on getting our bots over to OFTC, and they will be moved over the weekend. Starting Monday May 31,
 >  > the bots will be on OFTC. Communication regarding this migration will take place on OFTC [4] in #openstack-dev, and
 >  > we're working on updating the contributor guide [5] to reflect this migration.
 >  >
 >  > Sincerely,
 >  >
 >  > The OpenStack TC and community leaders who came together to agree on a path forward.
 >  > [1]: https://etherpad.opendev.org/p/openstack-irc
 >  > [2]: https://etherpad.opendev.org/p/feedback-on-freenode
 >  > [3]: https://www.oftc.net/Services/#register-your-account
 >  > [4]: https://www.oftc.net/
 >  > [5]: https://docs.openstack.org/contributors/common/irc.html
 >  >
 >  >

From fungi at yuggoth.org Thu Jun 3 23:09:46 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 3 Jun 2021 23:09:46 +0000
Subject: [tc] Moving IRC meetings to project channels
In-Reply-To: <179d3ff053d.c7797314160734.5796180177007608281@ghanshyammann.com>
References: <179d3ff053d.c7797314160734.5796180177007608281@ghanshyammann.com>
Message-ID: <20210603230946.bwjmx5pnpz6zd5ig@yuggoth.org>

On 2021-06-03 17:29:36 -0500 (-0500), Ghanshyam Mann wrote:
[...]
> The idea behind this is to avoid confusion over which channel has
> which project meeting. There are multiple meeting channels
> (#openstack-meeting-3, #openstack-meeting-4, #openstack-meeting-5,
> #openstack-meeting-alt, #openstack-meeting) and sometimes it is
> difficult to remember which channel has which project meeting
> until you go and check the project doc or wiki page.
>
> Having the meeting in the project channel itself avoids such
> confusion. We have been doing this for QA and TC for many years
> and it works perfectly.
[...]

The idea behind having meetings in common channels is that it reduces
the number of channels people need to join if they just want to lurk
the team meetings but not necessarily be in the team channels, it
avoids people distracting the meeting with unrelated in-channel banter
or noise from notification bots about things like change uploads to
Gerrit, and it slightly decreases the chances that too many meetings
get scheduled into the same timeslots.

I also participate in some projects which do it that way and some
which have their meetings in-channel. For the most part, meetings for
smaller teams without a lot of overlap with other projects and low
volumes of normal discussion in their channels seem to be happy with
in-channel meetings. Large teams with a bunch of tendrils to and from
other projects and lots of crosstalk in their channel tend to prefer
the option of a separate meeting channel.

Also, there have been no -4 and -5 meeting channels for at least a
year if not more; we're down to just the other three you listed.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From vuk.gojnic at gmail.com Fri Jun 4 06:22:36 2021
From: vuk.gojnic at gmail.com (Vuk Gojnic)
Date: Fri, 4 Jun 2021 08:22:36 +0200
Subject: [ironic] IPA image does not want to boot with UEFI
In-Reply-To:
References:
Message-ID:

I found where my issue was. After using different GRUBX64.efi and
BOOTX64.efi binaries (this time from
https://vault.centos.org/8.3.2011/BaseOS/x86_64/kickstart/EFI/BOOT/
instead of from the Ubuntu Bionic LiveCD) everything worked normally
and the large initrd was successfully loaded.

It seems that the EFI binaries taken from Ubuntu had that issue. The
advice in such a case is: if the problem persists, check with another
bootloader variant/version.

Thanks!
-Vuk

On Mon, May 17, 2021 at 4:14 PM Dmitry Tantsur wrote:
>
> Hi,
>
> I'm not sure. We have never hit this problem with DIB-built images
> before. I know that TripleO uses an even larger image than the one we
> publish on tarballs.o.o.
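For anyone hitting the same symptom, fetching those CentOS-built binaries
can be sketched roughly as follows. This is only a sketch: the target
directory is an assumption (use wherever your deployment serves EFI boot
files from, e.g. ironic's [pxe]/tftp_root), and the exact filename casing
on the vault server should be double-checked:

    # Hypothetical path -- adjust BOOT_DIR to your deployment's layout.
    BOOT_DIR=/tftpboot
    curl -L -o "$BOOT_DIR/bootx64.efi" \
        https://vault.centos.org/8.3.2011/BaseOS/x86_64/kickstart/EFI/BOOT/BOOTX64.EFI
    curl -L -o "$BOOT_DIR/grubx64.efi" \
        https://vault.centos.org/8.3.2011/BaseOS/x86_64/kickstart/EFI/BOOT/grubx64.efi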
From skaplons at redhat.com Fri Jun 4 06:41:48 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 04 Jun 2021 08:41:48 +0200
Subject: [neutron] Meetings channel changes
Message-ID: <4434280.3qEIF5uYtV@p1>

Hi,

As we discussed at our last team meeting, I just proposed a change to move
our meetings from the #openstack-meeting-* channels to the
#openstack-neutron channel [1].
Let's have today's drivers meeting still on the #openstack-meeting channel
on OFTC, but starting next week all our meetings will take place on the
#openstack-neutron channel.

[1] https://review.opendev.org/c/opendev/irc-meetings/+/794711

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
--------
[1] https://review.opendev.org/c/opendev/irc-meetings/+/794711
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL:

From jpodivin at redhat.com Fri Jun 4 06:46:10 2021
From: jpodivin at redhat.com (Jiri Podivin)
Date: Fri, 4 Jun 2021 08:46:10 +0200
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
In-Reply-To: <731c69a4-bbe8-fd5b-22b4-c8c52686a021@redhat.com>
References: <731c69a4-bbe8-fd5b-22b4-c8c52686a021@redhat.com>
Message-ID:

+1

On Thu, Jun 3, 2021 at 8:14 PM Harald Jensas wrote:

> On 6/2/21 1:17 PM, Marios Andreou wrote:
> > Hello all
> >
> > Having discussed this with some members of the tripleo ci team
> > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc:
> > ysandeep) for core on the tripleo-ci repos (tripleo-ci,
> > tripleo-quickstart and tripleo-quickstart-extras).
> >
> > Sandeep joined the team about 1.5 years ago and has from the start
> > demonstrated his eagerness to learn and an excellent work ethic,
> > having made many useful code submissions [1] and code reviews [2] to
> > the CI repos and beyond. Thanks Sandeep and keep up the good work!
> >
> > Please reply to this mail with a +1 or -1 for objections in the usual
> > manner. If there are no objections we can declare it official in a few
> > days
> >
>
> +1
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gchamoul at redhat.com Fri Jun 4 08:18:37 2021
From: gchamoul at redhat.com (Gaël Chamoulaud)
Date: Fri, 4 Jun 2021 10:18:37 +0200
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
In-Reply-To:
References:
Message-ID: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac>

Of course, a big +1!

On 02/Jun/2021 14:17, Marios Andreou wrote:
> Hello all
>
> Having discussed this with some members of the tripleo ci team
> (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc:
> ysandeep) for core on the tripleo-ci repos (tripleo-ci,
> tripleo-quickstart and tripleo-quickstart-extras).
>
> Sandeep joined the team about 1.5 years ago and has from the start
> demonstrated his eagerness to learn and an excellent work ethic,
> having made many useful code submissions [1] and code reviews [2] to
> the CI repos and beyond. Thanks Sandeep and keep up the good work!
>
> Please reply to this mail with a +1 or -1 for objections in the usual
> manner. If there are no objections we can declare it official in a few
> days
>
> regards, marios
>
> [1] https://review.opendev.org/q/owner:sandeepyadav93
> [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180
>

Best Regards,
Gaël

--
Gaël Chamoulaud - (He/Him/His)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:
From malikobaidadil at gmail.com Fri Jun 4 09:20:31 2021
From: malikobaidadil at gmail.com (Malik Obaid)
Date: Fri, 4 Jun 2021 14:20:31 +0500
Subject: [wallaby][neutron][ovn] MTU in Neutron for Production
Message-ID:

Hi,

I am using the OpenStack Wallaby release on Ubuntu 20.04. I am configuring
openstack neutron for production. While setting the MTU I am a bit
confused between these 2 use cases.

*Case 1*
External (public) network mtu 1500
self service (tenant) network geneve mtu 9000
VLAN external network mtu 1500

*Case 2*
External (public) network mtu 9000
self service (tenant) network geneve mtu 9000
VLAN external network mtu 9000

I just want to know which case would be better for production.

I would really appreciate any input in this regard.

Thank you.

Regards,
Malik Obaid
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
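For reference, a minimal sketch of how the Case 2 choice is usually
expressed in neutron configuration, assuming the physical fabric genuinely
carries 9000-byte frames end-to-end; with Geneve tenant networks, neutron
deducts the encapsulation overhead from the physical MTU automatically:

    # neutron.conf (server node)
    [DEFAULT]
    global_physnet_mtu = 9000

    # ml2_conf.ini
    [ml2]
    path_mtu = 9000

Note that Case 1 cannot actually deliver a 9000-byte Geneve tenant
network: the encapsulated packets still have to fit through the 1500-byte
underlay, so the effective tenant MTU ends up below 1500, not above it.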
From massimo.canonico at uniupo.it Fri Jun 4 10:50:15 2021
From: massimo.canonico at uniupo.it (Massimo Canonico)
Date: Fri, 4 Jun 2021 12:50:15 +0200
Subject: libcloud
Message-ID: <09f1daf4-83c3-0d5c-2a38-ad2379b35d37@uniupo.it>

Hi,

I'm new and I'm not sure if this is the right place to post questions
related to OpenStack and Libcloud.

I've used the OpenStack cloud provided by the Chameleon project for years,
and recently they changed the authentication procedure (they use a
federated login). Since then, I've been having problems using my script
with libcloud.

This script was working with the legacy login:

provider = get_driver(Provider.OPENSTACK)
conn = provider(auth_username, auth_password,
                ex_force_auth_url=auth_url,
                ex_force_auth_version='3.x_password',
                ex_tenant_name=project_name,
                ex_force_service_region=region_name,
                api_version='2.1')

Now it is not working. If I take a look at the openrc file I can note this:

export OS_AUTH_TYPE="v3oidcpassword"

Maybe this is the problem.

Any idea how I can fix my script?

Thanks,

Massimo

From pierre at stackhpc.com Fri Jun 4 11:02:17 2021
From: pierre at stackhpc.com (Pierre Riteau)
Date: Fri, 4 Jun 2021 13:02:17 +0200
Subject: [blazar] IRC meeting moving to #openstack-blazar
Message-ID:

Hello,

Following the latest recommendation from the TC, the bi-weekly IRC
meeting of the Blazar project is moving to the #openstack-blazar channel
on OFTC. This will be effective from the next meeting on June 17.

Pierre Riteau (priteau)

From mnaser at vexxhost.com Fri Jun 4 11:31:53 2021
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Fri, 4 Jun 2021 07:31:53 -0400
Subject: libcloud
In-Reply-To: <09f1daf4-83c3-0d5c-2a38-ad2379b35d37@uniupo.it>
References: <09f1daf4-83c3-0d5c-2a38-ad2379b35d37@uniupo.it>
Message-ID:

I think you're having problems because libcloud might not support OIDC
natively.

On Fri, Jun 4, 2021 at 6:54 AM Massimo Canonico wrote:

> Hi,
>
> I'm new and I'm not sure if this is the right place to post questions
> related to OpenStack and Libcloud.
>
> I've used the OpenStack cloud provided by the Chameleon project for
> years, and recently they changed the authentication procedure (they use
> a federated login). Since then, I've been having problems using my
> script with libcloud.
>
> This script was working with the legacy login:
>
> provider = get_driver(Provider.OPENSTACK)
> conn = provider(auth_username, auth_password,
>                 ex_force_auth_url=auth_url,
>                 ex_force_auth_version='3.x_password',
>                 ex_tenant_name=project_name,
>                 ex_force_service_region=region_name,
>                 api_version='2.1')
>
> Now it is not working. If I take a look at the openrc file I can note
> this:
>
> export OS_AUTH_TYPE="v3oidcpassword"
>
> Maybe this is the problem.
>
> Any idea how I can fix my script?
>
> Thanks,
>
> Massimo
>
--
Mohammed Naser
VEXXHOST, Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
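Since libcloud may not handle OIDC, one hedged workaround is to
authenticate with keystoneauth1's v3oidcpassword plugin and drive the
cloud through openstacksdk instead. This is an untested sketch: the
identity provider name, protocol, client id/secret and discovery endpoint
below are placeholders that should be taken from the OS_* variables in the
new openrc file:

    from keystoneauth1 import session
    from keystoneauth1.identity.v3 import OidcPassword
    import openstack

    auth = OidcPassword(
        auth_url=auth_url,                # OS_AUTH_URL
        identity_provider='example_idp',  # OS_IDENTITY_PROVIDER (placeholder)
        protocol='openid',                # OS_PROTOCOL (placeholder)
        client_id='example-client',       # OS_CLIENT_ID (placeholder)
        client_secret='example-secret',   # OS_CLIENT_SECRET (placeholder)
        # OS_DISCOVERY_ENDPOINT (placeholder URL)
        discovery_endpoint='https://idp.example.org/.well-known/openid-configuration',
        username=auth_username,
        password=auth_password,
        project_name=project_name,
        project_domain_name='Default',    # assumption, check your openrc
    )
    sess = session.Session(auth=auth)
    conn = openstack.connection.Connection(session=sess,
                                           region_name=region_name)

    # quick smoke test: list the servers visible to the project
    for server in conn.compute.servers():
        print(server.name)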
From bkslash at poczta.onet.pl Fri Jun 4 12:52:09 2021
From: bkslash at poczta.onet.pl (at)
Date: Fri, 4 Jun 2021 14:52:09 +0200
Subject: [glance] How to limit access to particular store
Message-ID: <49F175A2-A993-424B-97BF-F4EFB8129321@poczta.onet.pl>

Hi,

I have Glance with a multi-store config and I want one store (not the
default) to be read-only for everyone except the cloud admin. How can I
do it? Is there any way to limit store name visibility (store names are
visible, e.g., in the properties section of "openstack image show
IMAGE_NAME" output)?

Best regards
Adam Tomas

From rosmaita.fossdev at gmail.com Fri Jun 4 13:34:23 2021
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Fri, 4 Jun 2021 09:34:23 -0400
Subject: [cinder] xena R-18 mid-cycle summary available
Message-ID:

In case you missed the Xena cinder R-18 virtual mid-cycle session
earlier this week, I've posted a summary:
  https://wiki.openstack.org/wiki/CinderWallabyMidCycleSummary

It includes a link to the recording if you want more context for any
topic that interests you.

We're planning to have another mid-cycle session the week of R-9,
namely, on Wednesday 4 August 2021, 1400-1600 UTC. As always, you can
add topics to the planning etherpad:
  https://etherpad.opendev.org/p/cinder-xena-mid-cycles

cheers,
brian

From rosmaita.fossdev at gmail.com Fri Jun 4 13:39:17 2021
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Fri, 4 Jun 2021 09:39:17 -0400
Subject: [cinder] xena R-18 mid-cycle summary available
In-Reply-To:
References:
Message-ID: <433902db-a799-89ff-ab60-b495908ca175@gmail.com>

On 6/4/21 9:34 AM, Brian Rosmaita wrote:
> In case you missed the Xena cinder R-18 virtual mid-cycle session
> earlier this week, I've posted a summary:
>   https://wiki.openstack.org/wiki/CinderWallabyMidCycleSummary

The attentive reader will note that I mistakenly linked to the wallaby
summary, which though well worth reading, is off topic. The Xena
summary is here:
  https://wiki.openstack.org/wiki/CinderXenaMidCycleSummary

> It includes a link to the recording if you want more context for any
> topic that interests you.
>
> We're planning to have another mid-cycle session the week of R-9,
> namely, on Wednesday 4 August 2021, 1400-1600 UTC. As always, you can
> add topics to the planning etherpad:
>   https://etherpad.opendev.org/p/cinder-xena-mid-cycles
>
> cheers,
> brian

From rosmaita.fossdev at gmail.com Fri Jun 4 13:52:06 2021
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Fri, 4 Jun 2021 09:52:06 -0400
Subject: [security-sig][cinder] propose vulnerability:managed tag for os-brick
Message-ID: <746fb327-dcd8-d479-e0da-8facf400e780@gmail.com>

I've posted a patch to add the 'vulnerability:managed' tag to the
os-brick library:
  https://review.opendev.org/c/openstack/governance/+/794680

I just want to give a heads-up to the OpenStack Vulnerability Management
Team, since this will impact the VMT, though hopefully not very much.
The Cinder team was under the impression that the VMT was already
managing private security bugs for os-brick. The issue may not have come
up before because usually there's a driver + connector involved and the
bug gets filed under cinder (which is already tagged
vulnerability:managed).

In any case, the cinder team discussed this at our recent midcycle
meeting and decided that we appreciate the extra eyes and long-term
perspective the VMT brings to the table, and we'd like to formalize a
relation between the VMT and the os-brick library.

cheers,
brian

From bkslash at poczta.onet.pl Fri Jun 4 13:53:42 2021
From: bkslash at poczta.onet.pl (at)
Date: Fri, 4 Jun 2021 15:53:42 +0200
Subject: [kolla-ansible] kolla-ansible destroy
Message-ID: <476495C0-A42E-4B74-AF46-13FF814C974B@poczta.onet.pl>

Hi,

Is kolla-ansible destroy "--tags"-aware? What is the best way to remove
all unwanted containers, configuration files, logs, etc. when you want to
remove some service or move it to another node?

Regards
Adam Tomas
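For illustration, a hedged sketch of the usual pattern: kolla-ansible's
destroy action is host-scoped rather than service-scoped, so scoping is
normally done with --limit instead of --tags, and removing a single
service from a node tends to be a manual cleanup. All names below are
examples:

    # Tears down ALL kolla containers/volumes on the targeted host,
    # hence the explicit safety flag.
    kolla-ansible -i multinode destroy --yes-i-really-really-mean-it --limit compute01

    # Removing one service (e.g. glance-api) from a node is typically
    # manual; container/volume/config names vary by service:
    docker rm -f glance_api
    docker volume rm glance
    rm -rf /etc/kolla/glance-api /var/log/kolla/glance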
From rosmaita.fossdev at gmail.com Fri Jun 4 14:02:32 2021
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Fri, 4 Jun 2021 10:02:32 -0400
Subject: [cinder] reminder: xena spec freeze 25 June
Message-ID:

This is a reminder that all Cinder Specs for features to be implemented
in Xena must be approved by Friday 25 June 2021 (23:59 UTC).

We discussed several specs at the R-18 virtual midcycle meeting. Please
take a look at the summary and take any appropriate action for your
spec proposal:
  https://wiki.openstack.org/wiki/CinderXenaMidCycleSummary#Xena_Specs_Review

Anyone with a spec proposal that wasn't discussed, and who needs more
feedback than is currently on the Gerrit review, should reach out to the
Cinder team for help by putting a topic on the weekly meeting agenda,
asking in the OFTC #openstack-cinder channel, or via the
openstack-discuss mailing list.

cheers,
brian

From fungi at yuggoth.org Fri Jun 4 14:14:35 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 4 Jun 2021 14:14:35 +0000
Subject: [security-sig][cinder] propose vulnerability:managed tag for os-brick
In-Reply-To: <746fb327-dcd8-d479-e0da-8facf400e780@gmail.com>
References: <746fb327-dcd8-d479-e0da-8facf400e780@gmail.com>
Message-ID: <20210604141435.da5x2lrmubrfbpqv@yuggoth.org>

On 2021-06-04 09:52:06 -0400 (-0400), Brian Rosmaita wrote:
[...]
> I just want to give a heads-up to the OpenStack Vulnerability Management
> Team, since this will impact the VMT, though hopefully not very much.
[...]

Thanks! We loosened up the requirements well over a year ago with
https://review.opendev.org/678426 in hopes more projects would check
whether their deliverables met the requirements and formally enlist our
assistance, but so far there's been little uptake there.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From Yong.Huang at Dell.com Fri Jun 4 00:45:55 2021
From: Yong.Huang at Dell.com (Huang, Yong)
Date: Fri, 4 Jun 2021 00:45:55 +0000
Subject: [victoria][cinder ?] Dell Unity + Iscsi
In-Reply-To:
References:
Message-ID:

Hi Albert,

Did you configure multipath? Could you attach the output of
`multipath -ll` and the content of `/etc/multipath.conf`?

Thanks
Yong Huang

-----Original Message-----
From: Albert Shih
Sent: Wednesday, June 2, 2021 2:45 AM
To: openstack-discuss at lists.openstack.org
Subject: [victoria][cinder ?] Dell Unity + Iscsi

[EXTERNAL EMAIL]

Hi everyone,

I have a small OpenStack configuration with 4 compute nodes and a Dell
Unity 480F for the storage. I'm using cinder with iscsi.

Everything works when I create an instance. But some instances become
unresponsive after a while. When I check on the hypervisor I can see

[888240.310461] sd 14:0:0:2: [sdb] tag#120 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[888240.310493] sd 14:0:0:2: [sdb] tag#120 Sense Key : Illegal Request [current]
[888240.310502] sd 14:0:0:2: [sdb] tag#120 Add. Sense: Logical unit not supported
[888240.310510] sd 14:0:0:2: [sdb] tag#120 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
[888240.310519] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
[888240.311045] sd 14:0:0:2: [sdb] tag#121 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[888240.311050] sd 14:0:0:2: [sdb] tag#121 Sense Key : Illegal Request [current]
[888240.311065] sd 14:0:0:2: [sdb] tag#121 Add. Sense: Logical unit not supported
[888240.311070] sd 14:0:0:2: [sdb] tag#121 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
[888240.311074] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
[888240.342482] sd 14:0:0:2: [sdb] tag#70 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[888240.342490] sd 14:0:0:2: [sdb] tag#70 Sense Key : Illegal Request [current]
[888240.342496] sd 14:0:0:2: [sdb] tag#70 Add. Sense: Logical unit not supported

I check on the hypervisor, no error at all on the ethernet interface. I
check on the switch, no error at all on the interface on the switch.

Not sure, but it seems the problem appears more often when the instance
has been doing nothing for some time.

All firmware and software on the Unity are up to date. The 4 compute
nodes are exactly the same; they run the same version of nova-compute &
OS & firmware on the hardware.

Any clue? Or a place to search for the problem?

Regards
--
Albert SHIH
Observatoire de Paris
xmpp: jas at obspm.fr
Heure local/Local time:
Tue Jun 1 08:27:42 PM CEST 2021
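For context, a minimal /etc/multipath.conf sketch of the kind often used
with Unity-family arrays. Treat every value here as an assumption to be
verified against Dell's host connectivity guide for your array; the
compute nodes also need multipath enabled on the nova side
(volume_use_multipath in the [libvirt] section of nova.conf) before
cinder attachments will use it:

    defaults {
        user_friendly_names no
        find_multipaths yes
    }
    devices {
        device {
            # Unity/VNX-family arrays generally report vendor "DGC"
            vendor "DGC"
            product ".*"
            path_grouping_policy group_by_prio
            path_checker emc_clariion
            prio alua
            failback immediate
            no_path_retry 30
        }
    }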
From wangtaihao at inspur.com Fri Jun 4 03:44:07 2021
From: wangtaihao at inspur.com (Tahoe Wang (王太浩))
Date: Fri, 4 Jun 2021 03:44:07 +0000
Subject: [vitrage] The vitrage api "vitrage alarm list" gets wrong response
Message-ID: <9b8b00abf9dc450bab65cd14f34ab950@inspur.com>

Hello,

I have successfully installed vitrage, configured the nova.host and
Prometheus datasources, and also configured the mapping file from ALARM
to RESOURCE. I can see the alarm being received in the request log of
the vitrage API. However, when I use the "vitrage alarm list" command
through the CLI, the returned list is empty. Why?

Looking forward to your reply. Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3774 bytes
Desc: not available
URL:

From yasufum.o at gmail.com Fri Jun 4 13:39:34 2021
From: yasufum.o at gmail.com (yasufum)
Date: Fri, 4 Jun 2021 22:39:34 +0900
Subject: [Tacker] Tacker Not able to create VIM
In-Reply-To:
References:
Message-ID:

Hi,

It might be a failure not of tacker but of authentication, because I ran
the VIM registration as you tried and no failure happened, although my
environment is just a bit different from yours. If you cannot register
from horizon, could you run it from the CLI again, referring to [1]?

[1] https://docs.openstack.org/tacker/latest/install/getting_started.html

Thanks,
Yasufumi

On 2021/06/03 10:55, dangerzone ar wrote:
> Hi all,
>
> I just deployed Tacker and tried to add my 1st VIM but I'm getting
> errors as per the attached file. Please advise how to resolve this
> problem. Thanks
>
> 1. *Error:* Failed to register VIM: {"error": {"message":
> "(http://192.168.0.121:5000/v3/tokens): The resource could not be
> found.", "code": 404, "title": "Not Found"}}
>
> 2. *Error as below* -> WARNING keystonemiddleware.auth_token [-]
> Authorization failed for token: InvalidToken
>
> {"vim": {"vim_project": {"name": "admin", "project_domain_name":
> "Default"}, "description": "d", "is_default": false, "auth_cred":
> {"username": "admin", "user_domain_name": "Default", "password":
> "c81e0c7a842f40c6"}, "auth_url": "http://192.168.0.121:5000/v3",
> "type": "openstack", "name": "d"}}
> process_request
> /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43
>
> 2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-]
> Authorization failed for token: InvalidToken
>
> 2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - -
> [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720
>
> Below is my tacker.conf
>
> [DEFAULT]
> auth_strategy = keystone
> policy_file = /etc/tacker/policy.json
> debug = True
> use_syslog = False
> bind_host = 192.168.0.121
> bind_port = 9890
> service_plugins = nfvo,vnfm
> state_path = /var/lib/tacker
>
> [nfvo]
> vim_drivers = openstack
>
> [keystone_authtoken]
> region_name = RegionOne
> auth_type = password
> project_domain_name = Default
> user_domain_name = Default
> username = tacker
> password = password
> auth_url = http://192.168.0.121:35357
> auth_uri = http://192.168.0.121:5000
>
> [agent]
> root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf
>
> [database]
> connection =
> mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot from 2021-06-04 22-16-47.png
Type: image/png
Size: 51040 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot from 2021-06-04 22-17-42.png Type: image/png Size: 47649 bytes Desc: not available URL: -------------- next part -------------- [DEFAULT] auth_strategy = keystone debug = True logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s %(instance)s logging_debug_format_suffix = from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [%(request_id)s %(project_name)s %(user_name)s%(color)s] %(instance)s%(color)s%(message)s use_syslog = False state_path = /opt/stack/data/tacker transport_url = rabbit://stackrabbit:devstack at 192.168.33.11:5672/ # # From oslo.log # # If set to true, the logging level will be set to DEBUG instead of the default # INFO level. (boolean value) # Note: This option can be changed without restarting. #debug = false # The name of a logging configuration file. This file is appended to any # existing logging configuration files. For details about logging configuration # files, see the Python logging module documentation. Note that when logging # configuration files are used then all logging configuration is set in the # configuration file and other logging configuration options are ignored (for # example, log-date-format). (string value) # Note: This option can be changed without restarting. # Deprecated group/name - [DEFAULT]/log_config #log_config_append = # Defines the format string for %%(asctime)s in log records. Default: # %(default)s . This option is ignored if log_config_append is set. (string # value) #log_date_format = %Y-%m-%d %H:%M:%S # (Optional) Name of log file to send logging output to. If no default is set, # logging will go to stderr as defined by use_stderr. This option is ignored if # log_config_append is set. (string value) # Deprecated group/name - [DEFAULT]/logfile #log_file = # (Optional) The base directory used for relative log_file paths. This option # is ignored if log_config_append is set. (string value) # Deprecated group/name - [DEFAULT]/logdir #log_dir = # Uses logging handler designed to watch file system. When log file is moved or # removed this handler will open a new log file with specified path # instantaneously. It makes sense only if log_file option is specified and # Linux platform is used. This option is ignored if log_config_append is set. # (boolean value) #watch_log_file = false # Use syslog for logging. Existing syslog format is DEPRECATED and will be # changed later to honor RFC5424. This option is ignored if log_config_append # is set. (boolean value) #use_syslog = false # Enable journald for logging. If running in a systemd environment you may wish # to enable journal support. Doing so will use the journal native protocol # which includes structured metadata in addition to log messages.This option is # ignored if log_config_append is set. (boolean value) #use_journal = false # Syslog facility to receive log lines. This option is ignored if # log_config_append is set. (string value) #syslog_log_facility = LOG_USER # Use JSON formatting for logging. This option is ignored if log_config_append # is set. (boolean value) #use_json = false # Log output to standard error. This option is ignored if log_config_append is # set. (boolean value) #use_stderr = false # Log output to Windows Event Log. 
(boolean value) #use_eventlog = false # The amount of time before the log files are rotated. This option is ignored # unless log_rotation_type is set to "interval". (integer value) #log_rotate_interval = 1 # Rotation interval type. The time of the last file change (or the time when # the service was started) is used when scheduling the next rotation. (string # value) # Possible values: # Seconds - # Minutes - # Hours - # Days - # Weekday - # Midnight - #log_rotate_interval_type = days # Maximum number of rotated log files. (integer value) #max_logfile_count = 30 # Log file maximum size in MB. This option is ignored if "log_rotation_type" is # not set to "size". (integer value) #max_logfile_size_mb = 200 # Log rotation type. (string value) # Possible values: # interval - Rotate logs at predefined time intervals. # size - Rotate logs once they reach a predefined size. # none - Do not rotate log files. #log_rotation_type = none # Format string to use for log messages with context. Used by # oslo_log.formatters.ContextFormatter (string value) #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s # Format string to use for log messages when context is undefined. Used by # oslo_log.formatters.ContextFormatter (string value) #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s # Additional data to append to log message when logging level for the message # is DEBUG. Used by oslo_log.formatters.ContextFormatter (string value) #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d # Prefix each line of exception output with this format. Used by # oslo_log.formatters.ContextFormatter (string value) #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s # Defines the format string for %(user_identity)s that is used in # logging_context_format_string. Used by oslo_log.formatters.ContextFormatter # (string value) #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s # List of package logging levels in logger=LEVEL pairs. This option is ignored # if log_config_append is set. (list value) #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,oslo_policy=INFO,dogpile.core.dogpile=INFO # Enables or disables publication of error events. (boolean value) #publish_errors = false # The format for an instance that is passed with the log message. (string # value) #instance_format = "[instance: %(uuid)s] " # The format for an instance UUID that is passed with the log message. (string # value) #instance_uuid_format = "[instance: %(uuid)s] " # Interval, number of seconds, of log rate limiting. (integer value) #rate_limit_interval = 0 # Maximum number of logged messages per rate_limit_interval. (integer value) #rate_limit_burst = 0 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG # or empty string. Logs with level greater or equal to rate_limit_except_level # are not filtered. An empty string means that all levels are filtered. 
(string # value) #rate_limit_except_level = CRITICAL # Enables or disables fatal status of deprecations. (boolean value) #fatal_deprecations = false # # From oslo.messaging # # Size of RPC connection pool. (integer value) # Minimum value: 1 #rpc_conn_pool_size = 30 # The pool size limit for connections expiration policy (integer value) #conn_pool_min_size = 2 # The time-to-live in sec of idle connections in the pool (integer value) #conn_pool_ttl = 1200 # Size of executor thread pool when executor is threading or eventlet. (integer # value) # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size #executor_thread_pool_size = 64 # Seconds to wait for a response from a call. (integer value) #rpc_response_timeout = 60 # The network address and optional user credentials for connecting to the # messaging backend, in URL format. The expected format is: # # driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query # # Example: rabbit://rabbitmq:password at 127.0.0.1:5672// # # For full details on the fields in the URL see the documentation of # oslo_messaging.TransportURL at # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html # (string value) #transport_url = rabbit:// # The default exchange under which topics are scoped. May be overridden by an # exchange name specified in the transport_url option. (string value) #control_exchange = tacker # Add an endpoint to answer to ping calls. Endpoint is named # oslo_rpc_server_ping (boolean value) #rpc_ping_enabled = false # # From oslo.service.service # # Enable eventlet backdoor. Acceptable values are 0, , and # :, where 0 results in listening on a random tcp port number; # results in listening on the specified port number (and not enabling # backdoor if that port is in use); and : results in listening on # the smallest unused port number within the specified range of port numbers. # The chosen port is displayed in the service's log file. (string value) #backdoor_port = # Enable eventlet backdoor, using the provided path as a unix socket that can # receive connections. This option is mutually exclusive with 'backdoor_port' # in that only one should be provided. If both are provided then the existence # of this option overrides the usage of that option. Inside the path {pid} will # be replaced with the PID of the current process. (string value) #backdoor_socket = # Enables or disables logging values of all registered options when starting a # service (at DEBUG level). (boolean value) #log_options = true # Specify a timeout after which a gracefully shutdown server will exit. Zero # value means endless wait. 
(integer value) #graceful_shutdown_timeout = 60 # # From tacker.common.config # # The host IP to bind to (host address value) #bind_host = 0.0.0.0 # The port to bind to (integer value) #bind_port = 9890 # The API paste config file to use (string value) #api_paste_config = api-paste.ini # The path for API extensions (string value) #api_extensions_path = # The service plugins Tacker will use (list value) #service_plugins = nfvo,vnfm # The type of authentication to use (string value) #auth_strategy = keystone # Allow the usage of the bulk API (boolean value) #allow_bulk = true # Allow the usage of the pagination (boolean value) #allow_pagination = false # Allow the usage of the sorting (boolean value) #allow_sorting = false # The maximum number of items returned in a single response, value was # 'infinite' or negative integer means no limit (string value) #pagination_max_limit = -1 # The hostname Tacker is running on (host address value) #host = controller # Where to store Tacker state files. This directory must be writable by the # agent. (string value) #state_path = /var/lib/tacker # # From tacker.conf # # Seconds between running periodic tasks to cleanup residues of deleted vnf # packages (integer value) #vnf_package_delete_interval = 1800 # # From tacker.service # # Seconds between running components report states (integer value) #report_interval = 10 # Seconds between running periodic tasks (integer value) #periodic_interval = 40 # Number of separate worker processes for service (integer value) #api_workers = 0 # Range of seconds to randomly delay when starting the periodic task scheduler # to reduce stampeding. (Disable by setting to 0) (integer value) #periodic_fuzzy_delay = 5 # # From tacker.wsgi # # Number of backlog requests to configure the socket with (integer value) #backlog = 4096 # Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not # supported on OS X. (integer value) #tcp_keepidle = 600 # Number of seconds to keep retrying to listen (integer value) #retry_until_window = 30 # Max header line to accommodate large tokens (integer value) #max_header_line = 16384 # Enable SSL on the API server (boolean value) #use_ssl = false # CA certificate file to use to verify connecting clients (string value) #ssl_ca_file = # Certificate file to use when starting the server securely (string value) #ssl_cert_file = # Private key file to use when starting the server securely (string value) #ssl_key_file = [alarm_auth] url = http://192.168.33.11:5000/v3 project_name = admin password = devstack username = admin # # From tacker.alarm_receiver # # User name for alarm monitoring (string value) #username = admin # Password for alarm monitoring (string value) #password = devstack # Project name for alarm monitoring (string value) #project_name = admin # User domain name for alarm monitoring (string value) #user_domain_name = default # Project domain name for alarm monitoring (string value) #project_domain_name = default [ceilometer] # # From tacker.vnfm.monitor_drivers.ceilometer.ceilometer # # Address which drivers use to trigger (host address value) #host = controller # port number which drivers use to trigger (port value) # Minimum value: 0 # Maximum value: 65535 #port = 9890 [coordination] # # From tacker.conf # # The backend URL to use for distributed coordination. (string value) #backend_url = file://$state_path [cors] # # From oslo.middleware # # Indicate whether this resource may be shared with the domain received in the # requests "origin" header. 
Format: "://[:]", no trailing # slash. Example: https://horizon.example.com (list value) #allowed_origin = # Indicate that the actual request can include user credentials (boolean value) #allow_credentials = true # Indicate which headers are safe to expose to the API. Defaults to HTTP Simple # Headers. (list value) #expose_headers = # Maximum cache age of CORS preflight requests. (integer value) #max_age = 3600 # Indicate which methods can be used during the actual request. (list value) #allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH # Indicate which header field names may be used during the actual request. # (list value) #allow_headers = [database] connection = mysql+pymysql://root:devstack at 127.0.0.1/tacker?charset=utf8 # # From oslo.db # # If True, SQLite uses synchronous mode. (boolean value) #sqlite_synchronous = true # The back end to use for the database. (string value) # Deprecated group/name - [DEFAULT]/db_backend #backend = sqlalchemy # The SQLAlchemy connection string to use to connect to the database. (string # value) # Deprecated group/name - [DEFAULT]/sql_connection # Deprecated group/name - [DATABASE]/sql_connection # Deprecated group/name - [sql]/connection #connection = # The SQLAlchemy connection string to use to connect to the slave database. # (string value) #slave_connection = # The SQL mode to be used for MySQL sessions. This option, including the # default, overrides any server-set SQL mode. To use whatever SQL mode is set # by the server configuration, set this to no value. Example: mysql_sql_mode= # (string value) #mysql_sql_mode = TRADITIONAL # If True, transparently enables support for handling MySQL Cluster (NDB). # (boolean value) #mysql_enable_ndb = false # Connections which have been present in the connection pool longer than this # number of seconds will be replaced with a new one the next time they are # checked out from the pool. (integer value) # Deprecated group/name - [DATABASE]/idle_timeout # Deprecated group/name - [database]/idle_timeout # Deprecated group/name - [DEFAULT]/sql_idle_timeout # Deprecated group/name - [DATABASE]/sql_idle_timeout # Deprecated group/name - [sql]/idle_timeout #connection_recycle_time = 3600 # Maximum number of SQL connections to keep open in a pool. Setting a value of # 0 indicates no limit. (integer value) #max_pool_size = 5 # Maximum number of database connection retries during startup. Set to -1 to # specify an infinite retry count. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_retries # Deprecated group/name - [DATABASE]/sql_max_retries #max_retries = 10 # Interval between retries of opening a SQL connection. (integer value) # Deprecated group/name - [DEFAULT]/sql_retry_interval # Deprecated group/name - [DATABASE]/reconnect_interval #retry_interval = 10 # If set, use this value for max_overflow with SQLAlchemy. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_overflow # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow #max_overflow = 50 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer # value) # Minimum value: 0 # Maximum value: 100 # Deprecated group/name - [DEFAULT]/sql_connection_debug #connection_debug = 0 # Add Python stack traces to SQL as comment strings. (boolean value) # Deprecated group/name - [DEFAULT]/sql_connection_trace #connection_trace = false # If set, use this value for pool_timeout with SQLAlchemy. 
(integer value) # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout #pool_timeout = # Enable the experimental use of database reconnect on connection lost. # (boolean value) #use_db_reconnect = false # Seconds between retries of a database transaction. (integer value) #db_retry_interval = 1 # If True, increases the interval between retries of a database operation up to # db_max_retry_interval. (boolean value) #db_inc_retry_interval = true # If db_inc_retry_interval is set, the maximum seconds between retries of a # database operation. (integer value) #db_max_retry_interval = 10 # Maximum retries in case of connection error or deadlock error before error is # raised. Set to -1 to specify an infinite retry count. (integer value) #db_max_retries = 20 # Optional URL parameters to append onto the connection URL at connect time; # specify as param1=value1¶m2=value2&... (string value) #connection_parameters = [glance_store] default_backend = fast filesystem_store_datadir = /opt/stack/data/tacker/csar_files # # From glance.store # # DEPRECATED: # List of enabled Glance stores. # # Register the storage backends to use for storing disk images # as a comma separated list. The default stores enabled for # storing disk images with Glance are ``file`` and ``http``. # # Possible values: # * A comma separated list that could include: # * file # * http # * swift # * rbd # * cinder # * vmware # * s3 # # Related Options: # * default_store # # (list value) # This option is deprecated for removal since Rocky. # Its value may be silently ignored in the future. # Reason: # This option is deprecated against new config option # ``enabled_backends`` which helps to configure multiple backend stores # of different schemes. # # This option is scheduled for removal in the U development # cycle. #stores = file,http # DEPRECATED: # The default scheme to use for storing images. # # Provide a string value representing the default scheme to use for # storing images. If not set, Glance uses ``file`` as the default # scheme to store images with the ``file`` store. # # NOTE: The value given for this configuration option must be a valid # scheme for a store registered with the ``stores`` configuration # option. # # Possible values: # * file # * filesystem # * http # * https # * swift # * swift+http # * swift+https # * swift+config # * rbd # * cinder # * vsphere # * s3 # # Related Options: # * stores # # (string value) # Possible values: # file - # filesystem - # http - # https - # swift - # swift+http - # swift+https - # swift+config - # rbd - # cinder - # vsphere - # s3 - # This option is deprecated for removal since Rocky. # Its value may be silently ignored in the future. # Reason: # This option is deprecated against new config option # ``default_backend`` which acts similar to ``default_store`` config # option. # # This option is scheduled for removal in the U development # cycle. #default_store = file # # Information to match when looking for cinder in the service catalog. # # When the ``cinder_endpoint_template`` is not set and any of # ``cinder_store_auth_address``, ``cinder_store_user_name``, # ``cinder_store_project_name``, ``cinder_store_password`` is not set, # cinder store uses this information to lookup cinder endpoint from the service # catalog in the current context. ``cinder_os_region_name``, if set, is taken # into consideration to fetch the appropriate endpoint. # # The service catalog can be listed by the ``openstack catalog list`` command. 
# # Possible values: # * A string of of the following form: # ``::`` # At least ``service_type`` and ``interface`` should be specified. # ``service_name`` can be omitted. # # Related options: # * cinder_os_region_name # * cinder_endpoint_template # * cinder_store_auth_address # * cinder_store_user_name # * cinder_store_project_name # * cinder_store_password # # (string value) #cinder_catalog_info = volumev3::publicURL # # Override service catalog lookup with template for cinder endpoint. # # When this option is set, this value is used to generate cinder endpoint, # instead of looking up from the service catalog. # This value is ignored if ``cinder_store_auth_address``, # ``cinder_store_user_name``, ``cinder_store_project_name``, and # ``cinder_store_password`` are specified. # # If this configuration option is set, ``cinder_catalog_info`` will be ignored. # # Possible values: # * URL template string for cinder endpoint, where ``%%(tenant)s`` is # replaced with the current tenant (project) name. # For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s`` # # Related options: # * cinder_store_auth_address # * cinder_store_user_name # * cinder_store_project_name # * cinder_store_password # * cinder_catalog_info # # (string value) #cinder_endpoint_template = # # Region name to lookup cinder service from the service catalog. # # This is used only when ``cinder_catalog_info`` is used for determining the # endpoint. If set, the lookup for cinder endpoint by this node is filtered to # the specified region. It is useful when multiple regions are listed in the # catalog. If this is not set, the endpoint is looked up from every region. # # Possible values: # * A string that is a valid region name. # # Related options: # * cinder_catalog_info # # (string value) # Deprecated group/name - [glance_store]/os_region_name #cinder_os_region_name = # # Location of a CA certificates file used for cinder client requests. # # The specified CA certificates file, if set, is used to verify cinder # connections via HTTPS endpoint. If the endpoint is HTTP, this value is # ignored. # ``cinder_api_insecure`` must be set to ``True`` to enable the verification. # # Possible values: # * Path to a ca certificates file # # Related options: # * cinder_api_insecure # # (string value) #cinder_ca_certificates_file = # # Number of cinderclient retries on failed http calls. # # When a call failed by any errors, cinderclient will retry the call up to the # specified times after sleeping a few seconds. # # Possible values: # * A positive integer # # Related options: # * None # # (integer value) # Minimum value: 0 #cinder_http_retries = 3 # # Time period, in seconds, to wait for a cinder volume transition to # complete. # # When the cinder volume is created, deleted, or attached to the glance node to # read/write the volume data, the volume's state is changed. For example, the # newly created volume status changes from ``creating`` to ``available`` after # the creation process is completed. This specifies the maximum time to wait # for # the status change. If a timeout occurs while waiting, or the status is # changed # to an unexpected value (e.g. `error``), the image creation fails. # # Possible values: # * A positive integer # # Related options: # * None # # (integer value) # Minimum value: 0 #cinder_state_transition_timeout = 300 # # Allow to perform insecure SSL requests to cinder. 
# # If this option is set to True, HTTPS endpoint connection is verified using # the # CA certificates file specified by ``cinder_ca_certificates_file`` option. # # Possible values: # * True # * False # # Related options: # * cinder_ca_certificates_file # # (boolean value) #cinder_api_insecure = false # # The address where the cinder authentication service is listening. # # When all of ``cinder_store_auth_address``, ``cinder_store_user_name``, # ``cinder_store_project_name``, and ``cinder_store_password`` options are # specified, the specified values are always used for the authentication. # This is useful to hide the image volumes from users by storing them in a # project/tenant specific to the image service. It also enables users to share # the image volume among other projects under the control of glance's ACL. # # If either of these options are not set, the cinder endpoint is looked up # from the service catalog, and current context's user and project are used. # # Possible values: # * A valid authentication service address, for example: # ``http://openstack.example.org/identity/v2.0`` # # Related options: # * cinder_store_user_name # * cinder_store_password # * cinder_store_project_name # # (string value) #cinder_store_auth_address = # # User name to authenticate against cinder. # # This must be used with all the following related options. If any of these are # not specified, the user of the current context is used. # # Possible values: # * A valid user name # # Related options: # * cinder_store_auth_address # * cinder_store_password # * cinder_store_project_name # # (string value) #cinder_store_user_name = # # Password for the user authenticating against cinder. # # This must be used with all the following related options. If any of these are # not specified, the user of the current context is used. # # Possible values: # * A valid password for the user specified by ``cinder_store_user_name`` # # Related options: # * cinder_store_auth_address # * cinder_store_user_name # * cinder_store_project_name # # (string value) #cinder_store_password = # # Project name where the image volume is stored in cinder. # # If this configuration option is not set, the project in current context is # used. # # This must be used with all the following related options. If any of these are # not specified, the project of the current context is used. # # Possible values: # * A valid project name # # Related options: # * ``cinder_store_auth_address`` # * ``cinder_store_user_name`` # * ``cinder_store_password`` # # (string value) #cinder_store_project_name = # # Path to the rootwrap configuration file to use for running commands as root. # # The cinder store requires root privileges to operate the image volumes (for # connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). # The configuration file should allow the required commands by cinder store and # os-brick library. # # Possible values: # * Path to the rootwrap config file # # Related options: # * None # # (string value) #rootwrap_config = /etc/glance/rootwrap.conf # # Volume type that will be used for volume creation in cinder. # # Some cinder backends can have several volume types to optimize storage usage. # Adding this option allows an operator to choose a specific volume type # in cinder that can be optimized for images. # # If this is not set, then the default volume type specified in the cinder # configuration will be used for volume creation. 
# # Possible values: # * A valid volume type from cinder # # Related options: # * None # # NOTE: You cannot use an encrypted volume_type associated with an NFS backend. # An encrypted volume stored on an NFS backend will raise an exception whenever # glance_store tries to write or access image data stored in that volume. # Consult your Cinder administrator to determine an appropriate volume_type. # # (string value) #cinder_volume_type = # # If this is set to True, attachment of volumes for image transfer will # be aborted when multipathd is not running. Otherwise, it will fallback # to single path. # # Possible values: # * True or False # # Related options: # * cinder_use_multipath # # (boolean value) #cinder_enforce_multipath = false # # Flag to identify mutipath is supported or not in the deployment. # # Set it to False if multipath is not supported. # # Possible values: # * True or False # # Related options: # * cinder_enforce_multipath # # (boolean value) #cinder_use_multipath = false # # Directory where the NFS volume is mounted on the glance node. # # Possible values: # # * A string representing absolute path of mount point. # (string value) #cinder_mount_point_base = /var/lib/glance/mnt # # Directory to which the filesystem backend store writes images. # # Upon start up, Glance creates the directory if it doesn't already # exist and verifies write access to the user under which # ``glance-api`` runs. If the write access isn't available, a # ``BadStoreConfiguration`` exception is raised and the filesystem # store may not be available for adding new images. # # NOTE: This directory is used only when filesystem store is used as a # storage backend. Either ``filesystem_store_datadir`` or # ``filesystem_store_datadirs`` option must be specified in # ``glance-api.conf``. If both options are specified, a # ``BadStoreConfiguration`` will be raised and the filesystem store # may not be available for adding new images. # # Possible values: # * A valid path to a directory # # Related options: # * ``filesystem_store_datadirs`` # * ``filesystem_store_file_perm`` # # (string value) #filesystem_store_datadir = /var/lib/glance/images # # List of directories and their priorities to which the filesystem # backend store writes images. # # The filesystem store can be configured to store images in multiple # directories as opposed to using a single directory specified by the # ``filesystem_store_datadir`` configuration option. When using # multiple directories, each directory can be given an optional # priority to specify the preference order in which they should # be used. Priority is an integer that is concatenated to the # directory path with a colon where a higher value indicates higher # priority. When two directories have the same priority, the directory # with most free space is used. When no priority is specified, it # defaults to zero. # # More information on configuring filesystem store with multiple store # directories can be found at # https://docs.openstack.org/glance/latest/configuration/configuring.html # # NOTE: This directory is used only when filesystem store is used as a # storage backend. Either ``filesystem_store_datadir`` or # ``filesystem_store_datadirs`` option must be specified in # ``glance-api.conf``. If both options are specified, a # ``BadStoreConfiguration`` will be raised and the filesystem store # may not be available for adding new images. 
# # Possible values: # * List of strings of the following form: # * ``:`` # # Related options: # * ``filesystem_store_datadir`` # * ``filesystem_store_file_perm`` # # (multi valued) #filesystem_store_datadirs = # # Filesystem store metadata file. # # The path to a file which contains the metadata to be returned with any # location # associated with the filesystem store. Once this option is set, it is used for # new images created afterward only - previously existing images are not # affected. # # The file must contain a valid JSON object. The object should contain the keys # ``id`` and ``mountpoint``. The value for both keys should be a string. # # Possible values: # * A valid path to the store metadata file # # Related options: # * None # # (string value) #filesystem_store_metadata_file = # # File access permissions for the image files. # # Set the intended file access permissions for image data. This provides # a way to enable other services, e.g. Nova, to consume images directly # from the filesystem store. The users running the services that are # intended to be given access to could be made a member of the group # that owns the files created. Assigning a value less then or equal to # zero for this configuration option signifies that no changes be made # to the default permissions. This value will be decoded as an octal # digit. # # For more information, please refer the documentation at # https://docs.openstack.org/glance/latest/configuration/configuring.html # # Possible values: # * A valid file access permission # * Zero # * Any negative integer # # Related options: # * None # # (integer value) #filesystem_store_file_perm = 0 # # Chunk size, in bytes. # # The chunk size used when reading or writing image files. Raising this value # may improve the throughput but it may also slightly increase the memory usage # when handling a large number of requests. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #filesystem_store_chunk_size = 65536 # # Enable or not thin provisioning in this backend. # # This configuration option enable the feature of not really write null byte # sequences on the filesystem, the holes who can appear will automatically # be interpreted by the filesystem as null bytes, and do not really consume # your storage. # Enabling this feature will also speed up image upload and save network trafic # in addition to save space in the backend, as null bytes sequences are not # sent over the network. # # Possible Values: # * True # * False # # Related options: # * None # # (boolean value) #filesystem_thin_provisioning = false # # Path to the CA bundle file. # # This configuration option enables the operator to use a custom # Certificate Authority file to verify the remote server certificate. If # this option is set, the ``https_insecure`` option will be ignored and # the CA file specified will be used to authenticate the server # certificate and establish a secure connection to the server. # # Possible values: # * A valid path to a CA file # # Related options: # * https_insecure # # (string value) #https_ca_certificates_file = # # Set verification of the remote server certificate. # # This configuration option takes in a boolean value to determine # whether or not to verify the remote server certificate. If set to # True, the remote server certificate is not verified. If the option is # set to False, then the default CA truststore is used for verification. 
# # This option is ignored if ``https_ca_certificates_file`` is set. # The remote server certificate will then be verified using the file # specified using the ``https_ca_certificates_file`` option. # # Possible values: # * True # * False # # Related options: # * https_ca_certificates_file # # (boolean value) #https_insecure = true # # The http/https proxy information to be used to connect to the remote # server. # # This configuration option specifies the http/https proxy information # that should be used to connect to the remote server. The proxy # information should be a key value pair of the scheme and proxy, for # example, http:10.0.0.1:3128. You can also specify proxies for multiple # schemes by separating the key value pairs with a comma, for example, # http:10.0.0.1:3128, https:10.0.0.1:1080. # # Possible values: # * A comma separated list of scheme:proxy pairs as described above # # Related options: # * None # # (dict value) #http_proxy_information = # # Size, in megabytes, to chunk RADOS images into. # # Provide an integer value representing the size in megabytes to chunk # Glance images into. The default chunk size is 8 megabytes. For optimal # performance, the value should be a power of two. # # When Ceph's RBD object storage system is used as the storage backend # for storing Glance images, the images are chunked into objects of the # size set using this option. These chunked objects are then stored # across the distributed block data store to use for Glance. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #rbd_store_chunk_size = 8 # # RADOS pool in which images are stored. # # When RBD is used as the storage backend for storing Glance images, the # images are stored by means of logical grouping of the objects (chunks # of images) into a ``pool``. Each pool is defined with the number of # placement groups it can contain. The default pool that is used is # 'images'. # # More information on the RBD storage backend can be found here: # http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ # # Possible Values: # * A valid pool name # # Related options: # * None # # (string value) #rbd_store_pool = images # # RADOS user to authenticate as. # # This configuration option takes in the RADOS user to authenticate as. # This is only needed when RADOS authentication is enabled and is # applicable only if the user is using Cephx authentication. If the # value for this option is not set by the user or is set to None, a # default value will be chosen, which will be based on the client. # section in rbd_store_ceph_conf. # # Possible Values: # * A valid RADOS user # # Related options: # * rbd_store_ceph_conf # # (string value) #rbd_store_user = # # Ceph configuration file path. # # This configuration option specifies the path to the Ceph configuration # file to be used. If the value for this option is not set by the user # or is set to the empty string, librados will read the standard ceph.conf # file by searching the default Ceph configuration file locations in # sequential order. See the Ceph documentation for details. # # NOTE: If using Cephx authentication, this file should include a reference # to the right keyring in a client. section # # NOTE 2: If you leave this option empty (the default), the actual Ceph # configuration file used may change depending on what version of librados # is being used. 
# If it is important for you to know exactly which configuration
# file is in effect, you may specify that file here using this option.
#
# Possible Values:
# * A valid path to a configuration file
#
# Related options:
# * rbd_store_user
#
# (string value)
#rbd_store_ceph_conf =

#
# Timeout value for connecting to Ceph cluster.
#
# This configuration option takes in the timeout value in seconds used
# when connecting to the Ceph cluster i.e. it sets the time to wait for
# glance-api before closing the connection. This prevents glance-api
# hangups during the connection to RBD. If the value for this option
# is set to less than or equal to 0, no timeout is set and the default
# librados value is used.
#
# Possible Values:
# * Any integer value
#
# Related options:
# * None
#
# (integer value)
#rados_connect_timeout = 0

#
# Enable or disable thin provisioning in this backend.
#
# This configuration option enables the feature of not actually writing null
# byte sequences on the RBD backend; the holes that would appear are
# automatically interpreted by Ceph as null bytes and do not really consume
# your storage.
# Enabling this feature will also speed up image upload and save network
# traffic, in addition to saving space in the backend, as null byte sequences
# are not sent over the network.
#
# Possible Values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#rbd_thin_provisioning = false

#
# The host where the S3 server is listening.
#
# This configuration option sets the host of the S3 or S3 compatible storage
# Server. This option is required when using the S3 storage backend.
# The host can contain a DNS name (e.g. s3.amazonaws.com,
# my-object-storage.com) or an IP address (127.0.0.1).
#
# Possible values:
# * A valid DNS name
# * A valid IPv4 address
#
# Related Options:
# * s3_store_access_key
# * s3_store_secret_key
#
# (string value)
#s3_store_host =

#
# The S3 query token access key.
#
# This configuration option takes the access key for authenticating with the
# Amazon S3 or S3 compatible storage server. This option is required when using
# the S3 storage backend.
#
# Possible values:
# * Any string value that is the access key for a user with appropriate
# privileges
#
# Related Options:
# * s3_store_host
# * s3_store_secret_key
#
# (string value)
#s3_store_access_key =

#
# The S3 query token secret key.
#
# This configuration option takes the secret key for authenticating with the
# Amazon S3 or S3 compatible storage server. This option is required when using
# the S3 storage backend.
#
# Possible values:
# * Any string value that is a secret key corresponding to the access key
# specified using the ``s3_store_host`` option
#
# Related Options:
# * s3_store_host
# * s3_store_access_key
#
# (string value)
#s3_store_secret_key =

#
# The S3 bucket to be used to store the Glance data.
#
# This configuration option specifies where the glance images will be stored
# in S3. If ``s3_store_create_bucket_on_put`` is set to true, it will be
# created automatically even if the bucket does not exist.
#
# Possible values:
# * Any string value
#
# Related Options:
# * s3_store_create_bucket_on_put
# * s3_store_bucket_url_format
#
# (string value)
#s3_store_bucket =

#
# Determine whether S3 should create a new bucket.
#
# This configuration option takes a boolean value to indicate whether Glance
# should create a new bucket in S3 if it does not exist.
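# Pulling the S3 connection options above together, a minimal sketch
# (endpoint, credentials and bucket name are placeholders):
#
#   s3_store_host = s3.example.com
#   s3_store_access_key = ACCESS_KEY
#   s3_store_secret_key = SECRET_KEY
#   s3_store_bucket = glance-images
#   s3_store_create_bucket_on_put = true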
#
# Possible values:
# * Any Boolean value
#
# Related Options:
# * None
#
# (boolean value)
#s3_store_create_bucket_on_put = false

#
# The S3 calling format used to determine the object.
#
# This configuration option takes the access model that is used to specify the
# address of an object in an S3 bucket.
#
# NOTE:
# In ``path``-style, the endpoint for the object looks like
# 'https://s3.amazonaws.com/bucket/example.img'.
# And in ``virtual``-style, the endpoint for the object looks like
# 'https://bucket.s3.amazonaws.com/example.img'.
# If you do not follow the DNS naming convention in the bucket name, you can
# get objects in the path style, but not in the virtual style.
#
# Possible values:
# * Any string value of ``auto``, ``virtual``, or ``path``
#
# Related Options:
# * s3_store_bucket
#
# (string value)
#s3_store_bucket_url_format = auto

#
# The size, in MB, at which S3 should start chunking image files and do a
# multipart upload in S3.
#
# This configuration option takes a threshold in MB to determine whether to
# upload the image to S3 as is or to split it (Multipart Upload).
#
# Note: A multipart upload can consist of at most 10,000 parts.
#
# Possible values:
# * Any positive integer value
#
# Related Options:
# * s3_store_large_object_chunk_size
# * s3_store_thread_pools
#
# (integer value)
#s3_store_large_object_size = 100

#
# The multipart upload part size, in MB, that S3 should use when uploading
# parts.
#
# This configuration option takes the image split size in MB for Multipart
# Upload.
#
# Note: A multipart upload can consist of at most 10,000 parts.
#
# Possible values:
# * Any positive integer value (must be greater than or equal to 5M)
#
# Related Options:
# * s3_store_large_object_size
# * s3_store_thread_pools
#
# (integer value)
#s3_store_large_object_chunk_size = 10

#
# The number of thread pools to perform a multipart upload in S3.
#
# This configuration option takes the number of thread pools when performing a
# Multipart Upload.
#
# Possible values:
# * Any positive integer value
#
# Related Options:
# * s3_store_large_object_size
# * s3_store_large_object_chunk_size
#
# (integer value)
#s3_store_thread_pools = 10

#
# Set verification of the server certificate.
#
# This boolean determines whether or not to verify the server
# certificate. If this option is set to True, swiftclient won't check
# for a valid SSL certificate when authenticating. If the option is set
# to False, then the default CA truststore is used for verification.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_cacert
#
# (boolean value)
#swift_store_auth_insecure = false

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to specify the path to
# a custom Certificate Authority file for SSL verification when
# connecting to Swift.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * swift_store_auth_insecure
#
# (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt

#
# The region of Swift endpoint to use by Glance.
#
# Provide a string value representing a Swift region where Glance
# can connect to for image storage. By default, there is no region
# set.
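# A worked instance of the S3 multipart thresholds above (sizes are
# hypothetical): with s3_store_large_object_size = 100 and
# s3_store_large_object_chunk_size = 10, a 120 MB image exceeds the
# 100 MB threshold and is uploaded as a multipart upload of twelve
# 10 MB parts, while a 90 MB image is uploaded as a single object.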
# # When Glance uses Swift as the storage backend to store images # for a specific tenant that has multiple endpoints, setting of a # Swift region with ``swift_store_region`` allows Glance to connect # to Swift in the specified region as opposed to a single region # connectivity. # # This option can be configured for both single-tenant and # multi-tenant storage. # # NOTE: Setting the region with ``swift_store_region`` is # tenant-specific and is necessary ``only if`` the tenant has # multiple endpoints across different regions. # # Possible values: # * A string value representing a valid Swift region. # # Related Options: # * None # # (string value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #swift_store_region = RegionTwo # # The URL endpoint to use for Swift backend storage. # # Provide a string value representing the URL endpoint to use for # storing Glance images in Swift store. By default, an endpoint # is not set and the storage URL returned by ``auth`` is used. # Setting an endpoint with ``swift_store_endpoint`` overrides the # storage URL and is used for Glance image storage. # # NOTE: The URL should include the path up to, but excluding the # container. The location of an object is obtained by appending # the container and object to the configured URL. # # Possible values: # * String value representing a valid URL path up to a Swift container # # Related Options: # * None # # (string value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name # # Endpoint Type of Swift service. # # This string value indicates the endpoint type to use to fetch the # Swift endpoint. The endpoint type determines the actions the user will # be allowed to perform, for instance, reading and writing to the Store. # This setting is only used if swift_store_auth_version is greater than # 1. # # Possible values: # * publicURL # * adminURL # * internalURL # # Related options: # * swift_store_endpoint # # (string value) # Possible values: # publicURL - # adminURL - # internalURL - #swift_store_endpoint_type = publicURL # # Type of Swift service to use. # # Provide a string value representing the service type to use for # storing images while using Swift backend storage. The default # service type is set to ``object-store``. # # NOTE: If ``swift_store_auth_version`` is set to 2, the value for # this configuration option needs to be ``object-store``. If using # a higher version of Keystone or a different auth scheme, this # option may be modified. # # Possible values: # * A string representing a valid service type for Swift storage. # # Related Options: # * None # # (string value) #swift_store_service_type = object-store # # Name of single container to store images/name prefix for multiple containers # # When a single container is being used to store images, this configuration # option indicates the container within the Glance account to be used for # storing all images. When multiple containers are used to store images, this # will be the name prefix for all containers. Usage of single/multiple # containers can be controlled using the configuration option # ``swift_store_multiple_containers_seed``. 
# # When using multiple containers, the containers will be named after the value # set for this configuration option with the first N chars of the image UUID # as the suffix delimited by an underscore (where N is specified by # ``swift_store_multiple_containers_seed``). # # Example: if the seed is set to 3 and swift_store_container = ``glance``, then # an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed # in # the container ``glance_fda``. All dashes in the UUID are included when # creating the container name but do not count toward the character limit, so # when N=10 the container name would be ``glance_fdae39a1-ba.`` # # Possible values: # * If using single container, this configuration option can be any string # that is a valid swift container name in Glance's Swift account # * If using multiple containers, this configuration option can be any # string as long as it satisfies the container naming rules enforced by # Swift. The value of ``swift_store_multiple_containers_seed`` should be # taken into account as well. # # Related options: # * ``swift_store_multiple_containers_seed`` # * ``swift_store_multi_tenant`` # * ``swift_store_create_container_on_put`` # # (string value) #swift_store_container = glance # # The size threshold, in MB, after which Glance will start segmenting image # data. # # Swift has an upper limit on the size of a single uploaded object. By default, # this is 5GB. To upload objects bigger than this limit, objects are segmented # into multiple smaller objects that are tied together with a manifest file. # For more detail, refer to # https://docs.openstack.org/swift/latest/overview_large_objects.html # # This configuration option specifies the size threshold over which the Swift # driver will start segmenting image data into multiple smaller files. # Currently, the Swift driver only supports creating Dynamic Large Objects. # # NOTE: This should be set by taking into account the large object limit # enforced by the Swift cluster in consideration. # # Possible values: # * A positive integer that is less than or equal to the large object limit # enforced by the Swift cluster in consideration. # # Related options: # * ``swift_store_large_object_chunk_size`` # # (integer value) # Minimum value: 1 #swift_store_large_object_size = 5120 # # The maximum size, in MB, of the segments when image data is segmented. # # When image data is segmented to upload images that are larger than the limit # enforced by the Swift cluster, image data is broken into segments that are no # bigger than the size specified by this configuration option. # Refer to ``swift_store_large_object_size`` for more detail. # # For example: if ``swift_store_large_object_size`` is 5GB and # ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will # be # segmented into 7 segments where the first six segments will be 1GB in size # and # the seventh segment will be 0.2GB. # # Possible values: # * A positive integer that is less than or equal to the large object limit # enforced by Swift cluster in consideration. # # Related options: # * ``swift_store_large_object_size`` # # (integer value) # Minimum value: 1 #swift_store_large_object_chunk_size = 200 # # Create container, if it doesn't already exist, when uploading image. # # At the time of uploading an image, if the corresponding container doesn't # exist, it will be created provided this configuration option is set to True. # By default, it won't be created. 
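# Pulling the Swift container options together, a single-container,
# single-tenant sketch (the container name is a placeholder):
#
#   swift_store_container = glance
#   swift_store_create_container_on_put = true
#   swift_store_large_object_size = 5120
#   swift_store_large_object_chunk_size = 200
#
# With these values, a 6 GB image is segmented because it exceeds the
# 5120 MB threshold; smaller images are stored as single objects.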
# This behavior is applicable for both single
# and multiple containers mode.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#swift_store_create_container_on_put = false

#
# Store images in tenant's Swift account.
#
# This enables multi-tenant storage mode which causes Glance images to be
# stored in tenant specific Swift accounts. If this is disabled, Glance stores
# all images in its own account. More details on the multi-tenant store can be
# found at
# https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
#
# NOTE: If using multi-tenant swift store, please make sure
# that you do not set a swift configuration file with the
# 'swift_store_config_file' option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_config_file
#
# (boolean value)
#swift_store_multi_tenant = false

#
# Seed indicating the number of containers to use for storing images.
#
# When using a single-tenant store, images can be stored in one or more
# containers. When set to 0, all images will be stored in one single container.
# When set to an integer value between 1 and 32, multiple containers will be
# used to store images. This configuration option will determine how many
# containers are created. The total number of containers that will be used is
# equal to 16^N, so if this config option is set to 2, then 16^2=256 containers
# will be used to store images.
#
# Please refer to ``swift_store_container`` for more detail on the naming
# convention. More detail about using multiple containers can be found at
# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html
#
# NOTE: This is used only when swift_store_multi_tenant is disabled.
#
# Possible values:
# * A non-negative integer less than or equal to 32
#
# Related options:
# * ``swift_store_container``
# * ``swift_store_multi_tenant``
# * ``swift_store_create_container_on_put``
#
# (integer value)
# Minimum value: 0
# Maximum value: 32
#swift_store_multiple_containers_seed = 0

#
# List of tenants that will be granted admin access.
#
# This is a list of tenants that will be granted read/write access on
# all Swift containers created by Glance in multi-tenant mode. The
# default value is an empty list.
#
# Possible values:
# * A comma separated list of strings representing UUIDs of Keystone
# projects/tenants
#
# Related options:
# * None
#
# (list value)
#swift_store_admin_tenants =

#
# SSL layer compression for HTTPS Swift requests.
#
# Provide a boolean value to determine whether or not to compress
# HTTPS Swift requests for images at the SSL layer. By default,
# compression is enabled.
#
# When using Swift as the backend store for Glance image storage,
# SSL layer compression of HTTPS Swift requests can be set using
# this option. If set to False, SSL layer compression of HTTPS
# Swift requests is disabled. Disabling this option may improve
# performance for images which are already in a compressed format,
# for example, qcow2.
#
# Possible values:
# * True
# * False
#
# Related Options:
# * None
#
# (boolean value)
#swift_store_ssl_compression = true

#
# The number of times a Swift download will be retried before the
# request fails.
#
# Provide an integer value representing the number of times an image
# download must be retried before erroring out. The default value is
# zero (no retry on a failed image download).
When set to a positive # integer value, ``swift_store_retry_get_count`` ensures that the # download is attempted this many more times upon a download failure # before sending an error message. # # Possible values: # * Zero # * Positive integer value # # Related Options: # * None # # (integer value) # Minimum value: 0 #swift_store_retry_get_count = 0 # # Time in seconds defining the size of the window in which a new # token may be requested before the current token is due to expire. # # Typically, the Swift storage driver fetches a new token upon the # expiration of the current token to ensure continued access to # Swift. However, some Swift transactions (like uploading image # segments) may not recover well if the token expires on the fly. # # Hence, by fetching a new token before the current token expiration, # we make sure that the token does not expire or is close to expiry # before a transaction is attempted. By default, the Swift storage # driver requests for a new token 60 seconds or less before the # current token expiration. # # Possible values: # * Zero # * Positive integer value # # Related Options: # * None # # (integer value) # Minimum value: 0 #swift_store_expire_soon_interval = 60 # # Use trusts for multi-tenant Swift store. # # This option instructs the Swift store to create a trust for each # add/get request when the multi-tenant store is in use. Using trusts # allows the Swift store to avoid problems that can be caused by an # authentication token expiring during the upload or download of data. # # By default, ``swift_store_use_trusts`` is set to ``True``(use of # trusts is enabled). If set to ``False``, a user token is used for # the Swift connection instead, eliminating the overhead of trust # creation. # # NOTE: This option is considered only when # ``swift_store_multi_tenant`` is set to ``True`` # # Possible values: # * True # * False # # Related options: # * swift_store_multi_tenant # # (boolean value) #swift_store_use_trusts = true # # Buffer image segments before upload to Swift. # # Provide a boolean value to indicate whether or not Glance should # buffer image data to disk while uploading to swift. This enables # Glance to resume uploads on error. # # NOTES: # When enabling this option, one should take great care as this # increases disk usage on the API node. Be aware that depending # upon how the file system is configured, the disk space used # for buffering may decrease the actual disk space available for # the glance image cache. Disk utilization will cap according to # the following equation: # (``swift_store_large_object_chunk_size`` * ``workers`` * 1000) # # Possible values: # * True # * False # # Related options: # * swift_upload_buffer_dir # # (boolean value) #swift_buffer_on_upload = false # # Reference to default Swift account/backing store parameters. # # Provide a string value representing a reference to the default set # of parameters required for using swift account/backing store for # image storage. The default reference value for this configuration # option is 'ref1'. This configuration option dereferences the # parameters and facilitates image storage in Swift storage backend # every time a new image is added. # # Possible values: # * A valid string value # # Related options: # * None # # (string value) #default_swift_reference = ref1 # DEPRECATED: Version of the authentication service to use. Valid versions are # 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string # value) # This option is deprecated for removal. 
# Its value may be silently ignored in the future. # Reason: # The option 'auth_version' in the Swift back-end configuration file is # used instead. #swift_store_auth_version = 2 # DEPRECATED: The address where the Swift authentication service is listening. # (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'auth_address' in the Swift back-end configuration file is # used instead. #swift_store_auth_address = # DEPRECATED: The user to authenticate against the Swift authentication # service. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'user' in the Swift back-end configuration file is set instead. #swift_store_user = # DEPRECATED: Auth key for the user authenticating against the Swift # authentication service. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'key' in the Swift back-end configuration file is used # to set the authentication key instead. #swift_store_key = # # Absolute path to the file containing the swift account(s) # configurations. # # Include a string value representing the path to a configuration # file that has references for each of the configured Swift # account(s)/backing stores. By default, no file path is specified # and customized Swift referencing is disabled. Configuring this # option is highly recommended while using Swift storage backend for # image storage as it avoids storage of credentials in the database. # # NOTE: Please do not configure this option if you have set # ``swift_store_multi_tenant`` to ``True``. # # Possible values: # * String value representing an absolute path on the glance-api # node # # Related options: # * swift_store_multi_tenant # # (string value) #swift_store_config_file = # # Directory to buffer image segments before upload to Swift. # # Provide a string value representing the absolute path to the # directory on the glance node where image segments will be # buffered briefly before they are uploaded to swift. # # NOTES: # * This is required only when the configuration option # ``swift_buffer_on_upload`` is set to True. # * This directory should be provisioned keeping in mind the # ``swift_store_large_object_chunk_size`` and the maximum # number of images that could be uploaded simultaneously by # a given glance node. # # Possible values: # * String value representing an absolute directory path # # Related options: # * swift_buffer_on_upload # * swift_store_large_object_chunk_size # # (string value) #swift_upload_buffer_dir = # # Address of the ESX/ESXi or vCenter Server target system. # # This configuration option sets the address of the ESX/ESXi or vCenter # Server target system. This option is required when using the VMware # storage backend. The address can contain an IP address (127.0.0.1) or # a DNS name (www.my-domain.com). # # Possible Values: # * A valid IPv4 or IPv6 address # * A valid DNS name # # Related options: # * vmware_server_username # * vmware_server_password # # (host address value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #vmware_server_host = 127.0.0.1 # # Server username. # # This configuration option takes the username for authenticating with # the VMware ESX/ESXi or vCenter Server. This option is required when # using the VMware storage backend. 
#
# Possible Values:
# * Any string that is the username for a user with appropriate
# privileges
#
# Related options:
# * vmware_server_host
# * vmware_server_password
#
# (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#vmware_server_username = root

#
# Server password.
#
# This configuration option takes the password for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
# * Any string that is a password corresponding to the username
# specified using the "vmware_server_username" option
#
# Related options:
# * vmware_server_host
# * vmware_server_username
#
# (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#vmware_server_password = vmware

#
# The number of VMware API retries.
#
# This configuration option specifies the number of times the VMware
# ESX/VC server API must be retried upon connection related issues or
# server API call overload. It is not possible to specify 'retry
# forever'.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_api_retry_count = 10

#
# Interval in seconds used for polling remote tasks invoked on VMware
# ESX/VC server.
#
# This configuration option takes in the sleep time in seconds for polling an
# on-going async task as part of the VMWare ESX/VC server API call.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_task_poll_interval = 5

#
# The directory where the glance images will be stored in the datastore.
#
# This configuration option specifies the path to the directory where the
# glance images will be stored in the VMware datastore. If this option
# is not set, the default directory where the glance images are stored
# is openstack_glance.
#
# Possible Values:
# * Any string that is a valid path to a directory
#
# Related options:
# * None
#
# (string value)
#vmware_store_image_dir = /openstack_glance

#
# Set verification of the ESX/vCenter server certificate.
#
# This configuration option takes a boolean value to determine
# whether or not to verify the ESX/vCenter server certificate. If this
# option is set to True, the ESX/vCenter server certificate is not
# verified. If this option is set to False, then the default CA
# truststore is used for verification.
#
# This option is ignored if the "vmware_ca_file" option is set. In that
# case, the ESX/vCenter server certificate will then be verified using
# the file specified using the "vmware_ca_file" option.
#
# Possible Values:
# * True
# * False
#
# Related options:
# * vmware_ca_file
#
# (boolean value)
# Deprecated group/name - [glance_store]/vmware_api_insecure
#vmware_insecure = false

#
# Absolute path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the ESX/vCenter certificate.
#
# If this option is set, the "vmware_insecure" option will be ignored
# and the CA file specified will be used to authenticate the ESX/vCenter
# server certificate and establish a secure connection to the server.
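# For illustration, a vCenter connection that pins a CA bundle (all
# values are placeholders):
#
#   vmware_server_host = vcenter.example.com
#   vmware_server_username = svc-glance
#   vmware_server_password = CHANGE_ME
#   vmware_ca_file = /etc/glance/vcenter-ca.pem
#   vmware_insecure = false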
#
# Possible Values:
# * Any string that is a valid absolute path to a CA file
#
# Related options:
# * vmware_insecure
#
# (string value)
#
# This option has a sample default set, which means that
# its actual default value may vary from the one documented
# below.
#vmware_ca_file = /etc/ssl/certs/ca-certificates.crt

#
# The datastores where the image can be stored.
#
# This configuration option specifies the datastores where the image can
# be stored in the VMWare store backend. This option may be specified
# multiple times for specifying multiple datastores. The datastore name
# should be specified after its datacenter path, separated by ":". An
# optional weight may be given after the datastore name, separated again
# by ":" to specify the priority. Thus, the required format becomes
# <datacenter_path>:<datastore_name>:<optional_weight>.
#
# When adding an image, the datastore with highest weight will be
# selected, unless there is not enough free space available in cases
# where the image size is already known. If no weight is given, it is
# assumed to be zero and the directory will be considered for selection
# last. If multiple datastores have the same weight, then the one with
# the most free space available is selected.
#
# Possible Values:
# * Any string of the format:
# <datacenter_path>:<datastore_name>:<optional_weight>
#
# Related options:
# * None
#
# (multi valued)
#vmware_datastores =

[healthcheck]

#
# From oslo.middleware
#

# DEPRECATED: The path to respond to healthcheck requests on. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#path = /healthcheck

# Show more detailed information as part of the response. Security note:
# Enabling this option may expose sensitive details about the service being
# monitored. Be sure to verify that it will not violate your security policies.
# (boolean value)
#detailed = false

# Additional backends that can perform health checks and report that
# information back as part of a request. (list value)
#backends =

# Check the presence of a file to determine if an application is running on a
# port. Used by DisableByFileHealthcheck plugin. (string value)
#disable_by_file_path =

# Check the presence of a file based on a port to determine if an application
# is running on a port. Expects a "port:path" list of strings. Used by
# DisableByFilesPortsHealthcheck plugin. (list value)
#disable_by_file_paths =

[k8s_vim]

#
# From tacker.nfvo.drivers.vim.kubernetes_driver
#

# Use barbican to encrypt vim password if True, save vim credentials in local
# file system if False (boolean value)
#use_barbican = true

[key_manager]

#
# From tacker.keymgr
#

# The full class name of the key manager API class (string value)
#api_class = tacker.keymgr.barbican_key_manager.BarbicanKeyManager

[keystone_authtoken]
memcached_servers = localhost:11211
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = service
user_domain_name = Default
password = devstack
username = tacker
auth_url = http://192.168.33.11/identity
interface = public
auth_type = password

#
# From keystonemiddleware.auth_token
#

# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies.
If you're using a versioned v2 endpoint here, then this should # *not* be the same endpoint the service user utilizes for validating tokens, # because normal end users may not be able to reach that endpoint. (string # value) # Deprecated group/name - [keystone_authtoken]/auth_uri #www_authenticate_uri = # DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not # be an "admin" endpoint, as it should be accessible by all end users. # Unauthenticated clients are redirected to this endpoint to authenticate. # Although this endpoint should ideally be unversioned, client support in the # wild varies. If you're using a versioned v2 endpoint here, then this should # *not* be the same endpoint the service user utilizes for validating tokens, # because normal end users may not be able to reach that endpoint. This option # is deprecated in favor of www_authenticate_uri and will be removed in the S # release. (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: The auth_uri option is deprecated in favor of www_authenticate_uri # and will be removed in the S release. #auth_uri = # API version of the Identity API endpoint. (string value) #auth_version = # Interface to use for the Identity API endpoint. Valid values are "public", # "internal" (default) or "admin". (string value) #interface = internal # Do not handle authorization requests within the middleware, but delegate the # authorization decision to downstream WSGI components. (boolean value) #delay_auth_decision = false # Request timeout value for communicating with Identity API server. (integer # value) #http_connect_timeout = # How many times are we trying to reconnect when communicating with Identity # API Server. (integer value) #http_request_max_retries = 3 # Request environment key where the Swift cache object is stored. When # auth_token middleware is deployed with a Swift cache, use this option to have # the middleware share a caching backend with swift. Otherwise, use the # ``memcached_servers`` option instead. (string value) #cache = # Required if identity server requires client certificate (string value) #certfile = # Required if identity server requires client certificate (string value) #keyfile = # A PEM encoded Certificate Authority to use when verifying HTTPs connections. # Defaults to system CAs. (string value) #cafile = # Verify HTTPS connections. (boolean value) #insecure = false # The region in which the identity server can be found. (string value) #region_name = # Optionally specify a list of memcached server(s) to use for caching. If left # undefined, tokens will instead be cached in-process. (list value) # Deprecated group/name - [keystone_authtoken]/memcache_servers #memcached_servers = # In order to prevent excessive effort spent validating tokens, the middleware # caches previously-seen tokens for a configurable duration (in seconds). Set # to -1 to disable caching completely. (integer value) #token_cache_time = 300 # (Optional) If defined, indicate whether token data should be authenticated or # authenticated and encrypted. If MAC, token data is authenticated (with HMAC) # in the cache. If ENCRYPT, token data is encrypted and authenticated in the # cache. If the value is not one of these options or empty, auth_token will # raise an exception on initialization. 
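# For illustration, enabling encrypted token caching (the secret is a
# placeholder; memcache_secret_key is mandatory once a strategy is set,
# as noted below):
#
#   memcache_security_strategy = ENCRYPT
#   memcache_secret_key = REPLACE_WITH_RANDOM_SECRET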
(string value) # Possible values: # None - # MAC - # ENCRYPT - #memcache_security_strategy = None # (Optional, mandatory if memcache_security_strategy is defined) This string is # used for key derivation. (string value) #memcache_secret_key = # (Optional) Number of seconds memcached server is considered dead before it is # tried again. (integer value) #memcache_pool_dead_retry = 300 # (Optional) Maximum total number of open connections to every memcached # server. (integer value) #memcache_pool_maxsize = 10 # (Optional) Socket timeout in seconds for communicating with a memcached # server. (integer value) #memcache_pool_socket_timeout = 3 # (Optional) Number of seconds a connection to memcached is held unused in the # pool before it is closed. (integer value) #memcache_pool_unused_timeout = 60 # (Optional) Number of seconds that an operation will wait to get a memcached # client connection from the pool. (integer value) #memcache_pool_conn_get_timeout = 10 # (Optional) Use the advanced (eventlet safe) memcached client pool. (boolean # value) #memcache_use_advanced_pool = true # (Optional) Indicate whether to set the X-Service-Catalog header. If False, # middleware will not ask for service catalog on token validation and will not # set the X-Service-Catalog header. (boolean value) #include_service_catalog = true # Used to control the use and type of token binding. Can be set to: "disabled" # to not check token binding. "permissive" (default) to validate binding # information if the bind type is of a form known to the server and ignore it # if not. "strict" like "permissive" but if the bind type is unknown the token # will be rejected. "required" any form of token binding is needed to be # allowed. Finally the name of a binding method that must be present in tokens. # (string value) #enforce_token_bind = permissive # A choice of roles that must be present in a service token. Service tokens are # allowed to request that an expired token can be used and so this check should # tightly control that only actual services should be sending this token. Roles # here are applied as an ANY check so any role in this list must be present. # For backwards compatibility reasons this currently only affects the # allow_expired check. (list value) #service_token_roles = service # For backwards compatibility reasons we must let valid service tokens pass # that don't pass the service_token_roles check as valid. Setting this true # will become the default in a future release and should be enabled if # possible. (boolean value) #service_token_roles_required = false # The name or type of the service as it appears in the service catalog. This is # used to validate tokens that have restricted access rules. 
(string value) #service_type = # Authentication type to load (string value) # Deprecated group/name - [keystone_authtoken]/auth_plugin #auth_type = # Config Section from which to load plugin specific options (string value) #auth_section = [kubernetes_vim] # # From tacker.vnfm.infra_drivers.kubernetes.kubernetes_driver # # Number of attempts to retry for stack creation/deletion (integer value) #stack_retries = 100 # Wait time (in seconds) between consecutive stack create/delete retries # (integer value) #stack_retry_wait = 5 [monitor] # # From tacker.vnfm.monitor # # check interval for monitor (integer value) #check_intvl = 10 [monitor_http_ping] # # From tacker.vnfm.monitor_drivers.http_ping.http_ping # # Number of times to retry (integer value) #retry = 5 # Number of seconds to wait for a response (integer value) #timeout = 1 # HTTP port number to send request (integer value) #port = 80 [monitor_ping] # # From tacker.vnfm.monitor_drivers.ping.ping # # Number of ICMP packets to send (integer value) #count = 5 # Number of seconds to wait for a response (floating point value) #timeout = 5 # Number of seconds to wait between packets (floating point value) #interval = 1 # Number of ping retries (integer value) #retry = 1 [nfvo_vim] # # From tacker.nfvo.nfvo_plugin # # VIM driver for launching VNFs (list value) #vim_drivers = openstack,kubernetes # Interval to check for VIM health (integer value) #monitor_interval = 30 [openstack_vim] # # From tacker.vnfm.infra_drivers.openstack.openstack # # Number of attempts to retry for stack creation/deletion (integer value) #stack_retries = 60 # Wait time (in seconds) between consecutive stack create/delete retries # (integer value) #stack_retry_wait = 10 [openwrt] # # From tacker.vnfm.mgmt_drivers.openwrt.openwrt # # User name to login openwrt (string value) #user = root # Password to login openwrt (string value) #password = [oslo_messaging_amqp] # # From oslo.messaging # # Name for the AMQP container. must be globally unique. Defaults to a generated # UUID (string value) #container_name = # Timeout for inactive connections (in seconds) (integer value) #idle_timeout = 0 # Debug: dump AMQP frames to stdout (boolean value) #trace = false # Attempt to connect via SSL. If no other ssl-related parameters are given, it # will use the system's CA-bundle to verify the server's certificate. (boolean # value) #ssl = false # CA certificate PEM file used to verify the server's certificate (string # value) #ssl_ca_file = # Self-identifying certificate PEM file for client authentication (string # value) #ssl_cert_file = # Private key PEM file used to sign ssl_cert_file certificate (optional) # (string value) #ssl_key_file = # Password for decrypting ssl_key_file (if encrypted) (string value) #ssl_key_password = # By default SSL checks that the name in the server's certificate matches the # hostname in the transport_url. In some configurations it may be preferable to # use the virtual hostname instead, for example if the server uses the Server # Name Indication TLS extension (rfc6066) to provide a certificate per virtual # host. Set ssl_verify_vhost to True if the server's SSL certificate uses the # virtual host name instead of the DNS name. 
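# For illustration, verifying the broker certificate against the
# virtual host name when SNI is in use (the CA path is hypothetical):
#
#   [oslo_messaging_amqp]
#   ssl = true
#   ssl_ca_file = /etc/pki/amqp-ca.pem
#   ssl_verify_vhost = true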
(boolean value) #ssl_verify_vhost = false # Space separated list of acceptable SASL mechanisms (string value) #sasl_mechanisms = # Path to directory that contains the SASL configuration (string value) #sasl_config_dir = # Name of configuration file (without .conf suffix) (string value) #sasl_config_name = # SASL realm to use if no realm present in username (string value) #sasl_default_realm = # Seconds to pause before attempting to re-connect. (integer value) # Minimum value: 1 #connection_retry_interval = 1 # Increase the connection_retry_interval by this many seconds after each # unsuccessful failover attempt. (integer value) # Minimum value: 0 #connection_retry_backoff = 2 # Maximum limit for connection_retry_interval + connection_retry_backoff # (integer value) # Minimum value: 1 #connection_retry_interval_max = 30 # Time to pause between re-connecting an AMQP 1.0 link that failed due to a # recoverable error. (integer value) # Minimum value: 1 #link_retry_delay = 10 # The maximum number of attempts to re-send a reply message which failed due to # a recoverable error. (integer value) # Minimum value: -1 #default_reply_retry = 0 # The deadline for an rpc reply message delivery. (integer value) # Minimum value: 5 #default_reply_timeout = 30 # The deadline for an rpc cast or call message delivery. Only used when caller # does not provide a timeout expiry. (integer value) # Minimum value: 5 #default_send_timeout = 30 # The deadline for a sent notification message delivery. Only used when caller # does not provide a timeout expiry. (integer value) # Minimum value: 5 #default_notify_timeout = 30 # The duration to schedule a purge of idle sender links. Detach link after # expiry. (integer value) # Minimum value: 1 #default_sender_link_timeout = 600 # Indicates the addressing mode used by the driver. # Permitted values: # 'legacy' - use legacy non-routable addressing # 'routable' - use routable addresses # 'dynamic' - use legacy addresses if the message bus does not support routing # otherwise use routable addressing (string value) #addressing_mode = dynamic # Enable virtual host support for those message buses that do not natively # support virtual hosting (such as qpidd). When set to true the virtual host # name will be added to all message bus addresses, effectively creating a # private 'subnet' per virtual host. Set to False if the message bus supports # virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative # as the name of the virtual host. (boolean value) #pseudo_vhost = true # address prefix used when sending to a specific server (string value) #server_request_prefix = exclusive # address prefix used when broadcasting to all servers (string value) #broadcast_prefix = broadcast # address prefix when sending to any server in group (string value) #group_request_prefix = unicast # Address prefix for all generated RPC addresses (string value) #rpc_address_prefix = openstack.org/om/rpc # Address prefix for all generated Notification addresses (string value) #notify_address_prefix = openstack.org/om/notify # Appended to the address prefix when sending a fanout message. Used by the # message bus to identify fanout messages. (string value) #multicast_address = multicast # Appended to the address prefix when sending to a particular RPC/Notification # server. Used by the message bus to identify messages sent to a single # destination. (string value) #unicast_address = unicast # Appended to the address prefix when sending to a group of consumers. 
Used by # the message bus to identify messages that should be delivered in a round- # robin fashion across consumers. (string value) #anycast_address = anycast # Exchange name used in notification addresses. # Exchange name resolution precedence: # Target.exchange if set # else default_notification_exchange if set # else control_exchange if set # else 'notify' (string value) #default_notification_exchange = # Exchange name used in RPC addresses. # Exchange name resolution precedence: # Target.exchange if set # else default_rpc_exchange if set # else control_exchange if set # else 'rpc' (string value) #default_rpc_exchange = # Window size for incoming RPC Reply messages. (integer value) # Minimum value: 1 #reply_link_credit = 200 # Window size for incoming RPC Request messages (integer value) # Minimum value: 1 #rpc_server_credit = 100 # Window size for incoming Notification messages (integer value) # Minimum value: 1 #notify_server_credit = 100 # Send messages of this type pre-settled. # Pre-settled messages will not receive acknowledgement # from the peer. Note well: pre-settled messages may be # silently discarded if the delivery fails. # Permitted values: # 'rpc-call' - send RPC Calls pre-settled # 'rpc-reply'- send RPC Replies pre-settled # 'rpc-cast' - Send RPC Casts pre-settled # 'notify' - Send Notifications pre-settled # (multi valued) #pre_settled = rpc-cast #pre_settled = rpc-reply [oslo_messaging_kafka] # # From oslo.messaging # # Max fetch bytes of Kafka consumer (integer value) #kafka_max_fetch_bytes = 1048576 # Default timeout(s) for Kafka consumers (floating point value) #kafka_consumer_timeout = 1.0 # DEPRECATED: Pool Size for Kafka Consumers (integer value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Driver no longer uses connection pool. #pool_size = 10 # DEPRECATED: The pool size limit for connections expiration policy (integer # value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Driver no longer uses connection pool. #conn_pool_min_size = 2 # DEPRECATED: The time-to-live in sec of idle connections in the pool (integer # value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Driver no longer uses connection pool. #conn_pool_ttl = 1200 # Group id for Kafka consumer. Consumers in one group will coordinate message # consumption (string value) #consumer_group = oslo_messaging_consumer # Upper bound on the delay for KafkaProducer batching in seconds (floating # point value) #producer_batch_timeout = 0.0 # Size of batch for the producer async send (integer value) #producer_batch_size = 16384 # The compression codec for all data generated by the producer. If not set, # compression will not be used. 
Note that the allowed values of this depend on # the kafka version (string value) # Possible values: # none - # gzip - # snappy - # lz4 - # zstd - #compression_codec = none # Enable asynchronous consumer commits (boolean value) #enable_auto_commit = false # The maximum number of records returned in a poll call (integer value) #max_poll_records = 500 # Protocol used to communicate with brokers (string value) # Possible values: # PLAINTEXT - # SASL_PLAINTEXT - # SSL - # SASL_SSL - #security_protocol = PLAINTEXT # Mechanism when security protocol is SASL (string value) #sasl_mechanism = PLAIN # CA certificate PEM file used to verify the server certificate (string value) #ssl_cafile = # Client certificate PEM file used for authentication. (string value) #ssl_client_cert_file = # Client key PEM file used for authentication. (string value) #ssl_client_key_file = # Client key password file used for authentication. (string value) #ssl_client_key_password = [oslo_messaging_notifications] # # From oslo.messaging # # The Drivers(s) to handle sending notifications. Possible values are # messaging, messagingv2, routing, log, test, noop (multi valued) # Deprecated group/name - [DEFAULT]/notification_driver #driver = # A URL representing the messaging driver to use for notifications. If not set, # we fall back to the same configuration used for RPC. (string value) # Deprecated group/name - [DEFAULT]/notification_transport_url #transport_url = # AMQP topic used for OpenStack notifications. (list value) # Deprecated group/name - [rpc_notifier2]/topics # Deprecated group/name - [DEFAULT]/notification_topics #topics = notifications # The maximum number of attempts to re-send a notification message which failed # to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite # (integer value) #retry = -1 [oslo_messaging_rabbit] # # From oslo.messaging # # Use durable queues in AMQP. (boolean value) #amqp_durable_queues = false # Auto-delete queues in AMQP. (boolean value) #amqp_auto_delete = false # Connect over SSL. (boolean value) # Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl #ssl = false # SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and # SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some # distributions. (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version #ssl_version = # SSL key file (valid only if SSL enabled). (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile #ssl_key_file = # SSL cert file (valid only if SSL enabled). (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile #ssl_cert_file = # SSL certification authority file (valid only if SSL enabled). (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs #ssl_ca_file = # DEPRECATED: Run the health check heartbeat thread through a native python # thread by default. If this option is equal to False then the health check # heartbeat will inherit the execution model from the parent process. For # example if the parent process has monkey patched the stdlib by using # eventlet/greenlet then the heartbeat will be run through a green thread. # (boolean value) # This option is deprecated for removal. # Its value may be silently ignored in the future. #heartbeat_in_pthread = true # How long to wait before reconnecting in response to an AMQP consumer cancel # notification. 
# (floating point value)
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression =

# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Possible values:
# round-robin -
# shuffle -
#kombu_failover_strategy = round-robin

# The RabbitMQ login method. (string value)
# Possible values:
# PLAIN -
# AMQPLAIN -
# RABBIT-CR-DEMO -
#rabbit_login_method = AMQPLAIN

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to back off between retries when connecting to RabbitMQ. (integer
# value)
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the Rabbit broker is considered down if
# heartbeat's keep-alive fails (0 disables heartbeat). (integer value)
#heartbeat_timeout_threshold = 60

# How many times during the heartbeat_timeout_threshold we check the
# heartbeat. (integer value)
#heartbeat_rate = 2

# DEPRECATED: (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for
# direct send. The direct send is used as reply, so the MessageUndeliverable
# exception is raised in case the client queue does not exist. The
# MessageUndeliverable exception will be used to loop for a timeout to give
# the sender a chance to recover. This flag is deprecated and it will no
# longer be possible to deactivate this functionality. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Mandatory flag no longer deactivable.
#direct_mandatory_flag = true

# Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and
# notify consumers when queue is down (boolean value)
#enable_cancel_on_failover = false

[oslo_middleware]

#
# From oslo.middleware
#

# The maximum body size for each request, in bytes.
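# Returning to the RabbitMQ heartbeat options above: with
# heartbeat_timeout_threshold = 60 and heartbeat_rate = 2, the client
# checks the heartbeat every 30 seconds and considers the broker down
# after 60 seconds without a successful keep-alive.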
(integer value) # Deprecated group/name - [DEFAULT]/osapi_max_request_body_size # Deprecated group/name - [DEFAULT]/max_request_body_size #max_request_body_size = 114688 # DEPRECATED: The HTTP Header that will be used to determine what the original # request protocol scheme was, even if it was hidden by a SSL termination # proxy. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. #secure_proxy_ssl_header = X-Forwarded-Proto # Whether the application is behind a proxy or not. This determines if the # middleware should parse the headers or not. (boolean value) #enable_proxy_headers_parsing = false [oslo_policy] # # From oslo.policy # # This option controls whether or not to enforce scope when evaluating # policies. If ``True``, the scope of the token used in the request is compared # to the ``scope_types`` of the policy being enforced. If the scopes do not # match, an ``InvalidScope`` exception will be raised. If ``False``, a message # will be logged informing operators that policies are being invoked with # mismatching scope. (boolean value) #enforce_scope = false # This option controls whether or not to use old deprecated defaults when # evaluating policies. If ``True``, the old deprecated defaults are not going # to be evaluated. This means if any existing token is allowed for old defaults # but is disallowed for new defaults, it will be disallowed. It is encouraged # to enable this flag along with the ``enforce_scope`` flag so that you can get # the benefits of new defaults and ``scope_type`` together (boolean value) #enforce_new_defaults = false # The relative or absolute path of a file that maps roles to permissions for a # given service. Relative paths must be specified in relation to the # configuration file setting this option. (string value) #policy_file = policy.yaml # Default rule. Enforced when a requested rule is not found. (string value) #policy_default_rule = default # Directories where policy configuration files are stored. They can be relative # to any directory in the search path defined by the config_dir option, or # absolute paths. The file defined by policy_file must exist for these # directories to be searched. Missing or empty directories are ignored. 
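# For illustration, opting in to the new policy defaults together with
# scope enforcement, as encouraged above:
#
#   [oslo_policy]
#   enforce_scope = true
#   enforce_new_defaults = true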
(multi
# valued)
#policy_dirs = policy.d

# Content Type to send and receive data for REST based policy check (string
# value)
# Possible values:
# application/x-www-form-urlencoded -
# application/json -
#remote_content_type = application/x-www-form-urlencoded

# Server identity verification for REST based policy check (boolean value)
#remote_ssl_verify_server_crt = false

# Absolute path to CA cert file for REST based policy check (string value)
#remote_ssl_ca_crt_file =

# Absolute path to client cert for REST based policy check (string value)
#remote_ssl_client_crt_file =

# Absolute path to client key file for REST based policy check (string value)
#remote_ssl_client_key_file =


[tacker]

#
# From tacker.vnflcm.vnflcm_driver
#

# Hosting vnf drivers tacker plugin will use (list value)
#vnflcm_infra_driver = openstack,kubernetes

# MGMT driver to communicate with Hosting VNF/logical service instance tacker
# plugin will use (list value)
#vnflcm_mgmt_driver = vnflcm_noop

#
# From tacker.vnfm.monitor
#

# Monitor driver to communicate with Hosting VNF/logical service instance
# tacker plugin will use (list value)
#monitor_driver = ping,http_ping

# Alarm monitoring driver to communicate with Hosting VNF/logical service
# instance tacker plugin will use (list value)
#alarm_monitor_driver = ceilometer

# App monitoring driver to communicate with Hosting VNF/logical service
# instance tacker plugin will use (list value)
#app_monitor_driver = zabbix

#
# From tacker.vnfm.plugin
#

# MGMT driver to communicate with Hosting VNF/logical service instance tacker
# plugin will use (list value)
#mgmt_driver = noop,openwrt

# Time interval to wait for VM to boot (integer value)
#boot_wait = 30

# Hosting vnf drivers tacker plugin will use (list value)
#infra_driver = noop,openstack,kubernetes

# Hosting vnf drivers tacker plugin will use (list value)
#policy_action = autoscaling,respawn,vdu_autoheal,log,log_and_kill


[vim_keys]
use_barbican = True

#
# From tacker.nfvo.drivers.vim.openstack_driver
#

# Directory path to store fernet keys. (string value)
#openstack = /etc/tacker/vim/fernet_keys

# Use barbican to encrypt vim password if True, save vim credentials in local
# file system if False (boolean value)
#use_barbican = false


[vim_monitor]

#
# From tacker.nfvo.drivers.vim.openstack_driver
#

# Number of ICMP packets to send (string value)
#count = 1

# Number of seconds to wait for a response (string value)
#timeout = 1

# Number of seconds to wait between packets (string value)
#interval = 1


[vnf_lcm]
# Vnflcm options group

#
# From tacker.conf
#

# endpoint_url (string value)
#endpoint_url = http://localhost:9890/

# Number of subscriptions (integer value)
#subscription_num = 100

# Number of retries (integer value)
#retry_num = 3

# Retry interval (sec) (integer value)
#retry_wait = 10

# Retry timeout (sec) (integer value)
#retry_timeout = 10

# Test callbackUri (boolean value)
#test_callback_uri = true


[vnf_package]
vnf_package_csar_path = /opt/stack/data/tacker/vnfpackage

#
# Options under this group are used to store vnf packages in glance store.
#
# From tacker.conf
#

# Path to store extracted CSAR file (string value)
#vnf_package_csar_path = /var/lib/tacker/vnfpackages/

#
# Maximum size of CSAR file a user can upload in GB.
#
# A CSAR file upload greater than the size mentioned here would result
# in a CSAR upload failure. This configuration option defaults to
# 1024 GB (1 TiB).
#
# NOTES:
#     * This value should only be increased after careful
#       consideration and must be set less than or equal to
#       8 EiB (~9223372036).
#     * This value must be set with careful consideration of the
#       backend storage capacity. Setting this to a very low value
#       may result in a large number of image failures. And, setting
#       this to a very large value may result in faster consumption
#       of storage. Hence, this must be set according to the nature of
#       images created and storage capacity available.
#
# Possible values:
#     * Any positive number less than or equal to 9223372036854775808
# (floating point value)
# Minimum value: 1e-06
# Maximum value: 9223372036
#csar_file_size_cap = 1024

#
# Secure hashing algorithm used for computing the 'hash' property.
#
# Possible values:
#     * sha256, sha512
#
# Related options:
#     * None
# (string value)
#hashing_algorithm = sha512

# List of items to get from top-vnfd (list value)
#get_top_list = tosca_definitions_version,description,metadata

# Exclude node from node_template (list value)
#exclude_node = VNF

# List of types to get from lower-vnfd (list value)
#get_lower_list = tosca.nodes.nfv.VNF,tosca.nodes.nfv.VDU.Tacker

# List of inputs to delete from lower-vnfd (list value)
#del_input_list = descriptor_id,descriptor_version,provider,product_name,software_version,vnfm_info,flavour_id,flavour_description


[agent]
root_helper = sudo /usr/local/bin/tacker-rootwrap /etc/tacker/rootwrap.conf

From cmccarth at mathworks.com  Fri Jun 4 15:15:39 2021
From: cmccarth at mathworks.com (Christopher McCarthy)
Date: Fri, 4 Jun 2021 15:15:39 +0000
Subject: [ops] rabbitmq queues for nova versioned notifications queues keep filling up
In-Reply-To:
References:
Message-ID:

Hi Ajay,

We work around this by setting a TTL on our notifications queues via a
RabbitMQ policy definition. We include the following in our
definitions.json for RabbitMQ:

"policies":[
 {"vhost": "/", "name": "notifications-ttl",
  "pattern": "^(notifications|versioned_notifications)\\.",
  "apply-to": "queues", "definition": {"message-ttl":600000},
  "priority":0}
]

This expires messages in the notifications and versioned_notifications
queues after 10 minutes, which seems to work well for us.
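If editing definitions.json is not convenient in your deployment, the same
policy can also be applied at runtime with rabbitmqctl; roughly (the policy
name is just our own label, and adjust the vhost to match your setup):

  rabbitmqctl set_policy -p / --apply-to queues --priority 0 \
      notifications-ttl '^(notifications|versioned_notifications)\.' \
      '{"message-ttl":600000}'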
I believe we initially picked up this workaround from this[1] bug report.

Hope this helps,

- Chris

--
Christopher McCarthy
MathWorks
cmccarth at mathworks.com

[1] https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1737170

Date: Wed, 2 Jun 2021 22:39:54 -0000
From: "Ajay Tikoo (BLOOMBERG/ 120 PARK)"
To: openstack-discuss at lists.openstack.org
Subject: [ops] rabbitmq queues for nova versioned notifications queues keep filling up
Message-ID: <60B808BA00D0068401D80001_0_3025859 at msclnypmsgsv04>
Content-Type: text/plain; charset="utf-8"

I am not sure if this is the right channel/format to post this question,
so my apologies in advance if this is not the right place.

We are using OpenStack Rocky. Watcher needs versioned notifications to be
enabled. However, after enabling versioned notifications, the queues for
versioned_notifications (info and error) keep filling up. Based on the
updates to the Watcher cluster data model, it appears that Watcher is
consuming messages, but they still linger in these queues. So with nova
versioned notifications disabled, Watcher is unable to update the cluster
data model (between rebuild intervals), and with them enabled, it keeps
filling up the MQ queues. What is the best way to resolve this?

Thank you,
Ajay Tikoo

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From luke.camilleri at zylacomputing.com  Fri Jun 4 15:16:18 2021
From: luke.camilleri at zylacomputing.com (Luke Camilleri)
Date: Fri, 4 Jun 2021 17:16:18 +0200
Subject: [Victoria][magnum][octavia]ingress-controller health degraded
Message-ID:

Hi Everyone, we have the following problem and are trying to identify the
root cause.

We have deployed an ingress and an ingress-controller (using the following
deployment file
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml).
The ingress-controller deployment is successful with 1 replica of the
ingress-controller pod, and the Octavia LoadBalancer is successfully
created and points to the NodePorts published on each node. This was
showing only 1 member in the LoadBalancers screen as healthy/online.

I increased the replicas to 3. From the LoadBalancers screen in Horizon, I
can see the service reported as degraded, and only the Kubernetes worker
nodes that have ingress-controller pod/s deployed on them are reported as
online. This behaviour is not the same as a standard deployment, where the
NodePort communicates with the ClusterIP:Port of the internal service and
hence, once there is a single pod UP, the NodePorts are shown as up when
queried:

ingress-nginx-controller-74fd5565fb-d86h9   1/1   Running   0   14h   10.100.3.13   k8s-c1-prod-2-klctfd24lze6-node-1
ingress-nginx-controller-74fd5565fb-h9985   1/1   Running   0   15h   10.100.1.8    k8s-c1-prod-2-klctfd24lze6-node-0
ingress-nginx-controller-74fd5565fb-qkddq   1/1   Running   0   15h   10.100.1.7    k8s-c1-prod-2-klctfd24lze6-node-0

The below shows the status of the members in the pool with replica count 3:

| 834750fe-e43e-408d-abc3-aad3dcde0fdb | member_0_node-0 | id | ACTIVE | 192.168.1.75  | 32054 | ONLINE | 1 |
| 1ddffd80-acae-40b3-a2de-19be0a69a039 | member_0_node-2 | id | ACTIVE | 192.168.1.90  | 32054 | ERROR  | 1 |
| d4e4baa4-0a69-4775-8ea0-165a207f11ae | member_0_node-1 | id | ACTIVE | 192.168.1.148 | 32054 | ONLINE | 1 |

In fact, to have the deployment spread across all 3 nodes, I had to
increase the replicas until all 3 nodes had at least one instance of the
ingress controller running on them (in this case it was replica 5).
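One thing that may explain this, and which I still need to rule out, is
the Service spec in that deploy.yaml: the provider/cloud manifests
normally set externalTrafficPolicy: Local on the LoadBalancer Service (to
preserve the client source IP), and with Local the NodePort only answers
on nodes that actually host an ingress-controller pod, which would match
exactly what Octavia is reporting. Assuming the default names from that
manifest, this shows the current setting:

  kubectl -n ingress-nginx get svc ingress-nginx-controller \
      -o jsonpath='{.spec.externalTrafficPolicy}'

If it prints Local, switching the policy to Cluster should make every
NodePort answer (at the cost of losing the original source IP), so all
pool members would report ONLINE.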
I do not believe this is an Octavia issue, as the health check is done
against a TCP port number, which is the NodePort exposed by Kubernetes,
and if the ingress-controller is not running on that node, the port check
fails. I added the octavia label mainly to get some input that may confirm
the correct behaviour of Octavia.

I am expecting to receive a healthy state when I check the members of the
pool, since I can query the ClusterIP from any worker node on ports 80 and
443 and the outcome is always successful, but not when using the NodePort.

Thanks in advance

From marios at redhat.com  Fri Jun 4 15:19:10 2021
From: marios at redhat.com (Marios Andreou)
Date: Fri, 4 Jun 2021 18:19:10 +0300
Subject: [TripleO] next irc meeting Tuesday 08 June @ 1400 UTC in OFTC #tripleo
Message-ID:

Reminder that the next TripleO irc meeting is:

** Tuesday 08 June 1400 UTC in OFTC irc channel: #tripleo **
** https://wiki.openstack.org/wiki/Meetings/TripleO **
** https://etherpad.opendev.org/p/tripleo-meeting-items **

Add anything you want to highlight at
https://etherpad.opendev.org/p/tripleo-meeting-items

This can be recently completed things, ongoing review requests, blocking
issues, or anything else TripleO you want to share.

Our last meeting was on May 25 - you can find the logs here:
http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-05-25-14.00.html

Hope you can make it on Tuesday,

regards, marios

From derekokeeffe85 at yahoo.ie  Fri Jun 4 15:20:19 2021
From: derekokeeffe85 at yahoo.ie (Derek O keeffe)
Date: Fri, 4 Jun 2021 15:20:19 +0000 (UTC)
Subject: [novnc-console] Cannot connect to console
References: <408400332.2018688.1622820019304.ref at mail.yahoo.com>
Message-ID: <408400332.2018688.1622820019304 at mail.yahoo.com>

Hi all,

This is my first post to this list, so excuse me if I have not submitted
correctly.

I have installed OpenStack Victoria manually as a multi-node setup: a
controller & 3 computes. Everything works fine and as expected. I have
secured Horizon with Let's Encrypt certs (for now) and again all is fine.
When I did a test deploy I also used those certs to load the noVNC console
securely and it worked.

My problem with my new deploy is that the console will not load no matter
what I try. I get the following error when I enable debug mode in nova:

2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy Traceback (most recent call last):
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy   File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 691, in top_new_client
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy     client = self.do_handshake(startsock, address)
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy   File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 578, in do_handshake
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy     context.load_cert_chain(certfile=self.cert, keyfile=self.key, password=self.key_password)
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy PermissionError: [Errno 13] Permission denied

If I don't have debug enabled I just get the permission denied error. I
have switched to the nova user and confirmed I can access the certs
directory and read the certs. All my nova services are running fine as
well.
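For what it's worth, the failing call can be reproduced outside of nova
with a one-liner run as the nova user (a sketch, with <domain> standing in
for the real certificate directory):

  sudo -u nova python3 -c "import ssl; \
      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER); \
      ctx.load_cert_chain('/etc/letsencrypt/live/<domain>/fullchain.pem', \
      '/etc/letsencrypt/live/<domain>/privkey.pem')"

If that raises the same PermissionError, the problem is file permissions
somewhere along those paths rather than anything in nova itself.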
My controller conf is the following:

[default]
ssl_only=true
cert=/etc/letsencrypt/live/ /fullchain.pem
key=/etc/letsencrypt/live/ /privkey.pem

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = https://:6080/vnc_auto.html

My compute config is the following:

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = https://:6080/vnc_auto.html

If anyone could help, that would be really appreciated, as would any
advice on further troubleshooting! I cannot see anything else in any logs,
but I might not be looking in the right place. Thank you in advance.

Derek

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mark at stackhpc.com  Fri Jun 4 15:21:52 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Fri, 4 Jun 2021 16:21:52 +0100
Subject: [kolla-ansible] kolla-ansible destroy
In-Reply-To: <476495C0-A42E-4B74-AF46-13FF814C974B at poczta.onet.pl>
References: <476495C0-A42E-4B74-AF46-13FF814C974B at poczta.onet.pl>
Message-ID:

On Fri, 4 Jun 2021 at 14:54, at wrote:
>
> Hi
> is kolla-ansible destroy "--tags" aware? What is the best way to remove
> all unwanted containers, configuration files, logs, etc. when you want
> to remove some service or move it to another node?
> Regards
> Adam Tomas

Hi Adam,

Currently it is not aware of tags, and will remove all services. We have
talked about improving it in the past, but it needs someone to work on it.

Thanks,
Mark

From bkslash at poczta.onet.pl  Fri Jun 4 15:31:40 2021
From: bkslash at poczta.onet.pl (at)
Date: Fri, 4 Jun 2021 17:31:40 +0200
Subject: [kolla-ansible] kolla-ansible destroy
In-Reply-To:
References:
Message-ID: <0DEAC90A-9F1A-4910-AA6A-02A36E3B55DD at poczta.onet.pl>

Hi Mark, thank you for the answer. So what is the "cleanest" way to remove
some service? For example, I've moved gnocchi and ceilometer from the
controllers to dedicated nodes, but there's a lot of "leftovers" on the
controllers - it won't be easy to find every one...

Best regards
Adam Tomas
P.S. kolla-ansible is the best OpenStack deployment method anyway :)

> Message written by Mark Goddard on 04.06.2021 at 17:21:
>
> On Fri, 4 Jun 2021 at 14:54, at wrote:
>>
>> Hi
>> is kolla-ansible destroy "--tags" aware? What is the best way to remove
>> all unwanted containers, configuration files, logs, etc. when you want
>> to remove some service or move it to another node?
>> Regards
>> Adam Tomas
>
> Hi Adam,
>
> Currently it is not aware of tags, and will remove all services. We
> have talked about improving it in the past, but it needs someone to
> work on it.
>
> Thanks,
> Mark

From mark at stackhpc.com  Fri Jun 4 15:46:17 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Fri, 4 Jun 2021 16:46:17 +0100
Subject: [kolla-ansible] kolla-ansible destroy
In-Reply-To: <0DEAC90A-9F1A-4910-AA6A-02A36E3B55DD at poczta.onet.pl>
References: <0DEAC90A-9F1A-4910-AA6A-02A36E3B55DD at poczta.onet.pl>
Message-ID:

On Fri, 4 Jun 2021 at 16:31, at wrote:
>
> Hi Mark, thank you for the answer. So what is the "cleanest" way to
> remove some service? For example, I've moved gnocchi and ceilometer from
> the controllers to dedicated nodes, but there's a lot of "leftovers" on
> the controllers - it won't be easy to find every one...

Removing the containers will at least stop it from running, but you may
also want to remove users & endpoints from keystone, remove container
configuration from /etc/kolla/<service>, and potentially other
service-specific stuff.
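For the gnocchi/ceilometer case, a rough manual sweep on the old
controllers might look like this (container and directory names below are
the defaults - check what your deployment actually created):

  docker ps -a | grep -E 'gnocchi|ceilometer'   # see what is left behind
  docker rm -f gnocchi_api gnocchi_metricd gnocchi_statsd
  sudo rm -rf /etc/kolla/gnocchi-* /etc/kolla/ceilometer-*
  openstack endpoint list --service gnocchi     # endpoints to clean up

Treat that as a sketch rather than a complete list - logs under
/var/log/kolla and any named docker volumes may also need attention.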
> Best regards
> Adam Tomas
> P.S. kolla-ansible is the best OpenStack deployment method anyway :)
>
> > Message written by Mark Goddard on 04.06.2021 at 17:21:
> >
> > On Fri, 4 Jun 2021 at 14:54, at wrote:
> >>
> >> Hi
> >> is kolla-ansible destroy "--tags" aware? What is the best way to
> >> remove all unwanted containers, configuration files, logs, etc. when
> >> you want to remove some service or move it to another node?
> >> Regards
> >> Adam Tomas
> >
> > Hi Adam,
> >
> > Currently it is not aware of tags, and will remove all services. We
> > have talked about improving it in the past, but it needs someone to
> > work on it.
> >
> > Thanks,
> > Mark

From mark at stackhpc.com  Fri Jun 4 15:47:07 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Fri, 4 Jun 2021 16:47:07 +0100
Subject: [kolla-ansible] kolla-ansible destroy
In-Reply-To:
References: <0DEAC90A-9F1A-4910-AA6A-02A36E3B55DD at poczta.onet.pl>
Message-ID:

On Fri, 4 Jun 2021 at 16:46, Mark Goddard <mark at stackhpc.com> wrote:
>
> On Fri, 4 Jun 2021 at 16:31, at wrote:
> >
> > Hi Mark, thank you for the answer. So what is the "cleanest" way to
> > remove some service? For example, I've moved gnocchi and ceilometer
> > from the controllers to dedicated nodes, but there's a lot of
> > "leftovers" on the controllers - it won't be easy to find every one...
> Removing the containers will at least stop it from running, but you
> may also want to remove users & endpoints from keystone, remove
> container configuration from /etc/kolla/<service>, and potentially
> other service-specific stuff.

See L422 https://etherpad.opendev.org/p/kolla-xena-ptg

> > Best regards
> > Adam Tomas
> > P.S. kolla-ansible is the best OpenStack deployment method anyway :)
> >
> > > Message written by Mark Goddard on 04.06.2021 at 17:21:
> > >
> > > On Fri, 4 Jun 2021 at 14:54, at wrote:
> > >>
> > >> Hi
> > >> is kolla-ansible destroy "--tags" aware? What is the best way to
> > >> remove all unwanted containers, configuration files, logs, etc.
> > >> when you want to remove some service or move it to another node?
> > >> Regards
> > >> Adam Tomas
> > >
> > > Hi Adam,
> > >
> > > Currently it is not aware of tags, and will remove all services. We
> > > have talked about improving it in the past, but it needs someone to
> > > work on it.
> > >
> > > Thanks,
> > > Mark

From DHilsbos at performair.com  Fri Jun 4 15:54:34 2021
From: DHilsbos at performair.com (DHilsbos at performair.com)
Date: Fri, 4 Jun 2021 15:54:34 +0000
Subject: [ops] Windows Guest Resolution
Message-ID: <0670B960225633449A24709C291A5252511AAA5E at COM01.performair.local>

All;

We finally have reliable means to generate Windows images for our
OpenStack, but we're running into a minor annoyance. Our Windows instances
appear to have a resolution cap of 1024x768. It would be extremely useful
to be able to use resolutions higher than this, especially 1920x1080. Is
this possible with OpenStack on KVM?

As a second request: is there a way to add a second virtual monitor? Or,
to achieve the same thing, increase the resolution to 3840x1080?

Thank you,

Dominic L. Hilsbos, MBA
Vice President - Information Technology
Perform Air International Inc.
DHilsbos at PerformAir.com
www.PerformAir.com

From luke.camilleri at zylacomputing.com  Fri Jun 4 17:01:49 2021
From: luke.camilleri at zylacomputing.com (Luke Camilleri)
Date: Fri, 4 Jun 2021 19:01:49 +0200
Subject: [ops] Windows Guest Resolution
In-Reply-To: <0670B960225633449A24709C291A5252511AAA5E at COM01.performair.local>
References: <0670B960225633449A24709C291A5252511AAA5E at COM01.performair.local>
Message-ID: <0a9c4759-8bd7-b1c2-6ca4-b15225f6413a at zylacomputing.com>

I believe you need the guest drivers:
https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers

Right now the instance does not seem to have a Windows driver for the
video hardware and hence will use a generic video driver.
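In addition to installing the driver in the guest, the video model nova
presents can be chosen via an image property; something like this (check
which models your nova/libvirt versions support):

  openstack image set --property hw_video_model=qxl <windows-image>

With the matching QXL or virtio display driver from the virtio-win ISO
installed in the guest, resolutions above 1024x768 (including 1920x1080)
should become selectable. As far as I know, multiple virtual monitors are
not exposed through the standard flavor/image properties.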
On 04/06/2021 17:54, DHilsbos at performair.com wrote:
> All;
>
> We finally have reliable means to generate Windows images for our
> OpenStack, but we're running into a minor annoyance. Our Windows
> instances appear to have a resolution cap of 1024x768. It would be
> extremely useful to be able to use resolutions higher than this,
> especially 1920x1080. Is this possible with OpenStack on KVM?
>
> As a second request: is there a way to add a second virtual monitor? Or,
> to achieve the same thing, increase the resolution to 3840x1080?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President - Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com

From Arkady.Kanevsky at dell.com  Fri Jun 4 17:21:58 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Fri, 4 Jun 2021 17:21:58 +0000
Subject: [Interop] draft of presentation to the board
Message-ID:

https://docs.google.com/presentation/d/1-9H1cTXZxW0vCSTzfBe0aMKbd7nggd8SOHQOT987nFs/

Comments welcome.

Arkady Kanevsky, Ph.D.
SP Chief Technologist & DE
Dell Technologies office of CTO
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 7204955

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From syedammad83 at gmail.com  Fri Jun 4 18:04:59 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Fri, 4 Jun 2021 23:04:59 +0500
Subject: [ops] Windows Guest Resolution
In-Reply-To: <0670B960225633449A24709C291A5252511AAA5E at COM01.performair.local>
References: <0670B960225633449A24709C291A5252511AAA5E at COM01.performair.local>
Message-ID:

Hi,

Please try to install the latest drivers from the links below.

https://www.linuxsysadmins.com/create-windows-server-image-for-openstack/
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.190-1/

Ammad

On Fri, Jun 4, 2021 at 8:59 PM wrote:
> All;
>
> We finally have reliable means to generate Windows images for our
> OpenStack, but we're running into a minor annoyance. Our Windows
> instances appear to have a resolution cap of 1024x768. It would be
> extremely useful to be able to use resolutions higher than this,
> especially 1920x1080. Is this possible with OpenStack on KVM?
>
> As a second request: is there a way to add a second virtual monitor? Or,
> to achieve the same thing, increase the resolution to 3840x1080?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President - Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com

--
Regards,
Syed Ammad Ali

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mnaser at vexxhost.com  Fri Jun 4 18:53:36 2021
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Fri, 4 Jun 2021 14:53:36 -0400
Subject: [novnc-console] Cannot connect to console
In-Reply-To: <408400332.2018688.1622820019304 at mail.yahoo.com>
References: <408400332.2018688.1622820019304.ref at mail.yahoo.com>
 <408400332.2018688.1622820019304 at mail.yahoo.com>
Message-ID:

Hi Derek,

What are the permissions of the Let's Encrypt cert files, and which user
is Nova running as?

sudo -u nova stat /etc/letsencrypt/live/ /fullchain.pem

Will probably fail, so you might wanna fix that!

M

On Fri, Jun 4, 2021 at 11:23 AM Derek O keeffe wrote:
>
> Hi all,
>
> This is my first post to this list, so excuse me if I have not submitted
> correctly.
>
> I have installed OpenStack Victoria manually as a multi-node setup: a
> controller & 3 computes. Everything works fine and as expected. I have
> secured Horizon with Let's Encrypt certs (for now) and again all is
> fine. When I did a test deploy I also used those certs to load the noVNC
> console securely and it worked.
>
> My problem with my new deploy is that the console will not load no
> matter what I try. I get the following error when I enable debug mode in
> nova:
>
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy Traceback (most recent call last):
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy   File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 691, in top_new_client
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy     client = self.do_handshake(startsock, address)
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy   File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 578, in do_handshake
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy     context.load_cert_chain(certfile=self.cert, keyfile=self.key, password=self.key_password)
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy PermissionError: [Errno 13] Permission denied
>
> If I don't have debug enabled I just get the permission denied error. I
> have switched to the nova user and confirmed I can access the certs
> directory and read the certs. All my nova services are running fine as
> well.
>
> My controller conf is the following:
> [default]
> ssl_only=true
> cert=/etc/letsencrypt/live/ /fullchain.pem
> key=/etc/letsencrypt/live/ /privkey.pem
>
> [vnc]
> enabled = true
> server_listen = 0.0.0.0
> server_proxyclient_address = $my_ip
> novncproxy_base_url = https://:6080/vnc_auto.html
>
> My compute config is the following:
> [vnc]
> enabled = true
> server_listen = 0.0.0.0
> server_proxyclient_address = $my_ip
> novncproxy_base_url = https://:6080/vnc_auto.html
>
> If anyone could help, that would be really appreciated, as would any
> advice on further troubleshooting! I cannot see anything else in any
> logs, but I might not be looking in the right place. Thank you in
> advance.
>
> Derek

--
Mohammed Naser
VEXXHOST, Inc.

From melwittt at gmail.com  Fri Jun 4 19:04:04 2021
From: melwittt at gmail.com (melanie witt)
Date: Fri, 4 Jun 2021 12:04:04 -0700
Subject: [nova] stable branches nova-grenade-multinode job broken
Message-ID:

Hi all,

FYI the nova-grenade-multinode CI job is known to be broken on stable
branches at the moment due to a too-new version of Ceph (Pacific) being
installed that is incompatible with the older jobs.
We have fixes proposed with the following patch (and its backports to
victoria/ussuri/train) to convert the job to native Zuul v3:

https://review.opendev.org/c/openstack/nova/+/794345

Once these patches merge, the CI should be passing again.

Cheers,
-melanie

From derekokeeffe85 at yahoo.ie  Fri Jun 4 19:32:12 2021
From: derekokeeffe85 at yahoo.ie (Derek O keeffe)
Date: Fri, 4 Jun 2021 19:32:12 +0000 (UTC)
Subject: [novnc-console] Cannot connect to console
In-Reply-To:
References: <408400332.2018688.1622820019304.ref at mail.yahoo.com>
 <408400332.2018688.1622820019304 at mail.yahoo.com>
Message-ID: <1859406315.1820662.1622835132691 at mail.yahoo.com>

Hi Mohammad,

Thank you for the reply. Below is the output of the command you sent:

sudo -u nova stat /etc/letsencrypt/live//fullchain.pem
  File: /etc/letsencrypt/live//fullchain.pem
  Size: 5616        Blocks: 16         IO Block: 4096   regular file
Device: 802h/2050d  Inode: 7340138     Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-06-04 15:47:48.544545426 +0100
Modify: 2021-06-03 11:50:26.410071017 +0100
Change: 2021-06-03 11:52:39.870554481 +0100
 Birth: -

The permissions on the live directory are:

ls -al /etc/letsencrypt/live/
total 16
drwx--x--x 3 root root 4096 Jun  3 11:53 .
drwxr-xr-x 9 root root 4096 Jun  3 11:50 ..
-rw-r--r-- 1 root root  740 Jun  3 11:50 README
drwxr-xr-x 2 root root 4096 Jun  3 11:50

I changed the owner and group to nova as a test to see if that was the
issue, but it still didn't work. The first error I had was, as you say, a
permissions issue on the live directory: as nova (su nova -s /bin/bash) I
couldn't access that directory, so I changed the permissions and tested as
the nova user (cd /etc/letsencrypt/live & cat fullchain.pem) and I could
read the files in there. I then had the error I sent in the original
email. The funny thing is I had a test deploy and it all worked fine, but
when I redeployed it on new machines with the same OS (Ubuntu 20.04) it
won't work for me.
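One more check worth running here: the files under live/ are symlinks into
/etc/letsencrypt/archive/, which is root-only by default, so the nova user
has to be able to traverse that path too (the privkey in particular).
Something like this (with <domain> as a placeholder for the real
directory) should show exactly which path component, or the key itself, is
denying access:

  sudo -u nova namei -l /etc/letsencrypt/live/<domain>/privkey.pem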
Regards,
Derek

On Friday 4 June 2021, 19:59:31 IST, Mohammed Naser wrote:

Hi Derek,

What are the permissions of the Let's Encrypt cert files, and which user
is Nova running as?

sudo -u nova stat /etc/letsencrypt/live/ /fullchain.pem

Will probably fail, so you might wanna fix that!

M

On Fri, Jun 4, 2021 at 11:23 AM Derek O keeffe wrote:
>
> Hi all,
>
> This is my first post to this list, so excuse me if I have not submitted
> correctly.
>
> I have installed OpenStack Victoria manually as a multi-node setup: a
> controller & 3 computes. Everything works fine and as expected. I have
> secured Horizon with Let's Encrypt certs (for now) and again all is
> fine. When I did a test deploy I also used those certs to load the noVNC
> console securely and it worked.
>
> My problem with my new deploy is that the console will not load no
> matter what I try. I get the following error when I enable debug mode in
> nova:
>
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy Traceback (most recent call last):
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy   File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 691, in top_new_client
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy     client = self.do_handshake(startsock, address)
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy   File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 578, in do_handshake
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy     context.load_cert_chain(certfile=self.cert, keyfile=self.key, password=self.key_password)
> 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy PermissionError: [Errno 13] Permission denied
>
> If I don't have debug enabled I just get the permission denied error. I
> have switched to the nova user and confirmed I can access the certs
> directory and read the certs. All my nova services are running fine as
> well.
>
> My controller conf is the following:
> [default]
> ssl_only=true
> cert=/etc/letsencrypt/live/ /fullchain.pem
> key=/etc/letsencrypt/live/ /privkey.pem
>
> [vnc]
> enabled = true
> server_listen = 0.0.0.0
> server_proxyclient_address = $my_ip
> novncproxy_base_url = https://:6080/vnc_auto.html
>
> My compute config is the following:
> [vnc]
> enabled = true
> server_listen = 0.0.0.0
> server_proxyclient_address = $my_ip
> novncproxy_base_url = https://:6080/vnc_auto.html
>
> If anyone could help, that would be really appreciated, as would any
> advice on further troubleshooting! I cannot see anything else in any
> logs, but I might not be looking in the right place. Thank you in
> advance.
>
> Derek

--
Mohammed Naser
VEXXHOST, Inc.

From dangerzonen at gmail.com  Fri Jun 4 15:48:23 2021
From: dangerzonen at gmail.com (dangerzone ar)
Date: Fri, 4 Jun 2021 23:48:23 +0800
Subject: [Tacker] Tacker Not able to create VIM
In-Reply-To:
References:
Message-ID:

Hi All,

I'm struggling these few days to register a VIM on my Tacker from the
dashboard. What I did was restore the original tacker.conf and set every
option again line by line. When I run the create from the dashboard I'm
still not able to register the VIM, but now I'm getting a new error,
below:

error: failed to register vim: unable to find key file for vim

I also tried from the CLI and still failed, with the error below.

command run:
tacker vim-register --config-file vim_config.yaml --is-default vim-default --os-username admin --os-project-name admin --os-project-domain-name Default --os-auth-url http://192.168.0.121:5000/v3 --os-password c81e0c7a842f40c6

error returned:
Expecting to find domain in user. The server could not comply with the
request since it is either malformed or otherwise incorrect. The client is
assumed to be in error. (HTTP 400) (Request-ID:
req-a980cea4-adf2-4461-a66d-4c6c3bfd2e7d)

Most of the settings in my tacker.conf are based on
https://docs.openstack.org/tacker/latest/install/manual_installation.html

I'm running all-in-one OpenStack Packstack (Queens) and deploying Tacker
manually. I really hope someone can advise and help me, please. Thank you
for your help and support.
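P.S. On the CLI error specifically: "Expecting to find domain in user"
from keystone usually means the user's domain was never sent with the v3
request, so it may be worth one more try with the user domain given
explicitly - the same command with one extra flag, assuming vim-default is
the VIM name:

  tacker vim-register --config-file vim_config.yaml --is-default \
      --os-username admin --os-user-domain-name Default \
      --os-project-name admin --os-project-domain-name Default \
      --os-auth-url http://192.168.0.121:5000/v3 \
      --os-password c81e0c7a842f40c6 vim-default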
*Attached image file and tacker.log for ref.

On Fri, Jun 4, 2021 at 11:05 PM yasufum wrote:
> Hi,
>
> It might be a failure of not tacker but authentication because I've run
> VIM registration as you tried and no failure happened although it's just
> a bit different from your environment. Could you run it from CLI again
> referring [1] if you cannot register from horizon?
>
> [1] https://docs.openstack.org/tacker/latest/install/getting_started.html
>
> Thanks,
> Yasufumi
>
> On 2021/06/03 10:55, dangerzone ar wrote:
> > Hi all,
> >
> > I just deployed Tacker and tried to add my 1st VIM but I'm getting
> > errors as per the attached file. Pls advise how to resolve this
> > problem. Thanks
> >
> > 1. Error: Failed to register VIM: {"error": {"message":
> >    "(http://192.168.0.121:5000/v3/tokens): The resource could not be
> >    found.", "code": 404, "title": "Not Found"}}
> >
> > 2. Error as below -> WARNING keystonemiddleware.auth_token [-]
> >    Authorization failed for token: InvalidToken
> >
> > {"vim": {"vim_project": {"name": "admin", "project_domain_name":
> > "Default"}, "description": "d", "is_default": false, "auth_cred":
> > {"username": "admin", "user_domain_name": "Default", "password":
> > "c81e0c7a842f40c6"}, "auth_url": "http://192.168.0.121:5000/v3",
> > "type": "openstack", "name": "d"}}
> > process_request /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43
> >
> > 2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-]
> > Authorization failed for token: InvalidToken
> >
> > 2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - -
> > [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720
> >
> > Below is my tacker.conf
> >
> > [DEFAULT]
> > auth_strategy = keystone
> > policy_file = /etc/tacker/policy.json
> > debug = True
> > use_syslog = False
> > bind_host = 192.168.0.121
> > bind_port = 9890
> > service_plugins = nfvo,vnfm
> > state_path = /var/lib/tacker
> >
> > [nfvo]
> > vim_drivers = openstack
> >
> > [keystone_authtoken]
> > region_name = RegionOne
> > auth_type = password
> > project_domain_name = Default
> > user_domain_name = Default
> > username = tacker
> > password = password
> > auth_url = http://192.168.0.121:35357
> > auth_uri = http://192.168.0.121:5000
> >
> > [agent]
> > root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf
> >
> > [database]
> > connection = mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: err1.jpg
Type: image/jpeg
Size: 59986 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: tacker.log
Type: application/octet-stream
Size: 11279 bytes
Desc: not available
URL:

From amy at demarco.com  Fri Jun 4 21:00:12 2021
From: amy at demarco.com (Amy Marrich)
Date: Fri, 4 Jun 2021 16:00:12 -0500
Subject: [Diversity] Diversity and Inclusion Meeting Reminder - OFTC
Message-ID:

The Diversity & Inclusion WG invites members of all OIF projects to attend
our next meeting Monday June 7th, at 17:00 UTC in the #openinfra-diversity
channel on OFTC. The agenda can be found at
https://etherpad.openstack.org/p/diversity-wg-agenda. Please feel free to
add any topics you wish to discuss at the meeting.
Thanks,

Amy (spotz)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From haleyb.dev at gmail.com  Fri Jun 4 21:31:36 2021
From: haleyb.dev at gmail.com (Brian Haley)
Date: Fri, 4 Jun 2021 17:31:36 -0400
Subject: [neutron][all] Functional/tempest/rally jobs not running on changes
Message-ID:

Hi,

This might be affecting more than Neutron, so I added the [all] tag, and
it is maybe being discussed in one of the #opendev channels and I missed
it (?), but looking at a recent patch recheck shows a number of jobs not
being run; for example, [0] has just 11 jobs instead of 25 in the previous
run.

So for now I would not approve any changes, since they could merge
accidentally with broken code.

I pinged gmann and he thought [1] might have caused this, and it just
merged... so perhaps a quick revert is in order.

-Brian

[0] https://review.opendev.org/c/openstack/neutron/+/790060
[1] https://review.opendev.org/c/openstack/devstack/+/791541

From gmann at ghanshyammann.com  Fri Jun 4 21:38:35 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 04 Jun 2021 16:38:35 -0500
Subject: [neutron][all] Functional/tempest/rally jobs not running on changes
In-Reply-To:
References:
Message-ID: <179d8f6af22.11d151378215738.8273714246311033146 at ghanshyammann.com>

 ---- On Fri, 04 Jun 2021 16:31:36 -0500 Brian Haley wrote ----
 > Hi,
 >
 > This might be affecting more than Neutron so I added the [all] tag, and
 > is maybe being discussed in one of the #opendev channels and I missed it
 > (?), but looking at a recent patch recheck shows a number of jobs not
 > being run, for example [0] has just 11 jobs instead of 25 in the
 > previous run.
 >
 > So for now I would not approve any changes since they could merge
 > accidentally with broken code.
 >
 > I pinged gmann and he thought [1] might have caused this, and it just
 > merged... so perhaps a quick revert is in order.

yeah, that is the only patch we merged on the devstack side. I have no
clue why 791541 is causing the issue for the check pipeline on master; the
gate pipeline is all fine and all the jobs are running there. Anyway, I
proposed the revert for now, and meanwhile we can debug what went wrong
with 'pragma'.

- https://review.opendev.org/c/openstack/devstack/+/794822

-gmann

 >
 > -Brian
 >
 > [0] https://review.opendev.org/c/openstack/neutron/+/790060
 > [1] https://review.opendev.org/c/openstack/devstack/+/791541

From cboylan at sapwetik.org  Fri Jun 4 22:39:22 2021
From: cboylan at sapwetik.org (Clark Boylan)
Date: Fri, 04 Jun 2021 15:39:22 -0700
Subject: Re: [neutron][all] Functional/tempest/rally jobs not running on changes
In-Reply-To: <179d8f6af22.11d151378215738.8273714246311033146 at ghanshyammann.com>
References: <179d8f6af22.11d151378215738.8273714246311033146 at ghanshyammann.com>
Message-ID:

On Fri, Jun 4, 2021, at 2:38 PM, Ghanshyam Mann wrote:
> ---- On Fri, 04 Jun 2021 16:31:36 -0500 Brian Haley wrote ----
> > Hi,
> >
> > This might be affecting more than Neutron so I added the [all] tag, and
> > is maybe being discussed in one of the #opendev channels and I missed it
> > (?), but looking at a recent patch recheck shows a number of jobs not
> > being run, for example [0] has just 11 jobs instead of 25 in the
> > previous run.
> >
> > So for now I would not approve any changes since they could merge
> > accidentally with broken code.
> >
> > I pinged gmann and he thought [1] might have caused this, and it just
> > merged... so perhaps a quick revert is in order.
> yeah, that is only patch in devstack side we merged. I have not clue why
> 791541 is causing the issue for check pipleline on master. gate pipeline
> is all fine and all the jobs are running there. Anyways I proposed the
> revert for now and meanwhile we can debug what went wrong with 'pragma'.

Reading the docs [2] I think you need to include the current branch too.
That pragma doesn't appear to be additive and instead defines the complete
list. This means you not only need the feature/r1 branch but also master.
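Concretely, if I'm reading [2] right, the pragma in that devstack change
would need to list every branch the jobs should still apply to, something
like:

  - pragma:
      implied-branches:
        - master
        - feature/r1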
> - https://review.opendev.org/c/openstack/devstack/+/794822
>
> -gmann
>
> > -Brian
> >
> > [0] https://review.opendev.org/c/openstack/neutron/+/790060
> > [1] https://review.opendev.org/c/openstack/devstack/+/791541

[2] https://zuul-ci.org/docs/zuul/reference/pragma_def.html#attr-pragma.implied-branches

From fungi at yuggoth.org  Fri Jun 4 23:33:10 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 4 Jun 2021 23:33:10 +0000
Subject: [docs] Project contributor doc latest redirects (was: Upcoming changes to the OpenStack Community IRC this weekend)
In-Reply-To: <179d407fa6f.f6101b54160799.6570320596784902701 at ghanshyammann.com>
References: <179a9b02f78.112177f7423117.4125651508104406943 at ghanshyammann.com>
 <179c2bf0d45.e29da542226792.4648722316244189913 at ghanshyammann.com>
 <179d407fa6f.f6101b54160799.6570320596784902701 at ghanshyammann.com>
Message-ID: <20210604233309.3wvrmwytxph7q2j6 at yuggoth.org>

On 2021-06-03 17:39:23 -0500 (-0500), Ghanshyam Mann wrote:
[...]
> Fungi will add the global redirect link to master/latest version
> in openstack-manual. Project does not need to do this explicitly.
[...]

Proposed now as https://review.opendev.org/794874 if anyone feels up
for reviewing it.
--
Jeremy Stanley

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From radoslaw.piliszek at gmail.com  Sat Jun 5 07:18:18 2021
From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=)
Date: Sat, 5 Jun 2021 09:18:18 +0200
Subject: [docs] Project contributor doc latest redirects (was: Upcoming changes to the OpenStack Community IRC this weekend)
In-Reply-To: <20210604233309.3wvrmwytxph7q2j6 at yuggoth.org>
References: <179a9b02f78.112177f7423117.4125651508104406943 at ghanshyammann.com>
 <179c2bf0d45.e29da542226792.4648722316244189913 at ghanshyammann.com>
 <179d407fa6f.f6101b54160799.6570320596784902701 at ghanshyammann.com>
 <20210604233309.3wvrmwytxph7q2j6 at yuggoth.org>
Message-ID:

On Sat, Jun 5, 2021 at 1:34 AM Jeremy Stanley wrote:
>
> On 2021-06-03 17:39:23 -0500 (-0500), Ghanshyam Mann wrote:
> [...]
> > Fungi will add the global redirect link to master/latest version
> > in openstack-manual. Project does not need to do this explicitly.
> [...]
>
> Proposed now as https://review.opendev.org/794874 if anyone feels up
> for reviewing it.

It's going in!

-yoctozepto

From fungi at yuggoth.org  Sat Jun 5 13:37:19 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sat, 5 Jun 2021 13:37:19 +0000
Subject: [docs] Project contributor doc latest redirects (was: Upcoming changes to the OpenStack Community IRC this weekend)
In-Reply-To:
References: <179a9b02f78.112177f7423117.4125651508104406943 at ghanshyammann.com>
 <179c2bf0d45.e29da542226792.4648722316244189913 at ghanshyammann.com>
 <179d407fa6f.f6101b54160799.6570320596784902701 at ghanshyammann.com>
 <20210604233309.3wvrmwytxph7q2j6 at yuggoth.org>
Message-ID: <20210605133719.g3udhvsotyknanmc at yuggoth.org>

On 2021-06-05 09:18:18 +0200 (+0200), Radosław Piliszek wrote:
> On Sat, Jun 5, 2021 at 1:34 AM Jeremy Stanley wrote:
> >
> > Proposed now as https://review.opendev.org/794874 if anyone feels up
> > for reviewing it.
>
> It's going in!

And some quick spot-checks indicate it's deployed and working as
intended. Let me know if anyone notices any issues with it.
--
Jeremy Stanley

From gmann at ghanshyammann.com  Sat Jun 5 22:45:32 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sat, 05 Jun 2021 17:45:32 -0500
Subject: [all][tc] What's happening in Technical Committee: summary 4th June, 21: Reading: 5 min
Message-ID: <179de5a5664.fe10e867230787.6943675927200289834 at ghanshyammann.com>

Hello Everyone,

Here is last week's summary of the Technical Committee activities.

1. What we completed this week:
=========================
* Retired sushy-cli[1].
* Replaced Freenode references with OFTC[2].
* Added a TC resolution to move the IRC network from Freenode to OFTC[3].

2. TC Meetings:
============
* TC held this week's meeting on Thursday; you can find the full meeting
logs at the link below:
- https://meetings.opendev.org/meetings/tc/2021/tc.2021-06-03-15.00.log.html

* We will have next week's meeting on June 10th, Thursday 15:00 UTC[4].

3. Activities In progress:
==================
TC Tracker for Xena cycle
------------------------------
TC is using the etherpad[5] for Xena cycle working items. We will be
checking and updating the status biweekly in the same etherpad.

Open Reviews
-----------------
* Two open reviews for ongoing activities[6].

Migration from Freenode to OFTC
-----------------------------------------
* All the required work for this migration is tracked in this etherpad[7].
* The TC resolution is merged[3].
* OFTC bot/logging is done. All project teams have moved their
discussions/meetings to OFTC.
* I have communicated the next steps on the openstack-discuss ML[8] as
well as to all PTLs by individual email.
* We are now in the 'Communicate with community' step, where we need to
update all contributor docs etc. Please finish this in your project and
mark the progress in the etherpad[7].
* This migration has also been proposed for the OpenStack news section of
the Open Infra newsletter[9].
* The topic change on Freenode channels will be done on June 11th.

Nomination is open for the 'Y' release naming
------------------------------------------------------
* The Y release naming process has started[10].
Nominations are open until June 10th; feel free to propose names in the
wiki below:
** https://wiki.openstack.org/wiki/Release_Naming/Y_Proposals

Replacing ATC terminology with AC (Active Contributors)
-------------------------------------------------------------------
* The governance charter change for ATC->AC has been merged[11].
* A TC resolution to map ATC to the new term AC from the Bylaws'
perspective is up[12].

4. How to contact the TC:
====================
If you would like to discuss or give feedback to the TC, you can reach out
to us in multiple ways:

1. Email: you can send email with the tag [tc] on the openstack-discuss
ML[13].
2. Weekly meeting: the Technical Committee conducts a weekly meeting every
Thursday at 15 UTC[14].
3. Office hours: the Technical Committee offers a weekly office hour every
Tuesday at 0100 UTC[15].
4. Ping us using the 'tc-members' nickname on the #openstack-tc IRC
channel.

[1] https://review.opendev.org/c/openstack/governance/+/792348
[2] https://review.opendev.org/c/openstack/governance/+/793864
[3] https://review.opendev.org/c/openstack/governance/+/793260
[4] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[5] https://etherpad.opendev.org/p/tc-xena-tracker
[6] https://review.opendev.org/q/project:openstack/governance+status:open
[7] https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc
[8] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022780.html
[9] https://etherpad.opendev.org/p/newsletter-openstack-news
[10] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022383.html
[11] https://review.opendev.org/c/openstack/governance/+/790092
[12] https://review.opendev.org/c/openstack/governance/+/794366
[13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting
[15] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours

-gmann

From gmann at ghanshyammann.com  Sat Jun 5 22:49:56 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sat, 05 Jun 2021 17:49:56 -0500
Subject: [neutron][all] Functional/tempest/rally jobs not running on changes
In-Reply-To: <179d8f6af22.11d151378215738.8273714246311033146 at ghanshyammann.com>
References: <179d8f6af22.11d151378215738.8273714246311033146 at ghanshyammann.com>
Message-ID: <179de5e5d11.108fe1e3c230837.1328507055378743815 at ghanshyammann.com>

 ---- On Fri, 04 Jun 2021 16:38:35 -0500 Ghanshyam Mann wrote ----
 > ---- On Fri, 04 Jun 2021 16:31:36 -0500 Brian Haley wrote ----
 > > Hi,
 > >
 > > This might be affecting more than Neutron so I added the [all] tag, and
 > > is maybe being discussed in one of the #opendev channels and I missed it
 > > (?), but looking at a recent patch recheck shows a number of jobs not
 > > being run, for example [0] has just 11 jobs instead of 25 in the
 > > previous run.
 > >
 > > So for now I would not approve any changes since they could merge
 > > accidentally with broken code.
 > >
 > > I pinged gmann and he thought [1] might have caused this, and it just
 > > merged... so perhaps a quick revert is in order.
 >
 > yeah, that is only patch in devstack side we merged.I have not clue why 791541
 > is causing the issue for check pipleline on master. gate pipeline is all fine and
 > all the jobs are running there. Anyways I proposed the revert for now and meanwhile
 > we can debug what went wrong with 'pragma'.
 >
 > - https://review.opendev.org/c/openstack/devstack/+/794822

This is merged now; please do recheck if any of your patch's check
pipeline did not run the complete jobs.
-gmann

 >
 > -gmann
 >
 > >
 > > -Brian
 > >
 > > [0] https://review.opendev.org/c/openstack/neutron/+/790060
 > > [1] https://review.opendev.org/c/openstack/devstack/+/791541

From ueha.ayumu at fujitsu.com  Mon Jun 7 03:02:26 2021
From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com)
Date: Mon, 7 Jun 2021 03:02:26 +0000
Subject: [Tacker] Tacker Not able to create VIM
In-Reply-To:
References:
Message-ID:

Hi,

Have you installed “barbican” as described in the instructions?
https://docs.openstack.org/tacker/latest/install/manual_installation.html#pre-requisites

I looked at the error log. It seems that the error occurred on a code path
that does not use barbican. Could you add the following settings to
tacker.conf and try again?

[vim_keys]
use_barbican = True

Thanks,
Ueha
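P.S. If you would rather keep use_barbican = False, the fallback path
writes the fernet key under /etc/tacker/vim/fernet_keys, so that directory
has to exist and be writable by whichever user runs tacker-server;
something along these lines (user/group assumed to be tacker here - adjust
for your install):

  sudo mkdir -p /etc/tacker/vim/fernet_keys
  sudo chown -R tacker:tacker /etc/tacker/vim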
*Error as below**à**WARNING keystonemiddleware.auth_token [-] > Authorization failed for token: InvalidToken*** > > ** > > *{"vim": {"vim_project": {"name": "admin", "project_domain_name": > "Default"}, "description": "d", "is_default": false, "auth_cred": > {"username": "admin", "user_domain_name": "Default", "password": > "c81e0c7a842f40c6"}, "auth_url": "**http://192.168.0.121:5000/v3 > **", "type": "openstack", "name": "d"}} > process_request > /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43* > > *2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-] > Authorization failed for token: InvalidToken* > > *2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - - > [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720* > > ** > > Below is my tacker.conf > > [DEFAULT] > auth_strategy = keystone > policy_file = /etc/tacker/policy.json > debug = True > use_syslog = False > bind_host = 192.168.0.121 > bind_port = 9890 > service_plugins = nfvo,vnfm > state_path = /var/lib/tacker > > > [nfvo] > vim_drivers = openstack > > [keystone_authtoken] > region_name = RegionOne > auth_type = password > project_domain_name = Default > user_domain_name = Default > username = tacker > password = password > auth_url = http://192.168.0.121:35357 > auth_uri = http://192.168.0.121:5000 > > [agent] > root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf > > > [database] > connection = > mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8 > ** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Mon Jun 7 06:41:08 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 7 Jun 2021 08:41:08 +0200 Subject: [baremetal-sig][ironic] Tue June 8, 2021, 2pm UTC: The Ironic Python Agent Builder Message-ID: <4af9f9ed-dd59-0463-ec41-aa2f2905aafc@cern.ch> Dear all, The Bare Metal SIG will meet tomorrow Tue June 8, 2021, at 2pm UTC on zoom. The meeting will feature a "topic-of-the-day" presentation by Dmitry Tantsur (dtantsur) with an "Introduction to the Ironic Python Agent Builder" As usual, all details on https://etherpad.opendev.org/p/bare-metal-sig Everyone is welcome, hope to see you there! Cheers, Arne From kira034 at 163.com Mon Jun 7 07:14:13 2021 From: kira034 at 163.com (Hongbin Lu) Date: Mon, 7 Jun 2021 15:14:13 +0800 (CST) Subject: [neutron] Bug deputy report - week of May 31th Message-ID: <79fe35c0.3fb1.179e55266e6.Coremail.kira034@163.com> Hi, I was bug deputy last week. Here is my report regarding bugs from it: Critical: * https://bugs.launchpad.net/neutron/+bug/1930397 neutron-lib from master branch is breaking our UT job * https://bugs.launchpad.net/neutron/+bug/1930401 Fullstack l3 agent tests failing due to timeout waiting until port is active * https://bugs.launchpad.net/neutron/+bug/1930402 SSH timeouts happens very often in the ovn based CI jobs * https://bugs.launchpad.net/neutron/+bug/1930750 pyroute2 >= 0.6.2 fails in pep8 import analysis High: * https://bugs.launchpad.net/neutron/+bug/1930367 "TestNeutronServer" related tests failing frequently Medium: * https://bugs.launchpad.net/neutron/+bug/1930294 Port deletion fails due to foreign key constraint * https://bugs.launchpad.net/neutron/+bug/1930432 [L2] provisioning_block should be added to Neutron internal service port? Or should not? 
* https://bugs.launchpad.net/neutron/+bug/1930443 [LB] Linux Bridge agent always loads trunk extension, regardless of the loaded service plugins * https://bugs.launchpad.net/neutron/+bug/1930926 Failing over OVN dbs can cause original controller to permanently lose connection * https://bugs.launchpad.net/neutron/+bug/1930996 "rpc_response_max_timeout" configuration variable not present in neutron-sriov-nic agent Low: * https://bugs.launchpad.net/neutron/+bug/1930283 PUT /v2.0/qos/policies/{policy_id}/minimum_bandwidth_rules/{rule_id} returns HTTP 501 which is undocumented in the API ref * https://bugs.launchpad.net/neutron/+bug/1930876 "get_reservations_for_resources" execute DB operations without opening a DB context Triaging in progress * https://bugs.launchpad.net/neutron/+bug/1930838 key error in deleted_ports * https://bugs.launchpad.net/neutron/+bug/1930858 OVN central service does not start properly -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Mon Jun 7 07:42:10 2021 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 7 Jun 2021 09:42:10 +0200 Subject: [neutron][all] Functional/tempest/rally jobs not running on changes In-Reply-To: <179de5e5d11.108fe1e3c230837.1328507055378743815@ghanshyammann.com> References: <179d8f6af22.11d151378215738.8273714246311033146@ghanshyammann.com> <179de5e5d11.108fe1e3c230837.1328507055378743815@ghanshyammann.com> Message-ID: Hi, There was a bunch of patches which tried to reduce the number of jobs executed for Neutron: https://review.opendev.org/q/topic:%22improve-neutron-ci%22+(status:open%20OR%20status:merged) worth checking it as perhaps some irrelevant file list needs to be updated. lajoskatona Ghanshyam Mann ezt írta (időpont: 2021. jún. 6., V, 0:51): > ---- On Fri, 04 Jun 2021 16:38:35 -0500 Ghanshyam Mann < > gmann at ghanshyammann.com> wrote ---- > > ---- On Fri, 04 Jun 2021 16:31:36 -0500 Brian Haley < > haleyb.dev at gmail.com> wrote ---- > > > Hi, > > > > > > This might be affecting more than Neutron so I added the [all] tag, > and > > > is maybe being discussed in one of the #opendev channels and I > missed it > > > (?), but looking at a recent patch recheck shows a number of jobs > not > > > being run, for example [0] has just 11 jobs instead of 25 in the > > > previous run. > > > > > > So for now I would not approve any changes since they could merge > > > accidentally with broken code. > > > > > > I pinged gmann and he thought [1] might have caused this, and it > just > > > merged... so perhaps a quick revert is in order. > > > > yeah, that is only patch in devstack side we merged.I have not clue why > 791541 > > is causing the issue for check pipleline on master. gate pipeline is > all fine and > > all the jobs are running there. Anyways I proposed the revert for now > and meanwhile > > we can debug what went wrong with 'pragma'. > > > > - https://review.opendev.org/c/openstack/devstack/+/794822 > > This is merged now, please do recheck if any of your patch's check > pipeline did not run the > complete jobs. > > -gmann > > > > > -gmann > > > > > > > > -Brian > > > > > > [0] https://review.opendev.org/c/openstack/neutron/+/790060 > > > [1] https://review.opendev.org/c/openstack/devstack/+/791541 > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From yasufum.o at gmail.com Mon Jun 7 07:45:28 2021
From: yasufum.o at gmail.com (Yasufumi Ogawa)
Date: Mon, 7 Jun 2021 16:45:28 +0900
Subject: [Tacker] Tacker Not able to create VIM
In-Reply-To: 
References: 
Message-ID: <421ce3c0-87d9-dd63-ea46-65cc5ecd98d5@gmail.com>

Hi Ueha,

It's a little bit strange, because using barbican is a security consideration. Without using barbican, encode_vim_auth() should work, because it just writes the contents of the fernet key to a file under "/etc/tacker/vim/fernet_keys/" if `use_barbican` isn't True.

https://opendev.org/openstack/tacker/src/branch/master/tacker/nfvo/drivers/vim/openstack_driver.py#L224

I think the reason for the error is that the output directory doesn't exist or has the wrong permissions (although changing to use barbican might also work, as you suggested). What do you think?

Thanks,
Yasufumi
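(For reference, a minimal sketch of the directory fix discussed here, assuming tacker-server runs as a "tacker" user with the default /etc/tacker layout - adjust the owner and mode to whatever user actually runs tacker on your host:

    # assumption: tacker-server runs as user "tacker"
    sudo mkdir -p /etc/tacker/vim/fernet_keys
    sudo chown -R tacker:tacker /etc/tacker/vim
    sudo chmod 700 /etc/tacker/vim/fernet_keys

With the directory in place and writable, re-running the VIM registration should let encode_vim_auth() write the fernet key file there.)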
On 2021/06/07 12:02, ueha.ayumu at fujitsu.com wrote:
> Hi
>
> Have you installed "barbican" as described in the instructions?
>
> https://docs.openstack.org/tacker/latest/install/manual_installation.html#pre-requisites
>
> I looked at the error log. It seems that the error occurred on the route
> that does not use barbican.
>
> Could you add the following settings to tacker.conf and try again?
>
> [vim_keys]
> use_barbican = True
>
> Thanks,
> Ueha
>
> *From:* dangerzone ar
> *Sent:* Saturday, June 5, 2021 12:48 AM
> *To:* yasufum
> *Cc:* OpenStack Discuss
> *Subject:* Re: [Tacker] Tacker Not able to create VIM
>
> Hi All,
>
> I've been struggling these few days to register a VIM on my Tacker from the
> dashboard. What I did is: I removed tacker.conf, started again with the original
> file and set each setting back line by line. When I run the create from the
> dashboard I'm still not able to register the VIM, but now I'm getting a
> new error below.
>
> Below is the error
>
> *error: failed to register vim: unable to find key file for vim*
>
> I also tried from the CLI and still failed with the error below
>
> command run:-
> tacker vim-register --config-file vim_config.yaml --is-default
> vim-default --os-username admin --os-project-name admin
> --os-project-domain-name Default --os-auth-url
> http://192.168.0.121:5000/v3 --os-password c81e0c7a842f40c6
>
> error return:-
> Expecting to find domain in user. The server could not comply with the
> request since it is either malformed or otherwise incorrect. The client
> is assumed to be in error. (HTTP 400) (Request-ID:
> req-a980cea4-adf2-4461-a66d-4c6c3bfd2e7d)
>
> Most of the lines in my tacker.conf are based on
> https://docs.openstack.org/tacker/latest/install/manual_installation.html
>
> I'm running all-in-one openstack packstack (queens) and deploying Tacker
> manually. I really hope someone could advise and help me please. Thank
> you for your help and support.
>
> *Attached image file and tacker.log for ref.*
>
> On Fri, Jun 4, 2021 at 11:05 PM yasufum wrote:
> > Hi,
> >
> > It might be a failure not of tacker but of authentication, because I've run
> > VIM registration as you tried and no failure happened, although it's just
> > a bit different from your environment. Could you run it from the CLI again,
> > referring to [1], if you cannot register from horizon?
> >
> > [1] https://docs.openstack.org/tacker/latest/install/getting_started.html
> >
> > Thanks,
> > Yasufumi
> >
> > On 2021/06/03 10:55, dangerzone ar wrote:
> > > Hi all,
> > >
> > > I just deployed Tacker and tried to add my 1st VIM but I'm getting
> > > errors as per the attached file. Pls advise how to resolve this problem. Thanks
> > >
> > >  1. *Error:* Failed to register VIM: {"error": {"message":
> > >     "(http://192.168.0.121:5000/v3/tokens): The resource could not be
> > >     found.", "code": 404, "title": "Not Found"}}
> > >
> > >  2. *Error as below* -> *WARNING keystonemiddleware.auth_token [-]
> > >     Authorization failed for token: InvalidToken*
> > >
> > > *{"vim": {"vim_project": {"name": "admin", "project_domain_name":
> > > "Default"}, "description": "d", "is_default": false, "auth_cred":
> > > {"username": "admin", "user_domain_name": "Default", "password":
> > > "c81e0c7a842f40c6"}, "auth_url": "http://192.168.0.121:5000/v3",
> > > "type": "openstack", "name": "d"}}
> > > process_request /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43*
> > >
> > > *2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-]
> > > Authorization failed for token: InvalidToken*
> > >
> > > *2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - -
> > > [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720*
> > >
> > > Below is my tacker.conf
> > >
> > > [DEFAULT]
> > > auth_strategy = keystone
> > > policy_file = /etc/tacker/policy.json
> > > debug = True
> > > use_syslog = False
> > > bind_host = 192.168.0.121
> > > bind_port = 9890
> > > service_plugins = nfvo,vnfm
> > > state_path = /var/lib/tacker
> > >
> > > [nfvo]
> > > vim_drivers = openstack
> > >
> > > [keystone_authtoken]
> > > region_name = RegionOne
> > > auth_type = password
> > > project_domain_name = Default
> > > user_domain_name = Default
> > > username = tacker
> > > password = password
> > > auth_url = http://192.168.0.121:35357
> > > auth_uri = http://192.168.0.121:5000
> > >
> > > [agent]
> > > root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf
> > >
> > > [database]
> > > connection = mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8

From ueha.ayumu at fujitsu.com Mon Jun 7 08:04:00 2021
From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com)
Date: Mon, 7 Jun 2021 08:04:00 +0000
Subject: RE: [Tacker] Tacker Not able to create VIM
In-Reply-To: <421ce3c0-87d9-dd63-ea46-65cc5ecd98d5@gmail.com>
References: <421ce3c0-87d9-dd63-ea46-65cc5ecd98d5@gmail.com>
Message-ID: 

Hi Yasufumi,

I think so; I suggested using barbican as one possible workaround. As you said, I think it's better to check the directory (/etc/tacker/vim/fernet_keys) first.

> I think the reason for the error is that the output directory doesn't exist or has the wrong permissions

Thanks,
Ueha

-----Original Message-----
From: Yasufumi Ogawa
Sent: Monday, June 7, 2021 4:45 PM
To: Ueha, Ayumu/植波 歩 ; 'dangerzone ar'
Cc: OpenStack Discuss
Subject: Re: [Tacker] Tacker Not able to create VIM

Hi Ueha,

It's a little bit strange, because using barbican is a security consideration. Without using barbican, encode_vim_auth() should work, because it just writes the contents of the fernet key to a file under "/etc/tacker/vim/fernet_keys/" if `use_barbican` isn't True.

https://opendev.org/openstack/tacker/src/branch/master/tacker/nfvo/drivers/vim/openstack_driver.py#L224

I think the reason for the error is that the output directory doesn't exist or has the wrong permissions (although changing to use barbican might also work, as you suggested). What do you think?

Thanks,
Yasufumi

On 2021/06/07 12:02, ueha.ayumu at fujitsu.com wrote:
> Hi
>
> Have you installed "barbican" as described in the instructions?
>
> https://docs.openstack.org/tacker/latest/install/manual_installation.html#pre-requisites
>
> I looked at the error log.
It seems that the error occurred on the > route that does not use barbican. > > Could you add the following settings to tacker.conf and try again? > > [vim_keys] > > use_barbican = True > > Thanks, > > Ueha > > *From:*dangerzone ar > *Sent:* Saturday, June 5, 2021 12:48 AM > *To:* yasufum > *Cc:* OpenStack Discuss > *Subject:* Re: [Tacker] Tacker Not able to create VIM > > Hi All, > > I'm struggling these few days to register vim on my Tacker from the > dashboard. What I did is I removed tacker.conf and with the original > file and set back the setting each line..when I run the create from > dashboard I'm still not able to register the VIM but now I'm getting a > new error below. > > Below is the error > > *error: failed to register vim: unable to find key file for vim* > > I also tried from cli and still failed with error below > > command run:- > > tacker vim-register --config-file vim_config.yaml --is-default > vim-default --os-username admin --os-project-name admin > --os-project-domain-name Default --os-auth-url > http://192.168.0.121:5000/v3 >  --os-password c81e0c7a842f40c6 > > error return:- > Expecting to find domain in user. The server could not comply with the > request since it is either malformed or otherwise incorrect. The > client is assumed to be in error. (HTTP 400) (Request-ID: > req-a980cea4-adf2-4461-a66d-4c6c3bfd2e7d) > > Most of the line in tacker.conf setting is based on > > https://docs.openstack.org/tacker/latest/install/manual_installation.h > tml > html> > > I'm running all-in-one openstack packstack (queens) and deploying > Tacker manually. I really hope someone could advise and help me > please. Thank you for your help and support. > > **Attached image file and tacker.log for ref.* > > On Fri, Jun 4, 2021 at 11:05 PM yasufum > wrote: > > Hi, > > It might be a failure of not tacker but authentication because I've run > VIM registration as you tried and no failure happened although it's > just > a bit different from your environment. Could you run it from CLI again > referring [1] if you cannot register from horizon? > > [1] > > https://docs.openstack.org/tacker/latest/install/getting_started.html > > > > Thanks, > Yasufumi > > On 2021/06/03 10:55, dangerzone ar wrote: > > Hi all, > > > > I just deployed Tacker and tried to add my 1^st VIM but I’m getting > > errors as per attached file. Pls advise how to resolve this problem. Thanks > > > >  1. *Error: *Failed to register VIM: {"error": {"message": > >     "(http://192.168.0.121:5000/v3/tokens > > >      >): The resource could not be > >     found.", "code": 404, "title": "Not Found"}} > > > >  2. 
*Error as below* -> *WARNING keystonemiddleware.auth_token [-]
> >     Authorization failed for token: InvalidToken*
> >
> > *{"vim": {"vim_project": {"name": "admin", "project_domain_name":
> > "Default"}, "description": "d", "is_default": false, "auth_cred":
> > {"username": "admin", "user_domain_name": "Default", "password":
> > "c81e0c7a842f40c6"}, "auth_url": "http://192.168.0.121:5000/v3",
> > "type": "openstack", "name": "d"}}
> > process_request /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43*
> >
> > *2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-]
> > Authorization failed for token: InvalidToken*
> >
> > *2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - -
> > [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720*
> >
> > Below is my tacker.conf
> >
> > [DEFAULT]
> > auth_strategy = keystone
> > policy_file = /etc/tacker/policy.json
> > debug = True
> > use_syslog = False
> > bind_host = 192.168.0.121
> > bind_port = 9890
> > service_plugins = nfvo,vnfm
> > state_path = /var/lib/tacker
> >
> > [nfvo]
> > vim_drivers = openstack
> >
> > [keystone_authtoken]
> > region_name = RegionOne
> > auth_type = password
> > project_domain_name = Default
> > user_domain_name = Default
> > username = tacker
> > password = password
> > auth_url = http://192.168.0.121:35357
> > auth_uri = http://192.168.0.121:5000
> >
> > [agent]
> > root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf
> >
> > [database]
> > connection = mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8

From mark at stackhpc.com Mon Jun 7 08:10:53 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Mon, 7 Jun 2021 09:10:53 +0100
Subject: [kolla] [kolla-ansible] Magnum UI
In-Reply-To: 
References: 
Message-ID: 

On Thu, 3 Jun 2021 at 09:49, Alexandros Soumplis wrote:
>
> Hi all,
>
> Before submitting a bug against launchpad I would like to ask if anyone
> else can confirm this issue. I deploy Magnum on the Victoria release using
> the ubuntu binary containers and I do not have the UI installed.
> Changing to the source binaries, the UI is installed and working as
> expected. Is this a config error, a bug or a feature maybe :)

Hi Alexandros,

Thank you for raising this issue. It was an easy fix, so I raised a bug [1] and fixed it [2].

Mark

[1] https://bugs.launchpad.net/kolla/+bug/1931075
[2] https://review.opendev.org/c/openstack/kolla/+/795054

> Thank you,
> a.

From geguileo at redhat.com Mon Jun 7 08:54:09 2021
From: geguileo at redhat.com (Gorka Eguileor)
Date: Mon, 7 Jun 2021 10:54:09 +0200
Subject: [victoria][cinder ?] Dell Unity + Iscsi
In-Reply-To: 
References: 
Message-ID: <20210607085409.5heiwmvt67nv4kwa@localhost>

On 01/06, Albert Shih wrote:
> Hi everyone
>
> I have a small openstack configuration with 4 compute nodes and a Dell Unity 480F for the storage.
>
> I'm using cinder with iscsi.
>
> Everything works when I create an instance. But some instances become
> unresponsive after a while. When I check on the hypervisor I can see
>
> [888240.310461] sd 14:0:0:2: [sdb] tag#120 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [888240.310493] sd 14:0:0:2: [sdb] tag#120 Sense Key : Illegal Request [current]
> [888240.310502] sd 14:0:0:2: [sdb] tag#120 Add. Sense: Logical unit not supported
> [888240.310510] sd 14:0:0:2: [sdb] tag#120 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
> [888240.310519] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
> [888240.311045] sd 14:0:0:2: [sdb] tag#121 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [888240.311050] sd 14:0:0:2: [sdb] tag#121 Sense Key : Illegal Request [current]
> [888240.311065] sd 14:0:0:2: [sdb] tag#121 Add. Sense: Logical unit not supported
> [888240.311070] sd 14:0:0:2: [sdb] tag#121 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
> [888240.311074] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
> [888240.342482] sd 14:0:0:2: [sdb] tag#70 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [888240.342490] sd 14:0:0:2: [sdb] tag#70 Sense Key : Illegal Request [current]
> [888240.342496] sd 14:0:0:2: [sdb] tag#70 Add. Sense: Logical unit not supported
>
> I checked on the hypervisor: no errors at all on the ethernet interface.
>
> I checked on the switch: no errors at all on the interface on the switch.
>
> Not sure, but it seems the problem appears more often when the instance has been
> doing nothing for some time.

Hi,

You should first check if the volume is still exported and mapped to the host in Unity's web console.

If it is still properly mapped, you should configure multipathing to make it more resilient.
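(Roughly, these are the knobs involved - a sketch for a Victoria-era libvirt/iSCSI setup, not verified against the Unity driver docs, so double-check the option names there:

    # compute nodes, /etc/nova/nova.conf:
    #   [libvirt]
    #   volume_use_multipath = True
    # the related cinder.conf backend option for volume/image copies is
    #   use_multipath_for_image_xfer = True
    # and multipathd must be enabled on the compute nodes, e.g. on RHEL/CentOS:
    sudo mpathconf --enable --with_multipathd y
)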
If it isn't, you probably should confirm that all nodes have different initiator names (/etc/iscsi/initiatorname.iscsi) and different hostnames (configured in nova's conf file under "host", or at the Linux level if not).

In any case I would turn on debug logs on Nova and Cinder and try to follow what happened with that specific LUN.

Cheers,
Gorka.

> Every firmware and software on the Unity is up to date.
>
> The 4 computes are exactly the same: they run the same version of
> nova-compute, OS and firmware on the hardware.
>
> Any clue? Or a place to search for the problem?
>
> Regards
> --
> Albert SHIH
> Observatoire de Paris
> xmpp: jas at obspm.fr
> Heure local/Local time:
> Tue Jun 1 08:27:42 PM CEST 2021

From skaplons at redhat.com Mon Jun 7 10:46:43 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Mon, 07 Jun 2021 12:46:43 +0200
Subject: [neutron][interop][refstack] New tests and capabilities to track in interop
In-Reply-To: 
References: <6595086.PSTg7GmUaj@p1>
Message-ID: <1982844.YfaRR3DleS@p1>

Hi,

On Wednesday, 2 June 2021 at 14:16:50 CEST, Martin Kopec wrote:
> Hi Slawek,
>
> thanks for getting back to us and sharing new potential tests and
> capabilities from neutron-tempest-plugin.
> Let's first discuss the tests which are in tempest directly, please.
>
> We have done an analysis where we cross-checked the tests we have in our
> guidelines with the ones (api and non-admin ones) present in tempest at the
> tempest checkout we currently use, and here are the results:
> https://etherpad.opendev.org/p/refstack-test-analysis
> There are 110 tempest.api.network tests which we don't have in any
> guideline yet.
> Could you please have a look at the list of the tests? Would it make sense
> to include them in a guideline? Would they extend any network capabilities
> we have in the OpenStack Powered Platform program, or would we need to
> create a new one(s)?
> https://opendev.org/osf/interop/src/branch/master/next.json

Sure. I took a look at that list today. I think that:

* tests from the group tempest.api.network.test_allowed_address_pair could be added to the "networks-l2-CRUD".
Allowed_address_pairs is an API extension, but it has been supported by the ML2 plugin for a very long time, and should be available in all clouds which are using the ML2 plugin.
* tests from tempest.api.network.test_dhcp_ipv6 can probably be included in the "IPAM drivers" section, as I think all clouds should support IPv6 now :)
* tempest.api.network.test_floating_ips - those tests could probably be added to the "Core API L3 extension" section, but I'm not sure what the guidelines for negative tests in refstack are,
* Tests from tempest.api.network.test_networks.BulkNetwork* are similar to the other L2 CRUD tests but are testing basic bulk CRUD operations for Networks, so they could IMO be included in the "networks-l2-CRUD" section
* same for all other tests from the tempest.api.network.test_networks and tempest.api.network.test_networks_negative modules
* Tests from tempest.api.network.test_ports can probably also be included in the "network-l2-CRUD" section, as filtering is supported by the core Neutron db modules,
* Tests from the tempest.api.network.test_routers module can probably go to the network-l3-CRUD section,

Those are the tests which I think may be included somehow in refstack. But I'm not a refstack expert, so please forgive me if I included too many of them here or if some of them are not appropriate to be there :)

> Thank you,
>
> On Mon, 24 May 2021 at 16:33, Slawek Kaplonski wrote:
> > Hi,
> >
> > On Monday, 26 April 2021 at 17:48:08 CEST, Martin Kopec wrote:
> > > Hi everyone,
> > >
> > > I would like to further discuss the topics we covered with the neutron
> > > team during the PTG [1].
> > >
> > > * adding address_group API capability
> > > It's tested by tests in neutron-tempest-plugin. The first question is
> > > whether tests which are not directly in tempest can be a part of a
> > > non-add-on marketing program? It's possible to move them to tempest
> > > though; by the time we do so, could they be marked as advisory?
> > >
> > > * Shall we include QoS tempest tests, since we don't know what share of
> > > vendors enable QoS? Could it be an add-on?
> > > These tests are also in neutron-tempest-plugin; I assume we're talking
> > > about neutron_tempest_plugin.api.test_qos tests.
> > > If we want to include these tests, which program should they belong to?
> > > Do we wanna create a new one?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL:

From skaplons at redhat.com Mon Jun 7 12:15:51 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Mon, 07 Jun 2021 14:15:51 +0200
Subject: [neutron] IRC meetings location
Message-ID: <2488685.8ipJFutIaR@p1>

Hi,

As we discussed at the last team meeting, I proposed [1] and it was merged today. So our meetings, starting this week, will be on the #openstack-neutron channel @OFTC. Please be aware of that change, and see you on the channel at the meetings :)

[1] https://review.opendev.org/c/opendev/irc-meetings/+/794711

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL:

From mkopec at redhat.com Mon Jun 7 12:31:53 2021
From: mkopec at redhat.com (Martin Kopec)
Date: Mon, 7 Jun 2021 14:31:53 +0200
Subject: [valet] Should we retire x/valet?
Message-ID: 

Hi all,

The x/valet project has been inactive for some time now; e.g. there is a review [1] which has been open for more than 2 years and which would solve sanity issues in Tempest. The project is on an exclude list due to that [2]. Also, there haven't been any real changes for the past 3 years [3].

I'm bringing this up to start a discussion about the future of the project.
Should it be retired? Is it used? Are there any plans with it?

[1] https://review.opendev.org/c/x/valet/+/638339
[2] https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L53
[3] https://opendev.org/x/valet/commits/branch/master

Regards,
--
Martin Kopec
Senior Software Quality Engineer
Red Hat EMEA
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mkopec at redhat.com Mon Jun 7 12:31:56 2021
From: mkopec at redhat.com (Martin Kopec)
Date: Mon, 7 Jun 2021 14:31:56 +0200
Subject: [mogan] Should we retire x/mogan?
Message-ID: 

Hi all,

The x/mogan project has been inactive for some time now. It causes sanity issues in Tempest, due to which it's excluded from the sanity check [1], and a review which should help to resolve them [2] has been left untouched with failing gates - which also suggests that the project is not maintained. Plus, there haven't been any real changes done in the past 3 years [3].

I'm bringing this up to start a discussion about the future of the project.
Should it be retired? Is it used? Are there any plans with it?

[1] https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L59
[2] https://review.opendev.org/c/x/mogan/+/767718
[3] https://opendev.org/x/mogan/commits/branch/master

Regards,
--
Martin Kopec
Senior Software Quality Engineer
Red Hat EMEA
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mkopec at redhat.com Mon Jun 7 12:31:55 2021
From: mkopec at redhat.com (Martin Kopec)
Date: Mon, 7 Jun 2021 14:31:55 +0200
Subject: [kingbird] Should we retire x/kingbird?
Message-ID: 

Hi all,

The x/kingbird project has been inactive for some time now; e.g. there is a bug [1] which has been open for more than a year and which would solve sanity issues in Tempest. The project is on an exclude list due to that [2]. Also, there haven't been any real changes for the past 3 years [3].

I'm bringing this up to start a discussion about the future of the project.
Should it be retired? Is it used? Are there any plans with it?

[1] https://bugs.launchpad.net/kingbird/+bug/1869722
[2] https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L54
[3] https://opendev.org/x/kingbird/commits/branch/master

Regards,
--
Martin Kopec
Senior Software Quality Engineer
Red Hat EMEA
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From syedammad83 at gmail.com Mon Jun 7 12:48:17 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Mon, 7 Jun 2021 17:48:17 +0500
Subject: [ops] Windows Guest Resolution
In-Reply-To: 
References: <0670B960225633449A24709C291A5252511AAA5E@COM01.performair.local>
Message-ID: 

Hi Hilsbos,

I have tested one more thing: you need to set the image property hw_video_model to qxl and install the Red Hat QXL controller driver from the virtio-win ISO. Then you will be able to set the resolution to 1080.

- Ammad
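(A sketch of setting that property from the CLI; the image name is just a placeholder:

    openstack image set --property hw_video_model=qxl <your-windows-image>

Instances then need to be rebuilt or freshly booted from the updated image for the new video model to take effect.)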
On Fri, Jun 4, 2021 at 11:04 PM Ammad Syed wrote:
> Hi,
>
> Please try to install the latest drivers from the links below.
>
> https://www.linuxsysadmins.com/create-windows-server-image-for-openstack/
>
> https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.190-1/
>
> Ammad
>
> On Fri, Jun 4, 2021 at 8:59 PM wrote:
>> All;
>>
>> We finally have reliable means to generate Windows images for our
>> OpenStack, but we're running into a minor annoyance. Our Windows instances
>> appear to have a resolution cap of 1024x768. It would be extremely useful
>> to be able to use resolutions higher than this, especially 1920x1080. Is this
>> possible with OpenStack on KVM?
>>
>> As a second request; is there a way to add a second virtual monitor? Or,
>> to achieve the same thing, increase the resolution to 3840x1080?
>>
>> Thank you,
>>
>> Dominic L. Hilsbos, MBA
>> Vice President - Information Technology
>> Perform Air International Inc.
>> DHilsbos at PerformAir.com
>> www.PerformAir.com
>
> --
> Regards,
> Syed Ammad Ali

--
Regards,
Syed Ammad Ali
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From atikoo at bloomberg.net Mon Jun 7 13:56:39 2021
From: atikoo at bloomberg.net (Ajay Tikoo (BLOOMBERG/ 120 PARK))
Date: Mon, 7 Jun 2021 13:56:39 -0000
Subject: Re: [ops] rabbitmq queues for nova versioned notifications queues keep filling up
Message-ID: <60BE259700B103CE00390001_0_33274@msllnjpmsgsv06>

Thank you, Christopher.

From: cmccarth at mathworks.com At: 06/04/21 11:17:23 UTC-4:00
To: openstack-discuss at lists.openstack.org
Subject: Re: [ops] rabbitmq queues for nova versioned notifications queues keep filling up

Hi Ajay,

We work around this by setting a TTL on our notifications queues via a RabbitMQ policy definition. We include the following in our definitions.json for RabbitMQ:

"policies": [
  {"vhost": "/", "name": "notifications-ttl",
   "pattern": "^(notifications|versioned_notifications)\\.",
   "apply-to": "queues",
   "definition": {"message-ttl": 600000},
   "priority": 0}
]

This expires messages in the notifications and versioned_notifications queues after 10 minutes, which seems to work well for us. I believe we initially picked up this workaround from this[1] bug report.

Hope this helps,

- Chris

--
Christopher McCarthy
MathWorks
cmccarth at mathworks.com

[1] https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1737170
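(For reference, the same policy can be applied without editing definitions.json - a sketch using rabbitmqctl, mirroring the JSON above:

    rabbitmqctl set_policy -p / --apply-to queues notifications-ttl \
        '^(notifications|versioned_notifications)\.' \
        '{"message-ttl":600000}'
)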
Date: Wed, 2 Jun 2021 22:39:54 -0000
From: "Ajay Tikoo (BLOOMBERG/ 120 PARK)"
To: openstack-discuss at lists.openstack.org
Subject: [ops] rabbitmq queues for nova versioned notifications queues keep filling up
Message-ID: <60B808BA00D0068401D80001_0_3025859 at msclnypmsgsv04>
Content-Type: text/plain; charset="utf-8"

I am not sure if this is the right channel/format to post this question, so my apologies in advance if this is not the right place.

We are using Openstack Rocky. Watcher needs versioned notifications to be enabled. However, after enabling versioned notifications, the queues for versioned_notifications (info and error) keep filling up. Based on the updates to the Watcher cluster data model, it appears that Watcher is consuming messages, but they still linger in these queues. So with nova versioned notifications disabled, Watcher is unable to update the cluster data model (between rebuild intervals), and with them enabled, it keeps filling up the MQ queues.

What is the best way to resolve this?

Thank you,
Ajay Tikoo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cboylan at sapwetik.org Mon Jun 7 15:50:20 2021
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 07 Jun 2021 08:50:20 -0700
Subject: Re: [mogan] Should we retire x/mogan?
In-Reply-To: 
References: 
Message-ID: 

On Mon, Jun 7, 2021, at 5:31 AM, Martin Kopec wrote:
> Hi all,
>
> x/mogan project has been inactive for some time now. It causes sanity
> issues in Tempest due to which it's excluded from the sanity check [1]
> and a review which should help to resolve them [2] is left untouched
> with failing gates - which also shows that the project is not
> maintained. Plus there haven't been any real changes done in the past 3
> years [3].
>
> I'm bringing this up to start a discussion about the future of the project.
> Should it be retired? Is it used? Are there any plans with it?

Projects are in the x/ namespace because they weren't officially part of OpenStack. I think that puts us in a weird position to decide it should be abandoned. That said, if the original maintainers chime in, I suppose that is one possibility.

As an idea, why not exclude all x/* projects from the project list in generate-tempest-plugins-list.py by default, then explicitly add the ones you know you care about instead? Then you don't have to maintain these lists unless state changes in something you care about, and that might be something you want to take action on.

> [1] https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L59
> [2] https://review.opendev.org/c/x/mogan/+/767718
> [3] https://opendev.org/x/mogan/commits/branch/master
>
> Regards,
> --
> Martin Kopec

From DHilsbos at performair.com Mon Jun 7 16:17:36 2021
From: DHilsbos at performair.com (DHilsbos at performair.com)
Date: Mon, 7 Jun 2021 16:17:36 +0000
Subject: [ops][victoria] Instance Hostname Metadata
Message-ID: <0670B960225633449A24709C291A5252511AE7DA@COM01.performair.local>

All;

Is there an instance metadata value that will set and / or change the instance hostname?

Thank you,

Dominic L. Hilsbos, MBA
Vice President - Information Technology
Perform Air International Inc.
DHilsbos at PerformAir.com
www.PerformAir.com

From johnsomor at gmail.com Mon Jun 7 16:30:52 2021
From: johnsomor at gmail.com (Michael Johnson)
Date: Mon, 7 Jun 2021 09:30:52 -0700
Subject: [oslo][taskflow][tooz][infra] Proposal to retire #openstack-state-management IRC channel
Message-ID: 

Hello OpenStack community,

The recent need to update various pointers to OpenStack IRC channels raised a question about the continued need for the #openstack-state-management channel[1]. This channel is for discussions of the OpenStack state management libraries such as TaskFlow and Tooz. Both libraries fall under the Oslo project. These projects are both in a maintenance phase and discussions in the #openstack-state-management channel have been few and far between.
Today, at the Oslo IRC meeting, we discussed retiring this channel and updating the IRC channel information to the main Oslo IRC channel #openstack-oslo[2]. The intent of this change is to help users find a larger group of people that may be able to help answer questions as well as reduce the number of channels people and bots need to monitor. If you have any questions or concerns about the plan to consolidate this channel into the main Oslo IRC channel, please let us know. I plan to work on updating the documentation update patches (grin) to point to the Oslo channel and will work with OpenDev/Infra to retire the #openstack-state-management channel. Michael [1] https://review.opendev.org/c/openstack/taskflow/+/793992 [2] https://meetings.opendev.org/meetings/oslo/2021/oslo.2021-06-07-15.00.log.html From peiyongz at gmail.com Mon Jun 7 05:51:05 2021 From: peiyongz at gmail.com (Pete Zhang) Date: Sun, 6 Jun 2021 22:51:05 -0700 Subject: Getting error during install openstack-nova-scheduler Message-ID: I hit the following errors and would like to know the fix. Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_bucket.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vmbus.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx4.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4()(64bit) Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) Requires: libsodium.so.23()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gso.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_2.0)(64bit) Error: 
Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-tooz >= 1.58.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_bnxt.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gro.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_latencystats.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx5.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_member.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_nfp.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_tap.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_pci.so.2()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-traits >= 0.16.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pdump.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vdev_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-resource-classes >= 0.4.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_failsafe.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_ring.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_ixgbe.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bitratestats.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: 
librte_mempool_stack.so.1()(64bit)
Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vdev.so.2()(64bit)
Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_qede.so.1()(64bit)
Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vhost.so.2()(64bit)
Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_metrics.so.1()(64bit)
Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_i40e.so.2()(64bit)
Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pci.so.1()(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Error: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: change from 'purged' to 'present' failed: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: [same dependency errors as above, repeated verbatim in the puppet failure output]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
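(A note on the dependency errors above: the librte_* libraries are the DPDK runtime, which the RDO openvswitch 2.12 build on CentOS 7 typically pulls in via the dpdk package; libsodium.so.23 normally comes from EPEL; and python2-tooz / python2-os-traits / python2-os-resource-classes ship in the Train repo itself - so a hand-built local repo is most likely just missing those packages. A sketch of a repo setup that usually satisfies all of them, assuming a CentOS 7 host that can reach the public RDO and EPEL repos; names may differ elsewhere:

    # assumptions: CentOS 7 with RDO Train available
    yum install -y centos-release-openstack-train epel-release
    yum clean all
    yum install -y openstack-nova-scheduler
    # to check which package/repo provides a missing capability:
    yum provides 'librte_eal.so.9()(64bit)' 'libsodium.so.23()(64bit)'
)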
From whayutin at redhat.com Mon Jun 7 17:04:41 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Mon, 7 Jun 2021 11:04:41 -0600
Subject: [tripleo][ci] ovb jobs
Message-ID: 

0/ Update on the OVB jobs across all centos-stream-8 branches.

OVB jobs should be SUCCESSFUL if your overcloud has the rpm hostname-3.20-6.el8 and NOT 3.20-7.el8, e.g. [1]. The hostname package is being fixed via centos packaging.

Related Change:
https://git.centos.org/rpms/hostname/c/e097d2aac3e76eebbaac3ee4c2b95f575f3798fa?branch=c8s

Related Bugs:
https://bugs.launchpad.net/tripleo/+bug/1930849
https://bugzilla.redhat.com/show_bug.cgi?id=1965897
https://bugzilla.redhat.com/show_bug.cgi?id=1956378

The CI team is putting in a temporary patch to force any OVB job to BUILD the overcloud images vs. pulling the prebuilt images, until new overcloud images are rebuilt and promoted [2].

Thanks to Sandeep, Arx, Yatin and Michele!!!

[1] https://logserver.rdoproject.org/61/33961/6/check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/9b98429/logs/overcloud-controller-0/var/log/extra/package-list-installed.txt.gz
https://logserver.rdoproject.org/42/795042/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/4b6d711/logs/overcloud-controller-0/var/log/extra/package-list-installed.txt.gz
[2] https://review.rdoproject.org/r/c/rdo-jobs/+/34022
https://review.rdoproject.org/r/c/config/+/34023/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From peiyong.zhang at salesforce.com Mon Jun 7 18:07:55 2021
From: peiyong.zhang at salesforce.com (Pete Zhang)
Date: Mon, 7 Jun 2021 11:07:55 -0700
Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler
Message-ID: 

I hit this error when installing "openstack-nova-scheduler" of release train. Anyone know the issue/fix? What is librte? Is it another rpm I can download somewhere? And what is the best channel/DL to post this question? Thx. Here is what I did.
1. I did this in a test box.
2. I have puppet-modules installed on the box
3. I have openstack-release-train's rpms on the box and built a local-repo for puppet to install

Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler'
Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: [same librte_*, libsodium and python2-* dependency errors as in the previous post]
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Error: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: change from 'purged' to 'present' failed: [same output repeated]
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From peiyongz at gmail.com Mon Jun 7 18:01:26 2021
From: peiyongz at gmail.com (Pete Zhang)
Date: Mon, 7 Jun 2021 11:01:26 -0700
Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler
Message-ID: 

I hit this error when installing "openstack-nova-scheduler" of release train. Anyone know the issue/fix? What is librte? Is it another rpm I can download somewhere? And what is the best channel/DL to post this question? Thx. Here is what I did.
1. I did this in a test box.
2. I have puppet-modules installed on the box
3.
I have openstack-release-train’s rpms on the box and built a local-repo for puppet to install Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_bucket.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vmbus.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx4.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4()(64bit) Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) Requires: libsodium.so.23()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gso.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-tooz >= 1.58.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_bnxt.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gro.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_latencystats.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx5.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_member.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: 
librte_eal.so.9(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_nfp.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_tap.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_pci.so.2()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-traits >= 0.16.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pdump.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vdev_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-resource-classes >= 0.4.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_failsafe.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_ring.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_ixgbe.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bitratestats.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_stack.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vdev.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_qede.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vhost.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_metrics.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_i40e.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pci.so.1()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Error: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: change from 'purged' to 'present' failed: Execution of '/bin/yum -d 0 -e 
0 -y install openstack-nova-scheduler' returned 1: [the same librte_* dependency errors repeated; trimmed]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From zigo at debian.org  Mon Jun  7 18:52:42 2021
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 7 Jun 2021 20:52:42 +0200
Subject: Missing dependency on librte_xxxx when installing
 openstack-nova-scheduler
In-Reply-To: 
References: 
Message-ID: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org>

On 6/7/21 8:07 PM, Pete Zhang wrote:
>
> I hit this error when installing “openstack-nova-scheduler” of the Train
> release. Does anyone know the issue or the fix? What is librte? Is it
> another rpm I can download somewhere? And what is the best channel/DL to
> post this question to? Thanks. Here is what I did:
>
> 1. I did this in a test box.
> 2. I have the puppet modules installed on the box.
> 3.
> I have openstack-release-train's rpms on the box and built a local repo
> for puppet to install
>
> Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler'
> Error: Execution of '/bin/yum -d 0 -e 0 -y install
> openstack-nova-scheduler' returned 1: Error: Package:
> 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp)
> Requires: librte_mempool_bucket.so.1()(64bit)
> Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp)
> Requires: librte_ethdev.so.11(DPDK_18.05)(64bit)

Hi,

I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though
librte is from dpdk. It's likely a bug if nova-scheduler depends on
openvswitch (but it's probably not a bug if OVS depends on dpdk, if it
was compiled with dpdk support).

Cheers,

Thomas Goirand (zigo)

From juliaashleykreger at gmail.com  Mon Jun  7 19:14:59 2021
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Mon, 7 Jun 2021 12:14:59 -0700
Subject: [RDO] Re: Getting error during install openstack-nova-scheduler
In-Reply-To: 
References: 
Message-ID: 

Greetings Pete,

I'm going to guess your issue may actually be with RDO packaging
dependencies rather than with the nova project itself. I guess there is a
dependency issue for CentOS 7? Are any RDO contributors aware of this? I
suspect you need CentOS Extras enabled, as a couple of the required
files/libraries are sourced from packages in extras, such as openvswitch
itself and dpdk.

-Julia
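As a rough sketch of that suggestion on a stock CentOS 7 box (untested
here; the capability string is taken from the error output above):

    # Ask yum which package provides one of the missing sonames
    yum whatprovides "librte_eal.so.9()(64bit)"
    # dpdk ships in the CentOS 7 "extras" repo; openvswitch links against it
    yum --enablerepo=extras install dpdk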
On Mon, Jun 7, 2021 at 10:09 AM Pete Zhang wrote:
>
> I hit the following errors and would like to know the fix.
>
> [full yum/librte error output trimmed; identical to the listing above]

From mrunge at matthias-runge.de  Mon Jun  7 19:37:51 2021
From: mrunge at matthias-runge.de (Matthias Runge)
Date: Mon, 7 Jun 2021 21:37:51 +0200
Subject: Missing dependency on librte_xxxx when installing
 openstack-nova-scheduler
In-Reply-To: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org>
References: 
Message-ID: 

On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote:
> On 6/7/21 8:07 PM, Pete Zhang wrote:
> > I hit this error when installing “openstack-nova-scheduler” of the
> > Train release. [snip]
>
> I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though
> librte is from dpdk.
> It's likely a bug if nova-scheduler depends on openvswitch (but it's
> probably not a bug if OVS depends on dpdk, if it was compiled with dpdk
> support).

Packages ending with el7 are probably a bit aged already. You may want
to switch to something more recent; RDO only updates the latest release.
I don't know where you got the other packages from, but I can see there
is no direct dependency from openstack-nova-scheduler on openvswitch [1].
On the other side, the openvswitch build indeed requires librte [2].

RDO describes the repositories it uses [3], and you may want to enable
CentOS extras.

[1] https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec
[2] https://cbs.centos.org/koji/rpminfo?rpmID=173673
[3] https://www.rdoproject.org/documentation/repositories/

--
Matthias Runge

From cboylan at sapwetik.org  Mon Jun  7 21:00:03 2021
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 07 Jun 2021 14:00:03 -0700
Subject: [ops][victoria] Instance Hostname Metadata
In-Reply-To: <0670B960225633449A24709C291A5252511AE7DA@COM01.performair.local>
References: <0670B960225633449A24709C291A5252511AE7DA@COM01.performair.local>
Message-ID: 

On Mon, Jun 7, 2021, at 9:17 AM, DHilsbos at performair.com wrote:
> All;
>
> Is there an instance metadata value that will set and / or change the
> instance hostname?

Yes, there are two keys: "hostname" and "name", see
https://docs.openstack.org/nova/latest/user/metadata.html#openstack-format-metadata.
I'm not completely sure what the difference is between the two, but it
looks like "hostname" may be more of an FQDN and "name" a plain hostname.
You then need a tool like cloud-init or Glean to set the name. Glean only
operates on the config drive, which doesn't update after creation, which
means it won't handle name changes. I'm not sure if name changes are
something that cloud-init can watch out for and update on the instance.
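For reference, both keys can be inspected from inside a guest via the
OpenStack-format metadata document (a sketch; assumes the standard
metadata service endpoint is reachable, rather than a config-drive-only
deployment):

    # "hostname" and "name" both appear in the OpenStack-format metadata
    curl -s http://169.254.169.254/openstack/latest/meta_data.json | python -m json.tool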
From peiyong.zhang at salesforce.com  Mon Jun  7 21:27:50 2021
From: peiyong.zhang at salesforce.com (Pete Zhang)
Date: Mon, 7 Jun 2021 14:27:50 -0700
Subject: Missing dependency on librte_xxxx when installing
 openstack-nova-scheduler
Message-ID: 

Julia,

The original email is too long and requires moderator approval, so I have
started a new email thread instead.

The openstack-vswitch puppet module is required (>= 11.0.0 < 12.0.0) by
openstack-neutron (v15.0.0, from openstack-release-train, the release we
chose). I downloaded openstack-vswitch 11.0.0 from
https://forge.puppet.com/modules/openstack/vswitch/11.0.0.

Where can I download the missing librte and its dependencies? I don't
think we have a yum repo for CentOS Extras, so I might need to have those
dependencies downloaded as well.

Thanks a lot!

Pete
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From gmann at ghanshyammann.com  Mon Jun  7 23:24:05 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 07 Jun 2021 18:24:05 -0500
Subject: [mogan] Should we retire x/mogan?
In-Reply-To: 
References: 
Message-ID: <179e8ca57e3.10c36bf3f315912.8329947087135243517@ghanshyammann.com>

 ---- On Mon, 07 Jun 2021 10:50:20 -0500 Clark Boylan wrote ----
 > On Mon, Jun 7, 2021, at 5:31 AM, Martin Kopec wrote:
 > > Hi all,
 > >
 > > x/mogan project has been inactive for some time now. It causes sanity
 > > issues in Tempest, due to which it's excluded from the sanity check
 > > [1], and a review which should help to resolve them [2] is left
 > > untouched with failing gates - which also shows that the project is
 > > not maintained. Plus there haven't been any real changes done in the
 > > past 3 years [3].
 > >
 > > I'm bringing this up to start a discussion about the future of the
 > > project. Should it be retired? Is it used? Are there any plans with it?
 >
 > Projects are in the x/ namespace because they weren't officially part
 > of OpenStack. I think that puts us in a weird position to decide it
 > should be abandoned. That said, if the original maintainers chime in, I
 > suppose that is one possibility.
 >
 > As an idea, why not exclude all x/* projects from the project list in
 > generate-tempest-plugins-list.py by default, then explicitly add the
 > ones you know you care about instead? Then you don't have to maintain
 > these lists unless state changes in something you care about, and that
 > might be something you want to take action on.

Actually we want to cover all the tempest plugins as part of the sanity
check, not just the ones under OpenStack governance, so that we do not
break them, as Tempest is used in a much wider space than just OpenStack.
As these x/ namespace plugins are failing, we will keep adding them to
the inactive plugins exclusion list. If we find any inactive plugins from
the OpenStack namespace, then we can start the discussion about retiring
those plugins.

-gmann

 > > [1]
 > > https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L59
 > > [2] https://review.opendev.org/c/x/mogan/+/767718
 > > [3] https://opendev.org/x/mogan/commits/branch/master
 > >
 > > Regards,
 > > --
 > > Martin Kopec
 > > Senior Software Quality Engineer
 > > Red Hat EMEA

From gmann at ghanshyammann.com  Mon Jun  7 23:53:23 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 07 Jun 2021 18:53:23 -0500
Subject: [all][tc] Technical Committee next weekly meeting on June 10th at
 1500 UTC
Message-ID: <179e8e52bd5.e35e022f316095.4772252122737314526@ghanshyammann.com>

Hello Everyone,

NOTE: TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK
(NOT FREENODE)

Technical Committee's next weekly meeting is scheduled for June 10th at
1500 UTC. If you would like to add topics for discussion, please add them
to the below wiki page by Wednesday, June 9th, at 2100 UTC.

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

-gmann

From peiyong.zhang at salesforce.com  Tue Jun  8 00:23:10 2021
From: peiyong.zhang at salesforce.com (Pete Zhang)
Date: Mon, 7 Jun 2021 17:23:10 -0700
Subject: Missing dependency on librte_xxxx when installing
 openstack-nova-scheduler
In-Reply-To: 
References: 
Message-ID: 

Matthias,

The steps "install python-nova" and "install openstack-nova-scheduler"
need python-openvswitch-2.11, which in turn looks for libopenvswitch,
which is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm. And I have
this copy installed in my local repo.

Trying to figure out which rpm has the librte_* libraries.

BTW, I got most rpms from http://mirror.centos.org/centos/7/cloud/x86_64/,
which has rpms for Train, Stein, Rocky and Queens.
Is there a similar site for later releases like Ussuri or Victoria?
Pete

On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge wrote:
> [Matthias's reply, quoted in full above; trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From whayutin at redhat.com  Tue Jun  8 01:32:41 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Mon, 7 Jun 2021 19:32:41 -0600
Subject: [tripleo][ci] ovb jobs
In-Reply-To: 
References: 
Message-ID: 

Ah, one note on this: one issue is still under investigation with
Vexxhost. If you get a RETRY message from Zuul, you most likely hit the
following bug:
https://bugs.launchpad.net/tripleo/+bug/1930273

On Mon, Jun 7, 2021 at 11:04 AM Wesley Hayutin wrote:
> 0/
>
> Update on the OVB jobs across all centos-stream-8 branches.
> OVB jobs should be SUCCESSFUL if your overcloud has the rpm
> hostname-3.20-6.el8 and NOT 3.20-7.el8, e.g. [1].
>
> The hostname package is being fixed via CentOS packaging.
>
> Related Change:
> https://git.centos.org/rpms/hostname/c/e097d2aac3e76eebbaac3ee4c2b95f575f3798fa?branch=c8s
>
> Related Bugs:
> https://bugs.launchpad.net/tripleo/+bug/1930849
> https://bugzilla.redhat.com/show_bug.cgi?id=1965897
> https://bugzilla.redhat.com/show_bug.cgi?id=1956378
>
> The CI team is putting in a temporary patch to force any OVB job to BUILD
> the overcloud images vs. pulling the prebuilt images, until new overcloud
> images are rebuilt and promoted [2].
>
> Thanks to Sandeep, Arx, Yatin and Michele!!!
>
> [1]
> https://logserver.rdoproject.org/61/33961/6/check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/9b98429/logs/overcloud-controller-0/var/log/extra/package-list-installed.txt.gz
> https://logserver.rdoproject.org/42/795042/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/4b6d711/logs/overcloud-controller-0/var/log/extra/package-list-installed.txt.gz
>
> [2]
> https://review.rdoproject.org/r/c/rdo-jobs/+/34022
> https://review.rdoproject.org/r/c/config/+/34023/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ykarel at redhat.com  Tue Jun  8 06:15:33 2021
From: ykarel at redhat.com (Yatin Karel)
Date: Tue, 8 Jun 2021 11:45:33 +0530
Subject: [RDO] Re: Getting error during install openstack-nova-scheduler
In-Reply-To: 
References: 
Message-ID: 

Hi Pete, Julia,

On Tue, Jun 8, 2021 at 12:50 AM Julia Kreger wrote:
>
> Greetings Pete,
>
> I'm going to guess your issue may actually be with RDO packaging
> dependencies rather than with the nova project itself. [snip]
>
Yes, correct: the issue is not related to nova itself, but to
dependencies and repos. From the error I see a local repo is used for the
Train release, and it looks to be missing deps. Most of the missing deps
are provided by dpdk, which comes from the CentOS Extras repo. So after
fixing that local repo, or by using the OpenStack CentOS repos along with
the CentOS base repos directly, you shouldn't see the issue. On a CentOS
node you can install the Train repos with "yum install
centos-release-openstack-train"; the other CentOS repos need to be kept
enabled to avoid such deps issues.
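A minimal sketch of that setup on a fresh CentOS 7 node (untested here;
assumes the base and extras repos stay enabled so the dpdk/openvswitch
dependencies resolve):

    # Pull in the RDO Train release repo, then install nova-scheduler;
    # base and extras must remain enabled for dpdk and openvswitch deps
    yum install -y centos-release-openstack-train
    yum install -y openstack-nova-scheduler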
> -Julia
>
> On Mon, Jun 7, 2021 at 10:09 AM Pete Zhang wrote:
> >
> > I hit the following errors and would like to know the fix.
> >
> > [full yum/librte error output trimmed; identical to the listing above]

Thanks and Regards
Yatin Karel

From pierre at stackhpc.com  Tue Jun  8 08:18:28 2021
From: pierre at stackhpc.com (Pierre Riteau)
Date: Tue, 8 Jun 2021 10:18:28 +0200
Subject: [CLOUDKITTY] Fix test cases broken by flask >=2.0.1
In-Reply-To: 
References: 
Message-ID: 

Thanks a lot Rafael for fixing this gate blocker!

On Tue, 1 Jun 2021 at 15:55, Rafael Weingärtner wrote:
>
> Hello guys,
> I was reviewing the patch
> https://review.opendev.org/c/openstack/cloudkitty/+/793790, and decided
> to propose an alternative patch
> (https://review.opendev.org/c/openstack/cloudkitty/+/793973).
>
> Could you guys review it?
>
> The idea I am proposing is that, instead of mocking the root object
> ("flask.request"), we address the issue by mocking only the needed
> methods and attributes. This facilitates the understanding of the unit
> test, and also helps people to pin-point problems right away, as the
> mocked attributes/methods are clearly seen in the unit test.
>
> --
> Rafael Weingärtner

From pierre at stackhpc.com  Tue Jun  8 08:41:32 2021
From: pierre at stackhpc.com (Pierre Riteau)
Date: Tue, 8 Jun 2021 10:41:32 +0200
Subject: Missing dependency on librte_xxxx when installing
 openstack-nova-scheduler
In-Reply-To: 
References: 
Message-ID: 

RDO packages for Ussuri and Victoria are available for CentOS 8 [1] and
CentOS Stream 8 [2].

[1] http://mirror.centos.org/centos/8/cloud/x86_64/
[2] http://mirror.centos.org/centos/8-stream/cloud/x86_64/
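For a hand-maintained mirror, a hypothetical .repo entry along these
lines should work (assuming the release subdirectory under [1] is named
openstack-victoria; adjust for the release you want):

    [centos-openstack-victoria]
    name=CentOS-8 - OpenStack Victoria
    baseurl=http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/
    enabled=1
    gpgcheck=0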
> > Pete > > > On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge wrote: >> >> On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote: >> > On 6/7/21 8:07 PM, Pete Zhang wrote: >> > > >> > > I hit this error when installing “openstack-nova-scheduler” of release >> > > train.Anyone knows the issue/fix? >> > > What is the librte? is it another rpm i can download somewhere? >> > > or what is the best channel/DL to post this question, thx.Here is what I >> > > did. >> > > >> > > 1. I did this in a test box. >> > > 2. I have puppet-modules installed on the box >> > > 3. I have openstack-release-train’s rpms on the box and built a >> > > local-repo for puppet to install >> > > >> > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' >> > > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > > Requires: librte_mempool_bucket.so.1()(64bit) >> > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) >> > >> > Hi, >> > >> > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though >> > librte is from dpdk. It's likely a bug if nova-scheduler depends on >> > openvswitch (but it's probably not a bug if OVS depends on dpdk if it >> > was compiled with dpdk support). >> >> Packages ending with el7 are probably a bit aged already. You may want >> to switch to something more recent. RDO is only updating the latest >> release. >> I don't know where you got the other packages from, but I can see there >> is no direct dependency from openstack-nova-scheduler to >> openvswitch[1]. On the other side, the openvswitch build indeed requires >> librte[2]. >> >> RDO describes the used repositories[3], and you may want to enable >> CentOS extras. >> >> [1] https://urldefense.com/v3/__https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNr_Q_7lQ$ >> [2] https://urldefense.com/v3/__https://cbs.centos.org/koji/rpminfo?rpmID=173673__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNRaMe3hM$ >> [3] https://urldefense.com/v3/__https://www.rdoproject.org/documentation/repositories/__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNI36Ef5g$ >> >> -- >> Matthias Runge >> > > > -- > From katkumar at in.ibm.com Tue Jun 8 09:50:32 2021 From: katkumar at in.ibm.com (Katari Kumar) Date: Tue, 8 Jun 2021 09:50:32 +0000 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Tue Jun 8 10:00:57 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 08 Jun 2021 12:00:57 +0200 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: References: Message-ID: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> On Tuesday, 8 June 2021 11:50:32 CEST Katari Kumar wrote: > > Hi, > > our 3rd party CI (IBM Storage CI, based on zuul v2) uses devstack-gate > scripts to install openstack via devstack and run tempest suite on the > storage. > It works with wallaby but fails on latest master as the devstack project > dropped bionic support. > We are currently trying to use ubuntu focal, but facing issues in devstack > gate script. 
> I understand that all 3rdparty drivers should migrate to Zuul v3 to avoid
> such issues. As devstack-gate is not used in Zuul v3, I see no activity in
> devstack-gate to support latest versions.
> But as there are many existing Zuul v2 users, devstack-gate should continue
> to support latest projects.

This has been communicated several times: devstack-gate should have been
dropped in Ussuri already according to the original plan. The plan was
delayed a bit because we had a few relevant legacy jobs around, but the last
bits have been merged recently and there are no further plans to support
devstack-gate for Xena.

On your specific issue: I think we had a few focal-based legacy jobs in
Victoria before dropping them, so you can probably tune the jobs to work
with devstack-gate. But this won't work once Xena is branched.

So please prioritize the migration to Zuul v3, rather than trying to patch
an unsupported software stack. During the last PTG, in the Cinder session, a
3rd party CI operator shared their experience with the migration, using
Software Factory as their Zuul distribution; you can find the recording
here:

https://www.youtube.com/watch?v=hVLpPBldn7g&t=426
https://wiki.openstack.org/wiki/CinderXenaPTGSummary#Using_Software_Factory_for_Cinder_Third_Party_CI

--
Luigi

From rafaelweingartner at gmail.com  Tue Jun  8 11:08:16 2021
From: rafaelweingartner at gmail.com (Rafael Weingärtner)
Date: Tue, 8 Jun 2021 08:08:16 -0300
Subject: [CLOUDKITTY] Fix tests cases broken by flask >=2.0.1
In-Reply-To: 
References: 
Message-ID: 

Glad to help!

On Tue, Jun 8, 2021 at 5:19 AM Pierre Riteau wrote:

> Thanks a lot Rafael for fixing this gate blocker!
>
> On Tue, 1 Jun 2021 at 15:55, Rafael Weingärtner
> wrote:
> >
> > Hello guys,
> > I was reviewing the patch
> https://review.opendev.org/c/openstack/cloudkitty/+/793790, and decided
> to propose an alternative patch (
> https://review.opendev.org/c/openstack/cloudkitty/+/793973).
> >
> > Could you guys review it?
> >
> > The idea I am proposing is that, instead of mocking the root object
> ("flask.request"), we address the issue by mocking only the needed methods
> and attributes. This facilitates the understanding of the unit test, and
> also helps people to pin-point problems right away, as the mocked
> attributes/methods are clearly seen in the unit test.
> >
> > --
> > Rafael Weingärtner

--
Rafael Weingärtner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From syedammad83 at gmail.com  Tue Jun  8 12:07:28 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Tue, 8 Jun 2021 17:07:28 +0500
Subject: [nova][glance] Instance Password Reset
Message-ID: 

Hi,

I am trying to enable guest password reset for Windows and Linux guests. Is
it possible to do it while the instance is running? I am using the Wallaby
release.

From searching, I have found that qemu-guest-agent is required to reset the
password, but I didn't see the image property for it:

https://opendev.org/openstack/glance/src/branch/stable/wallaby/doc/source/admin/useful-image-properties.rst

On another page I found the hw_qemu_guest_agent image property, which looks
like what is needed here, but that page is quite old and may be outdated:

https://wiki.openstack.org/wiki/VirtDriverImageProperties

My objective is to reset the Linux and Windows guest password or inject a
new key pair. I need your help on how to achieve this.

- Ammad
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fernandoperches at gmail.com  Tue Jun  8 12:11:40 2021
From: fernandoperches at gmail.com (Fernando Ferraz)
Date: Tue, 8 Jun 2021 09:11:40 -0300
Subject: 3rd party CI failures with devstack 'master' using devstack-gate
In-Reply-To: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com>
References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com>
Message-ID: 

Hello,

The NetApp CI for Cinder also relies on Zuul v2. We were able to
recently move our jobs to focal, but dropping devstack-gate is a big
concern considering our team size and schedule.
Luigi, could you clarify what would immediately break after xena is
branched?

Fernando

On Tue, Jun 8, 2021 at 7:05 AM Luigi Toscano wrote:

> [snip: Luigi's reply of Jun 8 12:00, quoted in full earlier in this
> digest]
>
> --
> Luigi
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ltoscano at redhat.com  Tue Jun  8 12:42:21 2021
From: ltoscano at redhat.com (Luigi Toscano)
Date: Tue, 08 Jun 2021 14:42:21 +0200
Subject: 3rd party CI failures with devstack 'master' using devstack-gate
In-Reply-To: 
References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com>
Message-ID: <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com>

On Tuesday, 8 June 2021 14:11:40 CEST Fernando Ferraz wrote:
> Hello,
>
> The NetApp CI for Cinder also relies on Zuul v2. We were able to
> recently move our jobs to focal, but dropping devstack-gate is a big
> concern considering our team size and schedule.
> Luigi, could you clarify what would immediately break after xena is
> branched?
>

For example, grenade jobs won't work anymore, because there won't be any new
entry related to stable/xena added here to devstack-vm-gate-wrap.sh:

https://opendev.org/openstack/devstack-gate/src/branch/master/devstack-vm-gate-wrap.sh#L335

I understand that grenade testing is probably not relevant for 3rd party CIs
(it should be, but that's a different discussion), but the main point is
that devstack-gate is already in almost-maintenance mode. The minimal set of
fixes that has been merged was just enough to keep the very few legacy jobs
defined on opendev.org working, and that number is basically 0 at this
point.

This means that there are a ton of potential breakages that can happen at
any time, and the focal change is just one of them (and each one of you, CI
owners, had to fix it on your own). Others may come at any time, and they
won't be detected nor investigated anymore, because we have had de facto no
legacy jobs around since Wallaby.

To summarize: if you use Zuul v2, you have been running for a long while on
an unsupported software stack. The last tiny bits which could be used on
both Zuul v2 and Zuul v3 in legacy mode to ease the transition are
unsupported too.

This problem, I believe, has been communicated periodically by the various
teams, and the time to migrate is... last month. Please hurry up!

Ciao
--
Luigi

From smooney at redhat.com  Tue Jun  8 13:30:12 2021
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 08 Jun 2021 14:30:12 +0100
Subject: [ops] [oslo] [nova] rabbitmq queues for nova versioned notifications queues keep filling up
In-Reply-To: <60BE259700B103CE00390001_0_33274@msllnjpmsgsv06>
References: <60BE259700B103CE00390001_0_33274@msllnjpmsgsv06>
Message-ID: <05786003e021db2132418c0d561bcd5da2795ed9.camel@redhat.com>

On Mon, 2021-06-07 at 13:56 +0000, Ajay Tikoo (BLOOMBERG/ 120 PARK) wrote:
> Thank you, Christopher.
>
> From: cmccarth at mathworks.com At: 06/04/21 11:17:23 UTC-4:00To: openstack-discuss at lists.openstack.org
> Subject: Re: [ops] rabbitmq queues for nova versioned notifications queues keep filling up
>
> Hi Ajay,
>
> We work around this by setting a TTL on our notifications queues via RabbitMQ policy definition. We include the following in our definitions.json for RabbitMQ:
>
> "policies":[
> {"vhost": "/", "name": "notifications-ttl", "pattern": "^(notifications|versioned_notifications)\\.", "apply-to": "queues", "definition": {"message-ttl":600000}, "priority":0} ]
>
Adding the oslo and nova tags, as I'm wondering if the above should be
configurable via oslo.messaging or nova automatically. Perhaps we already
have a configuration option for notification expiration; if not, I think
this could be a useful feature to add. We have rabbit_transient_queues_ttl
https://docs.openstack.org/nova/latest/configuration/config.html#oslo_messaging_rabbit.rabbit_transient_queues_ttl
but I'm not sure that it is applied to notification queues by default. I'm
wondering if that is a bug that we should correct? The default value is 1800
seconds, which is 30 minutes and seems reasonable; while longer than the 10
minutes Chris is using, it's better than infinity.

> This expires messages in the notifications and versioned_notifications queues after 10 minutes, which seems to work well for us. I believe we initially picked up this workaround from this[1] bug report.
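As a concrete follow-up to the policy JSON quoted above: the same TTL can
normally be applied to a running broker with rabbitmqctl instead of editing
definitions.json. This is an untested sketch; the vhost "/", the policy name
and the queue pattern are simply copied from Chris's JSON:

    # apply a 10-minute message TTL to the (versioned_)notifications queues
    rabbitmqctl set_policy -p / --apply-to queues notifications-ttl \
        "^(notifications|versioned_notifications)\." '{"message-ttl":600000}'

The rabbit_transient_queues_ttl option Sean refers to is set in the
[oslo_messaging_rabbit] section of nova.conf and, per the linked docs,
defaults to 1800 seconds.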
> Hope this helps,
>
> - Chris
>
> --
> Christopher McCarthy
> MathWorks
> cmccarth at mathworks.com
>
> [1] https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1737170
>
> Date: Wed, 2 Jun 2021 22:39:54 -0000
> From: "Ajay Tikoo (BLOOMBERG/ 120 PARK)"
> To: openstack-discuss at lists.openstack.org
> Subject: [ops] rabbitmq queues for nova versioned notifications queues
> 	keep filling up
> Message-ID: <60B808BA00D0068401D80001_0_3025859 at msclnypmsgsv04>
> Content-Type: text/plain; charset="utf-8"
>
> I am not sure if this is the right channel/format to post this question, so my apologies in advance if this is not the right place.
>
> We are using OpenStack Rocky. Watcher needs versioned notifications to be enabled. However, after enabling versioned notifications, the queues for versioned_notifications (info and error) keep filling up. Based on the updates to the Watcher cluster data model, it appears that Watcher is consuming messages, but they still linger in these queues. So with nova versioned notifications disabled, Watcher is unable to update the cluster data model (between rebuild intervals), and with them enabled, it keeps filling up the MQ queues. What is the best way to resolve this?
>
> Thank you,
> Ajay Tikoo

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com  Tue Jun  8 13:34:31 2021
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 08 Jun 2021 14:34:31 +0100
Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler
In-Reply-To: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org>
References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org>
Message-ID: 

On Mon, 2021-06-07 at 20:52 +0200, Thomas Goirand wrote:
> On 6/7/21 8:07 PM, Pete Zhang wrote:
> >
> > I hit this error when installing “openstack-nova-scheduler” of release
> > train. Anyone knows the issue/fix?
> > What is the librte? is it another rpm i can download somewhere?
> > or what is the best channel/DL to post this question, thx. Here is what I
> > did.
> >
> > 1. I did this in a test box.
> > 2. I have puppet-modules installed on the box
> > 3. I have openstack-release-train’s rpms on the box and built a
> > local-repo for puppet to install
> >
> > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler'
> > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp)
> > Requires: librte_mempool_bucket.so.1()(64bit)
> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp)
> > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit)
>
> Hi,
>
> I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though
> librte is from dpdk. It's likely a bug if nova-scheduler depends on
> openvswitch (but it's probably not a bug if OVS depends on dpdk if it
> was compiled with dpdk support).

Yeah, that is definitely a bug: the scheduler has no dependency on OVS or
DPDK.

>
> Cheers,
>
> Thomas Goirand (zigo)
>

From smooney at redhat.com  Tue Jun  8 13:39:40 2021
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 08 Jun 2021 14:39:40 +0100
Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler
In-Reply-To: 
References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org>
Message-ID: 

On Tue, 2021-06-08 at 10:41 +0200, Pierre Riteau wrote:
> RDO packages for Ussuri and Victoria are available CentOS 8 [1] and
> CentOS Stream 8 [2].
> > [1] http://mirror.centos.org/centos/8/cloud/x86_64/
> > [2] http://mirror.centos.org/centos/8-stream/cloud/x86_64/
> >
> > On Tue, 8 Jun 2021 at 02:24, Pete Zhang wrote:
> > >
> > > Matthias,
> > >
> > > These steps, "install python-nova" and "install openstack-nova-scheduler", need python-openvswitch-2.11, which in turn looks for libopenvswitch, which is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm. And I have this copy installed on my local repo.

Yeah, so openstack-nova-scheduler does not require python-openvswitch-2.11.

os-vif requires python-openvswitch for the Python bindings, but os-vif is
not needed by the scheduler, and even then the Python bindings do not need
librte_*.

librte_* is an optional dependency of openvswitch. Red Hat chose to build
DPDK support into the ovs-vswitchd binary rather than shipping a separate
package, but openvswitch should not be a mandatory install requirement of
any nova RPM, nor should librte_*, especially for the controller services.

> > >
> > > Trying to figure out which rpm has the librte_*.
> > >
> > > BTW, I got most rpms from http://mirror.centos.org/centos/7/cloud/x86_64/, which has rpms for train, stein, rocky and queens.
> > > Is there a similar site for later releases like Ussuri or Victoria?
> > >
> > > Pete
> > >
> > > On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge wrote:
> > > > [snip: quoted exchange between Pete, Thomas, and Matthias, included
> > > > in full earlier in this digest]
> > > >
> > > > Packages ending with el7 are probably a bit aged already. You may want
> > > > to switch to something more recent. RDO is only updating the latest
> > > > release.
> > > > I don't know where you got the other packages from, but I can see there
> > > > is no direct dependency from openstack-nova-scheduler to
> > > > openvswitch[1]. On the other side, the openvswitch build indeed requires
> > > > librte[2].
> > > >
> > > > RDO describes the used repositories[3], and you may want to enable
> > > > CentOS extras.
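To make the quoted "enable CentOS extras" advice concrete: on CentOS 7 the
extras repository is usually enabled out of the box, but if it was disabled,
or a local mirror such as local_openstack-tnrp is used instead, something
along these lines should bring in the missing openvswitch/dpdk dependencies.
This is a sketch, not verified against this exact setup, and the repo id may
differ on mirrored installs:

    # enable the stock extras repo (the source of dpdk and related packages)
    yum -y install yum-utils
    yum-config-manager --enable extras
    # install the RDO Train release repo, then retry the failing install
    yum -y install centos-release-openstack-train
    yum -y install openstack-nova-scheduler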
> > > > [1] https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec
> > > > [2] https://cbs.centos.org/koji/rpminfo?rpmID=173673
> > > > [3] https://www.rdoproject.org/documentation/repositories/
> > > >
> > > > --
> > > > Matthias Runge
> > >
> >
> > --
>

From smooney at redhat.com  Tue Jun  8 14:04:59 2021
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 08 Jun 2021 15:04:59 +0100
Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler
In-Reply-To: 
References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org>
Message-ID: <6ee439b13fac2fbea15de4aeaca8067586856357.camel@redhat.com>

Just looking into the RDO packaging: python-nova depends on python-os-vif,
which depends on python-ovsdbapp, which depends on python3-openvswitch:
https://github.com/rdo-packages/ovsdbapp-distgit/blob/rpm-master/python-ovsdbapp.spec#L51
I would have to double-check whether os-vif is technically required for any
control plane service, but I believe we only use it within the compute
service currently.

python3-openvswitch appears to only require libopenvswitch
(https://cbs.centos.org/koji/rpminfo?rpmID=183064), not the full OVS
package. The problem is that apparently libopenvswitch is not packaged
separately and is instead provided by the main openvswitch package
(https://cbs.centos.org/koji/rpminfo?rpmID=183069), which is not correct,
and that is what pulls in DPDK.

It looks like there are no RHEL 8 builds of dpdk from what I'm seeing
quickly, but dpdk is what provides those missing libs:
https://cbs.centos.org/koji/rpminfo?rpmID=138108

I think the correct packaging fix would be to have libopenvswitch provided
by a separate package, e.g. an openvswitch-common or similar, that does not
have the dependencies on dpdk. Alternatively we could package dpdk on CentOS
8, but really you should not need to install it to install the nova
scheduler.

On Tue, 2021-06-08 at 14:39 +0100, Sean Mooney wrote:
> On Tue, 2021-06-08 at 10:41 +0200, Pierre Riteau wrote:
> > RDO packages for Ussuri and Victoria are available CentOS 8 [1] and
> > CentOS Stream 8 [2].
> >
> > [1] http://mirror.centos.org/centos/8/cloud/x86_64/
> > [2] http://mirror.centos.org/centos/8-stream/cloud/x86_64/
> >
> > On Tue, 8 Jun 2021 at 02:24, Pete Zhang wrote:
> > >
> > > Matthias,
> > >
> > > These steps, "install python-nova" and "install openstack-nova-scheduler", need python-openvswitch-2.11, which in turn looks for libopenvswitch, which is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm. And I have this copy installed on my local repo.
>
> Yeah, so openstack-nova-scheduler does not require python-openvswitch-2.11.
>
> os-vif requires python-openvswitch for the Python bindings, but os-vif is
> not needed by the scheduler, and even then the Python bindings do not need
> librte_*.
>
> librte_* is an optional dependency of openvswitch. Red Hat chose to build
> DPDK support into the ovs-vswitchd binary rather than shipping a separate
> package, but openvswitch should not be a mandatory install requirement of
> any nova RPM, nor should librte_*, especially for the controller services.
>
> > >
> > > Trying to figure out which rpm has the librte_*.
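To answer the quoted question directly: yum can report which package
provides a given soname capability, so each missing librte_* entry from the
error output can be checked individually. A sketch, using one soname taken
from the errors above:

    # ask yum which package provides the missing library
    yum provides 'librte_eal.so.9()(64bit)'
    # or, with yum-utils installed:
    repoquery --whatprovides 'librte_eal.so.9()(64bit)'

Per Julia's and Matthias's replies, on CentOS 7 these should point at a dpdk
build from the extras repository, if one is available in the enabled repos.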
> > > > > > BTW, I got most rpms from http://mirror.centos.org/centos/7/cloud/x86_64/. Which has rpms for train, stein, rocky and queens. > > > Is there a similar site for later releases like Ussuri or Victoria? > > > > > > Pete > > > > > > > > > On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge wrote: > > > > > > > > On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote: > > > > > On 6/7/21 8:07 PM, Pete Zhang wrote: > > > > > > > > > > > > I hit this error when installing “openstack-nova-scheduler” of release > > > > > > train.Anyone knows the issue/fix? > > > > > > What is the librte? is it another rpm i can download somewhere? > > > > > > or what is the best channel/DL to post this question, thx.Here is what I > > > > > > did. > > > > > > > > > > > > 1. I did this in a test box. > > > > > > 2. I have puppet-modules installed on the box > > > > > > 3. I have openstack-release-train’s rpms on the box and built a > > > > > > local-repo for puppet to install > > > > > > > > > > > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' > > > > > > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > > > Requires: librte_mempool_bucket.so.1()(64bit) > > > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) > > > > > > > > > > Hi, > > > > > > > > > > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though > > > > > librte is from dpdk. It's likely a bug if nova-scheduler depends on > > > > > openvswitch (but it's probably not a bug if OVS depends on dpdk if it > > > > > was compiled with dpdk support). > > > > > > > > Packages ending with el7 are probably a bit aged already. You may want > > > > to switch to something more recent. RDO is only updating the latest > > > > release. > > > > I don't know where you got the other packages from, but I can see there > > > > is no direct dependency from openstack-nova-scheduler to > > > > openvswitch[1]. On the other side, the openvswitch build indeed requires > > > > librte[2]. > > > > > > > > RDO describes the used repositories[3], and you may want to enable > > > > CentOS extras. 
> > > > > [1] https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec
> > > > > [2] https://cbs.centos.org/koji/rpminfo?rpmID=173673
> > > > > [3] https://www.rdoproject.org/documentation/repositories/
> > > > >
> > > > > --
> > > > > Matthias Runge
> > > >
> > >
> > > --
> >
>

From gmann at ghanshyammann.com  Tue Jun  8 14:14:35 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 08 Jun 2021 09:14:35 -0500
Subject: 3rd party CI failures with devstack 'master' using devstack-gate
In-Reply-To: <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com>
References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com>
Message-ID: <179ebf99f29.d451fbf0365691.4329366033312889323@ghanshyammann.com>

 ---- On Tue, 08 Jun 2021 07:42:21 -0500 Luigi Toscano wrote ----
 > On Tuesday, 8 June 2021 14:11:40 CEST Fernando Ferraz wrote:
 > > Hello,
 > >
 > > The NetApp CI for Cinder also relies on Zuul v2. We were able to
 > > recently move our jobs to focal, but dropping devstack-gate is a big
 > > concern considering our team size and schedule.
 > > Luigi, could you clarify what would immediately break after xena is
 > > branched?
 >
 > [snip: Luigi's full reply, quoted in full earlier in this digest]
 >
 > This problem, I believe, has been communicated periodically by the various
 > teams, and the time to migrate is... last month. Please hurry up!

Yes, we did this migration in the Victoria release cycle with two
community-wide goals, together with the direction of moving all CI off
devstack-gate from Wallaby itself. But seeing that a few jobs, and
especially 3rd party CIs, still depended on it, we extended the
devstack-gate support for the Wallaby release [1]. So we extended the
support for one more release, until stable/wallaby.

NOTE: supporting an extra release extends the devstack-gate support until
that release becomes EOL, as we need to support that release's stable CI. So
it is not just one more cycle of support but an even longer time of a year
or more.
Now, extending the support for the Xena cycle also seems very difficult,
given the very small number of contributors and the limited bandwidth of
the current core members in devstack-gate. I will plan to officially declare
the devstack-gate deprecation with the team, but please move your CI/CD to
the latest Focal and to Zuul v3 ASAP.

1. https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html
2. https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html

[1] https://review.opendev.org/c/openstack/devstack-gate/+/778129
    https://review.opendev.org/c/openstack/devstack-gate/+/785010

-gmann

 >
 > Ciao
 > --
 > Luigi
 >

From fungi at yuggoth.org  Tue Jun  8 14:42:10 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 8 Jun 2021 14:42:10 +0000
Subject: 3rd party CI failures with devstack 'master' using devstack-gate
In-Reply-To: <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com>
References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com>
Message-ID: <20210608144210.qxmahyw3qozygvcd@yuggoth.org>

On 2021-06-08 14:42:21 +0200 (+0200), Luigi Toscano wrote:
[...]
> To summarize: if you use Zuul v2, you have been running for a long
> while on an unsupported software stack. The last tiny bits which
> could be used on both Zuul v2 and Zuul v3 in legacy mode to ease the
> transition are unsupported too.
[...]

For very large definitions of "long while." The last official 2.x
release of Zuul was in September of 2017, so it's been EOL going on 4
years already. I'm not sure how much more warning people need that
they should upgrade?
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From fungi at yuggoth.org  Tue Jun  8 14:45:45 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 8 Jun 2021 14:45:45 +0000
Subject: 3rd party CI failures with devstack 'master' using devstack-gate
In-Reply-To: 
References: 
Message-ID: <20210608144545.vhghtk6p7mgkmkw6@yuggoth.org>

On 2021-06-08 09:50:32 +0000 (+0000), Katari Kumar wrote:
[...]
> But as there are many existing Zuul v2 users, devstack-gate should
> continue to support latest projects.

Community software is developed and supported by its users, and
devstack-gate is no exception. The people who were maintaining it no
longer have any use for it. If you're using it, then it's up to you to
keep it working (perhaps with the help of others who are also using
it). But in my biased opinion, your time is probably better spent
upgrading than trying to limp along with abandonware.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From whayutin at redhat.com  Tue Jun  8 15:19:07 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 8 Jun 2021 09:19:07 -0600
Subject: [tripleo] Changing TripleO's release model
Message-ID: 

Greetings TripleO community!

At the most recent TripleO community meetings we have discussed formally
changing the OpenStack release model for TripleO [1]. The previously
released projects can be found here [2]. TripleO has previously released
with release-type[‘trailing’, ‘cycle-with-intermediary’].

To quote the release model doc:

‘Trailing deliverables trail the release, so they cannot, by definition, be
independent.
They need to pick between cycle-with-rc or cycle-with-intermediary models.’

We are proposing to update the release-model to ‘independent’. This would
give the TripleO community more flexibility in when we choose to cut a
release. In turn this would mean less backporting, and fewer upstream and
3rd party resources used by some potential future releases.

To quote the release model doc:

‘Some projects opt to completely bypass the 6-month cycle and release
independently. For example, that is the case of projects that support the
development infrastructure. The “independent” model describes such
projects.’

The discussion here is merely to inform the greater community about the
proposal and the conversations regarding the release model. This thread is
NOT meant to discuss previous releases or their supported status, merely
the change of the release model here [3].

[0] https://etherpad.opendev.org/p/tripleo-meeting-items
[1] https://releases.openstack.org/reference/release_models.html
[2] https://releases.openstack.org/teams/tripleo.html
[3] https://opendev.org/openstack/releases/src/branch/master/deliverables/xena
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From peiyong.zhang at salesforce.com  Mon Jun  7 20:56:56 2021
From: peiyong.zhang at salesforce.com (Pete Zhang)
Date: Mon, 7 Jun 2021 13:56:56 -0700
Subject: [RDO] Re: Getting error during install openstack-nova-scheduler
In-Reply-To: 
References: 
Message-ID: 

Julia,

The openstack-vswitch Puppet module is required (>= 11.0.0, < 12.0.0) by
openstack-neutron (v15.0.0, from openstack-release-train, the release we
chose). I downloaded openstack-vswitch 11.0.0 from
https://forge.puppet.com/modules/openstack/vswitch/11.0.0.

Any idea where I can download the missing librtb? thanks.

Pete

On Mon, Jun 7, 2021 at 12:20 PM Julia Kreger wrote:

> Greetings Pete,
>
> I'm going to guess your issue may actually be with RDO packaging
> dependencies rather than with the nova project itself. I guess there
> is a dependency issue for CentOS 7? Are any RDO contributors aware of
> this? I suspect you need CentOS Extras enabled, as a couple of the
> required files/libraries are sourced from packages in extras, such as
> openvswitch itself and dpdk.
>
> -Julia
>
> On Mon, Jun 7, 2021 at 10:09 AM Pete Zhang wrote:
> >
> > I hit the following errors and would like to know the fix.
> >
> > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler'
> >
> > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp)
> >
> > Requires: librte_mempool_bucket.so.1()(64bit)
> >
> > [snip: several dozen further "Error: Package: ... Requires: librte_*"
> > lines for the same openvswitch package, plus the python2-pynacl
> > (libsodium.so.23) and python2-nova (python2-tooz >= 1.58.0,
> > python2-os-traits >= 0.16.0, python2-os-resource-classes >= 0.4.0)
> > dependency errors; the full list is quoted earlier in this digest]
> >
> > You could try using --skip-broken to work around the problem
> >
> > You could try running: rpm -Va --nofiles --nodigest
> >
> > Error: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: change from 'purged' to 'present' failed: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1, with the same list of dependency errors repeated.

--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From peiyong.zhang at salesforce.com  Mon Jun  7 21:22:32 2021
From: peiyong.zhang at salesforce.com (Pete Zhang)
Date: Mon, 7 Jun 2021 14:22:32 -0700
Subject: [RDO] Re: Getting error during install openstack-nova-scheduler
In-Reply-To: 
References: 
Message-ID: 

Correct: librte (not librtb).

On Mon, Jun 7, 2021 at 1:56 PM Pete Zhang wrote:

> Julia,
>
> The openstack-vswitch Puppet module is required (>= 11.0.0, < 12.0.0) by
> openstack-neutron (v15.0.0, from openstack-release-train, the release we
> chose). I downloaded openstack-vswitch 11.0.0 from
> https://forge.puppet.com/modules/openstack/vswitch/11.0.0.
>
> Any idea where I can download the missing librtb? thanks.
>
> Pete
>
> On Mon, Jun 7, 2021 at 12:20 PM Julia Kreger wrote:
>
>> Greetings Pete,
>>
>> I'm going to guess your issue may actually be with RDO packaging
>> dependencies rather than with the nova project itself. I guess there
>> is a dependency issue for CentOS 7? Are any RDO contributors aware of
>> this? I suspect you need CentOS Extras enabled, as a couple of the
>> required files/libraries are sourced from packages in extras, such as
>> openvswitch itself and dpdk.
>>
>> -Julia
>>
>> On Mon, Jun 7, 2021 at 10:09 AM Pete Zhang wrote:
>> >
>> > I hit the following errors and would like to know the fix.
>> > [snip: the remainder of this message quoted Pete's original error
>> > report in full once more -- the same '/bin/yum -d 0 -e 0 -y install
>> > openstack-nova-scheduler' run with its complete librte_*/python2
>> > dependency error list, as already shown earlier in this digest]
1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_failsafe.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool_ring.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_ixgbe.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_eal.so.9()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bitratestats.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_17.08)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool.so.5(DPDK_16.07)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool_stack.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bus_vdev.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_qede.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_vhost.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_metrics.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_i40e.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pci.so.1()(64bit) >> > >> > You could try using --skip-broken to work around the problem >> > >> > You could try running: rpm -Va --nofiles --nodigest >> >> > > -- > > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From katkumar at in.ibm.com Tue Jun 8 09:42:26 2021 From: katkumar at in.ibm.com (Katari Kumar) Date: Tue, 8 Jun 2021 09:42:26 +0000 Subject: 3rd party CI failures with devstack 'master' using devstack-gate Message-ID: An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue Jun 8 16:46:26 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 08 Jun 2021 18:46:26 +0200 Subject: [nova][placement] Weekly meeting moves to #openstack-nova Message-ID: Hi, As we agreed on the today's meeting[1] we will try out to hold our meetings on the project channel. I've update the agenda page on the Wiki and proposed the patch to update the official meeting schedule page [2]. So next week we will use #openstack-nova for the weekly meeting. cheers, gibi [1] https://meetings.opendev.org/meetings/nova/2021/nova.2021-06-08-16.00.log.html#l-124 [2] https://review.opendev.org/c/opendev/irc-meetings/+/795377 From amoralej at redhat.com Tue Jun 8 16:58:45 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 8 Jun 2021 18:58:45 +0200 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> Message-ID: Hi, Sorry for arriving late. 
On Tue, Jun 8, 2021 at 2:26 AM Pete Zhang wrote:

> Matthias,
>
> These steps, "install python-nova" and "install openstack-nova-scheduler",
> need python-openvswitch-2.11, which in turn looks for libopenvswitch,
> which is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm. And I have
> this copy installed on my local repo.
>

I just tested installing openstack-nova-scheduler on a fresh CentOS 7
system and it worked fine, installing openvswitch-2.12.0-1.el7.

# yum install "*-train"
# yum install openstack-nova-scheduler

Just make sure you have the *extras* repo enabled, which it should be by
default. librte_* is provided by the dpdk package, which is in the extras
repo. You shouldn't need any local repo.

> *Trying to figure out which rpm has the librte_*.*
>
> BTW, I got most rpms from http://mirror.centos.org/centos/7/cloud/x86_64/,
> which has rpms for train, stein, rocky and queens.
> Is there a similar site for later releases like Ussuri or Victoria?
>

Train was the last version released for CentOS 7. Ussuri, Victoria and
Wallaby are released for CentOS Linux 8 and CentOS Stream 8:

http://mirror.centos.org/centos/8-stream/cloud/x86_64/
http://mirror.centos.org/centos/8/cloud/x86_64/

You can enable the repos by just installing
centos-release-openstack-[ussuri,victoria,wallaby]. That should be enough.

Regards,

Alfredo

> Pete
>
> On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge wrote:
>
>> On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote:
>> > On 6/7/21 8:07 PM, Pete Zhang wrote:
>> > >
>> > > I hit this error when installing "openstack-nova-scheduler" of
>> > > release train. Anyone know the issue/fix?
>> > > What is librte? Is it another rpm I can download somewhere?
>> > > Or what is the best channel/DL to post this question? thx. Here is
>> > > what I did.
>> > >
>> > > 1. I did this in a test box.
>> > > 2. I have puppet-modules installed on the box
>> > > 3. I have openstack-release-train's rpms on the box and built a
>> > >    local-repo for puppet to install
>> > >
>> > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler'
>> > > Error: Execution of '/bin/yum -d 0 -e 0 -y install
>> > > openstack-nova-scheduler' returned 1: Error: Package:
>> > > 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp)
>> > > Requires: librte_mempool_bucket.so.1()(64bit)
>> > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp)
>> > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit)
>> >
>> > Hi,
>> >
>> > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though
>> > librte is from dpdk. It's likely a bug if nova-scheduler depends on
>> > openvswitch (but it's probably not a bug if OVS depends on dpdk, if it
>> > was compiled with dpdk support).
>>
>> Packages ending with el7 are probably a bit aged already. You may want
>> to switch to something more recent. RDO is only updating the latest
>> release.
>> I don't know where you got the other packages from, but I can see there
>> is no direct dependency from openstack-nova-scheduler to
>> openvswitch[1]. On the other hand, the openvswitch build indeed requires
>> librte[2].
>>
>> RDO describes the used repositories[3], and you may want to enable
>> CentOS extras.
>>
>> [1] https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec
>> [2] https://cbs.centos.org/koji/rpminfo?rpmID=173673
>> [3] https://www.rdoproject.org/documentation/repositories/
>>
>> --
>> Matthias Runge
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
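To make the repo advice above concrete, here is a minimal sketch of the
checks one might run, assuming a stock CentOS 7 box; the capability string
comes from the error earlier in this thread, and exact package versions on
a given mirror may differ:

  # check which package provides one of the missing capabilities;
  # this should point at dpdk, which lives in the extras repo
  yum provides 'librte_eal.so.9()(64bit)'

  # make sure the extras repo is enabled (it normally is by default);
  # yum-config-manager is part of yum-utils
  yum install -y yum-utils
  yum-config-manager --enable extras

  # pull in the Train release repo definition, then retry the install
  yum install -y centos-release-openstack-train
  yum install -y openstack-nova-scheduler

If yum still reports the librte_* dependencies as missing after this, the
local repo configuration is the usual suspect, as noted above.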
From amoralej at redhat.com  Tue Jun  8 17:10:14 2021
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Tue, 8 Jun 2021 19:10:14 +0200
Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler
In-Reply-To: 
References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org>
Message-ID: 

On Tue, Jun 8, 2021 at 6:58 PM Alfredo Moralejo Alonso wrote:

> Just make sure you have the *extras* repo enabled, which it should be by
> default. librte_* is provided by the dpdk package, which is in the extras
> repo. You shouldn't need any local repo.
>

BTW, the extras repo is enabled by default in the CentOS repo config, but
you can enable it with:

# yum-config-manager --enable extras

> [... rest of the earlier reply and the quoted thread trimmed ...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cboylan at sapwetik.org  Tue Jun  8 17:12:11 2021
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 08 Jun 2021 10:12:11 -0700
Subject: 3rd party CI failures with devstack 'master' using devstack-gate
In-Reply-To: <179ebf99f29.d451fbf0365691.4329366033312889323@ghanshyammann.com>
References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com>
 <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com>
 <179ebf99f29.d451fbf0365691.4329366033312889323@ghanshyammann.com>
Message-ID: <6fc4dc79-2083-4cf3-9ca8-ef6e1dd0ca5d@www.fastmail.com>

On Tue, Jun 8, 2021, at 7:14 AM, Ghanshyam Mann wrote:
> ---- On Tue, 08 Jun 2021 07:42:21 -0500 Luigi Toscano wrote ----
>  > On Tuesday, 8 June 2021 14:11:40 CEST Fernando Ferraz wrote:
>  > > Hello,
>  > >
>  > > The NetApp CI for Cinder also relies on Zuul v2. We were able to
>  > > recently move our jobs to focal, but dropping devstack-gate is a
>  > > big concern considering our team size and schedule.
>  > > Luigi, could you clarify what would immediately break after xena
>  > > is branched?
>  > >
>  > For example, grenade jobs won't work anymore because there won't be any
>  > new entry related to stable/xena added here to devstack-vm-gate-wrap.sh:
>  >
>  > https://opendev.org/openstack/devstack-gate/src/branch/master/devstack-vm-gate-wrap.sh#L335
>  >
>  > I understand that grenade testing is probably not relevant for 3rd party
>  > CIs (it should be, but that's a different discussion), but the main
>  > point is that devstack-gate is already in near-maintenance mode. The
>  > minimal set of fixes that have been merged served only to keep the very
>  > few legacy jobs defined on opendev.org working, and that number is
>  > basically 0 at this point.
>  >
>  > This means that there are a ton of potential breakages that can happen
>  > at any time, and the focal change is just one of them (and each one of
>  > you, CI owners, had to fix it on your own). Others may come at any time,
>  > and they won't be detected nor investigated anymore, because we have had
>  > de facto no legacy jobs around since wallaby.
>  >
>  > To summarize: if you use Zuul v2, you have been running on an
>  > unsupported software stack for a long while. The last tiny bits which
>  > could be used on both Zuul v2 and Zuul v3 in legacy mode to ease the
>  > transition are unsupported too.
>  >
>  > This problem, I believe, has been communicated periodically by the
>  > various teams, and the time to migrate is... last month. Please hurry up!
>
> Yes, we did this migration in the Victoria release cycle with two
> community-wide goals, with the direction of moving all CI off
> devstack-gate from Wallaby itself. But seeing a few jobs, and especially
> 3rd party CIs, still depending on it, we extended the devstack-gate
> support to the Wallaby release [1]. So we extended the support for one
> more release, until stable/wallaby.
>
> NOTE: supporting an extra release extends the devstack-gate support until
> that release becomes EOL, as we need to support that release's stable CI.
> So it is not just one more cycle of support, but a longer period of a
> year or more.
>
> Extending the support to the Xena cycle also seems very difficult, given
> the small number of contributors and the limited bandwidth of the current
> devstack-gate core members.
>
> I plan to officially declare the deprecation of devstack-gate with the
> team, but please move your CI/CD to the latest Focal and to Zuul v3 ASAP.

These changes have started to go up [2]. I want to clarify a few things
though. As far as I can remember, we have never required any specific CI
system or setup. What we have required are basic behaviors from the CI
system: things like responding to "recheck", posting logs in a publicly
accessible location and reporting them back, having contacts available so
we can reach you if things break, and so on.

What this means is that some third party CI systems are likely running
Jenkins. I know others that ran some homegrown thing that watched the
Gerrit event stream. We recommend Zuul, and now Zuul v3 or newer, because
it is a tool that we understand and can provide some assistance with.
Those that choose not to use the recommended tools are likely to need to
invest in their own tooling and debugging.

For devstack-gate we will not accept new patches to keep it running
against master, but we need to keep it around for older stable branches.
If those that are running their own set of tools want to keep
devstack-gate alive for modern OpenStack, then forking it is likely the
best path forward.

> 1.
> https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > 2. > https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html > > > [1] > https://review.opendev.org/c/openstack/devstack-gate/+/778129 > https://review.opendev.org/c/openstack/devstack-gate/+/785010 [2] https://review.opendev.org/q/topic:%22deprecate-devstack-gate%22+(status:open%20OR%20status:merged) From lyarwood at redhat.com Tue Jun 8 19:03:46 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 8 Jun 2021 20:03:46 +0100 Subject: [nova] destroy_secrets being added to nova.virt.driver.ComputeDriver.{cleanup, destroy} Message-ID: Hello all, I'm looking to introduce and likely backport an optional kwarg to the signature of the cleanup and destroy virt driver methods below: virt: Add destroy_secrets kwarg to destroy and cleanup https://review.opendev.org/c/openstack/nova/+/794252 While this is optional for any callers any out of tree driver implementing either method will need to add this kwarg. This is part of the following bugfix series where I am attempting to avoid secrets from being destroyed during a hard reboot within the libvirt driver: https://review.opendev.org/q/topic:bug/1905701+status:open+branch:master Hopefully this is trivial but if there are any concerns or issues with this then please let me know! Cheers, Lee From sbaker at redhat.com Tue Jun 8 23:52:22 2021 From: sbaker at redhat.com (Steve Baker) Date: Wed, 9 Jun 2021 11:52:22 +1200 Subject: [baremetal-sig][ironic] Tue June 8, 2021, 2pm UTC: The Ironic Python Agent Builder In-Reply-To: <4af9f9ed-dd59-0463-ec41-aa2f2905aafc@cern.ch> References: <4af9f9ed-dd59-0463-ec41-aa2f2905aafc@cern.ch> Message-ID: <67f365a7-4fed-d0c8-e42e-9bbf70305602@redhat.com> On 7/06/21 6:41 pm, Arne Wiebalck wrote: > Dear all, > > The Bare Metal SIG will meet tomorrow Tue June 8, 2021, > at 2pm UTC on zoom. > > The meeting will feature a "topic-of-the-day" presentation > by Dmitry Tantsur (dtantsur) with an > >   "Introduction to the Ironic Python Agent Builder" > > As usual, all details on https://etherpad.opendev.org/p/bare-metal-sig > The recording of this presentation is now available: https://www.youtube.com/watch?v=1L1Ld7skgDw cheers From dsneddon at redhat.com Wed Jun 9 00:42:45 2021 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 8 Jun 2021 17:42:45 -0700 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: Message-ID: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> Thanks for making the announcement. Can you clarify how the feature-freeze dates will be communicated to the greater community of contributors? - Dan Sneddon > On Jun 8, 2021, at 8:21 AM, Wesley Hayutin wrote: >  > Greetings TripleO community! > > At the most recent TripleO community meetings we have discussed formally changing the OpenStack release model for TripleO [1]. The previous released projects can be found here [2]. TripleO has previously released with release-type[‘trailing’, ‘cycle-with-intermediary’]. > > To quote the release model doc: > ‘Trailing deliverables trail the release, so they cannot, by definition, be independent. They need to pick between cycle-with-rc or cycle-with-intermediary models.’ > > We are proposing to update the release-model to ‘independent’. This would give the TripleO community more flexibility in when we choose to cut a release. In turn this would mean less backporting, less upstream and 3rd party resources used by potentially some future releases. 
> To quote the release model doc:
>
> ‘Some projects opt to completely bypass the 6-month cycle and release
> independently. For example, that is the case of projects that support the
> development infrastructure. The “independent” model describes such
> projects.’
>
> The discussion here is to merely inform the greater community with
> regards to the proposal and conversations regarding the release model.
> This thread is NOT meant to discuss previous releases or their supported
> status, merely changing the release model here [3]
>
> [0] https://etherpad.opendev.org/p/tripleo-meeting-items
> [1] https://releases.openstack.org/reference/release_models.html
> [2] https://releases.openstack.org/teams/tripleo.html
> [3] https://opendev.org/openstack/releases/src/branch/master/deliverables/xena
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From whayutin at redhat.com  Wed Jun  9 01:59:56 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 8 Jun 2021 19:59:56 -0600
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
In-Reply-To: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac>
References: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac>
Message-ID: 

Seeing no objections....

Congrats Sandeep :)

On Fri, Jun 4, 2021 at 2:31 AM Gaël Chamoulaud wrote:

> Of course, a big +1!
>
> On 02/Jun/2021 14:17, Marios Andreou wrote:
> > Hello all
> >
> > Having discussed this with some members of the tripleo ci team
> > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc:
> > ysandeep) for core on the tripleo-ci repos (tripleo-ci,
> > tripleo-quickstart and tripleo-quickstart-extras).
> >
> > Sandeep joined the team about 1.5 years ago and has from the start
> > demonstrated his eagerness to learn and an excellent work ethic,
> > having made many useful code submissions [1] and code reviews [2] to
> > the CI repos and beyond. Thanks Sandeep and keep up the good work!
> >
> > Please reply to this mail with a +1 or -1 for objections in the usual
> > manner. If there are no objections we can declare it official in a few
> > days
> >
> > regards, marios
> >
> > [1] https://review.opendev.org/q/owner:sandeepyadav93
> > [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180
>
> Best Regards,
> Gaël
>
> --
> Gaël Chamoulaud - (He/Him/His)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From manchandavishal143 at gmail.com  Wed Jun  9 05:03:32 2021
From: manchandavishal143 at gmail.com (vishal manchanda)
Date: Wed, 9 Jun 2021 10:33:32 +0530
Subject: [horizon] Weekly meeting move to #openstack-horizon channel
Message-ID: 

Hello everyone,

As discussed during the last weekly meeting, I have proposed a patch[1]
to move our weekly meeting from openstack-meeting-alt to the
openstack-horizon channel. So from today onwards, our weekly meeting will
be on the #openstack-horizon channel (OFTC n/w).

See you at the meeting.

Thanks & regards,
Vishal Manchanda

[1] https://review.opendev.org/c/opendev/irc-meetings/+/795110
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sandeepggn93 at gmail.com  Wed Jun  9 07:11:22 2021
From: sandeepggn93 at gmail.com (Sandeep Yadav)
Date: Wed, 9 Jun 2021 12:41:22 +0530
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
In-Reply-To: 
References: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac>
 
Message-ID: 

Thank you all, I'll do my best to use my powers wisely.
On Wed, Jun 9, 2021 at 7:37 AM Wesley Hayutin wrote:

> Seeing no objections....
>
> Congrats Sandeep :)
>
> [... rest of the proposal thread quoted in full ...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amoralej at redhat.com  Wed Jun  9 08:01:44 2021
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Wed, 9 Jun 2021 10:01:44 +0200
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com>
References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com>
Message-ID: 

On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon wrote:

> Thanks for making the announcement. Can you clarify how the feature-freeze
> dates will be communicated to the greater community of contributors?
>
> - Dan Sneddon
>
> On Jun 8, 2021, at 8:21 AM, Wesley Hayutin wrote:
>
> [... start of the announcement trimmed ...]
>
> We are proposing to update the release-model to ‘independent’. This would
> give the TripleO community more flexibility in when we choose to cut a
> release. In turn this would mean less backporting, less upstream and 3rd
> party resources used by potentially some future releases.
>

What does this change mean in terms of branches and compatibility for
OpenStack stable releases?

> To quote the release model doc:
>
> ‘Some projects opt to completely bypass the 6-month cycle and release
> independently. For example, that is the case of projects that support the
> development infrastructure. The “independent” model describes such
> projects.’
>
> The discussion here is to merely inform the greater community with
> regards to the proposal and conversations regarding the release model.
> This thread is NOT meant to discuss previous releases or their supported
> status, merely changing the release model here [3]
>
> [0] https://etherpad.opendev.org/p/tripleo-meeting-items
> [1] https://releases.openstack.org/reference/release_models.html
> [2] https://releases.openstack.org/teams/tripleo.html
> [3] https://opendev.org/openstack/releases/src/branch/master/deliverables/xena
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marios at redhat.com  Wed Jun  9 09:02:13 2021
From: marios at redhat.com (Marios Andreou)
Date: Wed, 9 Jun 2021 12:02:13 +0300
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com>
References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com>
Message-ID: 

On Wednesday, June 9, 2021, Dan Sneddon wrote:

> Thanks for making the announcement. Can you clarify how the feature-freeze
> dates will be communicated to the greater community of contributors?
>

If you mean TripleO contributors, then in the usual manner, i.e. this
mailing list, the IRC meeting, etc. If you mean the other OpenStack
projects, then that hasn't really ever applied, since TripleO has always
trailed the OpenStack release.

The main thing this buys us is the ability to skip creating a particular
branch (assuming we go ahead... for example, not creating stable/Y when
that time comes), or creating it *much* later than the trailing release
model allows us, which is 6 months if I recall correctly. In which case,
again, feature freeze wouldn't apply, since that stable branch would
already have been created by the openstack projects.

regards, marios

> [... quoted announcement trimmed ...]

--
_sent from my mobile - sorry for spacing spelling etc_
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
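To illustrate what marios describes above: if a stable/Y branch were
skipped or delayed, a consumer could still pin to released tags instead of
a branch. A purely hypothetical sketch (the tag names below are invented
for illustration, not an agreed scheme):

  # with no stable/Y branch to track, pin to a tag instead
  git clone https://opendev.org/openstack/tripleo-heat-templates
  cd tripleo-heat-templates
  git tag --list '14.*'   # suppose the 14.x tags were documented as Y-compatible
  git checkout 14.3.0     # check out one specific compatible release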
From hberaud at redhat.com  Wed Jun  9 09:05:48 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 9 Jun 2021 11:05:48 +0200
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: 
References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com>
Message-ID: 

On Wed, Jun 9, 2021 at 10:05, Alfredo Moralejo Alonso wrote:

> On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon wrote:
>
>> Thanks for making the announcement. Can you clarify how the
>> feature-freeze dates will be communicated to the greater community of
>> contributors?
>

Feature freeze doesn't exist in the independent model. The independent
model isn't bound to the OpenStack series. Usually the independent model
is more suitable for stable projects, deliverables that are in use outside
OpenStack, or for deliverables that aren't related to the OpenStack series
(e.g. openstack/pbr).

>> [... quoted announcement trimmed ...]
>>
>> We are proposing to update the release-model to ‘independent’. This
>> would give the TripleO community more flexibility in when we choose to
>> cut a release. In turn this would mean less backporting, less upstream
>> and 3rd party resources used by potentially some future releases.
>

How do you plan to manage the different versions of OSP without upstream
branches? Backports can be done by defining downstream branches for OSP;
however, that will introduce gymnastics to filter and select the changes
to backport, and the sealing between OSP versions will be more difficult
to manage downstream.

That leads us to the next question: how to deal with RDO? If I'm right,
RDO is branch-based (https://www.rdoproject.org/what/repos/), so that will
force some updates here too. That will also impact tools like packstack
(https://www.rdoproject.org/install/packstack/).

> What does this change mean in terms of branches and compatibility for
> OpenStack stable releases?
>

The independent release model means no stable branches anymore for
deliverables that follow this model (e.g. openstack/pbr). The deliverables
that stick to this model are no longer coordinated by the OpenStack series
(A, B, C..., Ussuri, Victoria, Wallaby, Xena, Y, Z).

However, we should note that the independent model is different from
branchless. Branches can be created by project owners; however, these
branches won't be released by the release team, only the master branch
will be released. So, maybe you could emulate the OSP versions upstream
(some sort of stable branches) and then backport patches to them; however,
you'll have to release them by yourself.

> To quote the release model doc:
>
> ‘Some projects opt to completely bypass the 6-month cycle and release
> independently. For example, that is the case of projects that support the
> development infrastructure.
> The “independent” model describes such projects.’
>
> [... remainder of the quoted thread trimmed ...]

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-----BEGIN PGP SIGNATURE-----

wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+
Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+
RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP
F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G
5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g
glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw
m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ
hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0
qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y
F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3
B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O
v6rDpkeNksZ9fFSyoY2o
=ECSj
-----END PGP SIGNATURE-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marios at redhat.com  Wed Jun  9 09:06:13 2021
From: marios at redhat.com (Marios Andreou)
Date: Wed, 9 Jun 2021 12:06:13 +0300
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: 
References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com>
Message-ID: 

On Wednesday, June 9, 2021, Alfredo Moralejo Alonso wrote:

> [... quoted thread trimmed ...]
>
> What does this change mean in terms of branches and compatibility for
> OpenStack stable releases?
>

As I wrote to Dan just now, the main thing is that we may delay or even
skip a particular branch. For compatibility, I guess it means we would
have to rely on git tags, so perhaps making consistently frequent (e.g.
monthly? or more?) releases for all the tripleo repos. You could then call
a particular range of tags as being compatible with stable/Y, for example.
Does it sound sane/doable from an RDO package build perspective?

regards, marios

> [... quoted announcement trimmed ...]

--
_sent from my mobile - sorry for spacing spelling etc_
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marios at redhat.com  Wed Jun  9 09:13:10 2021
From: marios at redhat.com (Marios Andreou)
Date: Wed, 9 Jun 2021 12:13:10 +0300
Subject: [TripleO] Proposing ysandeep for tripleo-ci core
In-Reply-To: 
References: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac>
 
Message-ID: 

Thanks all for voting ... yes, I said I would add him in yesterday's IRC
meeting, but weshay beat me to it ;)

I just checked and see ysandeep is now in the core reviewers group
https://review.opendev.org/admin/groups/0319cee8020840a3016f46359b076fa6b6ea831a,members

ysandeep, go +2 all the CI things!

regards, marios

On Wednesday, June 9, 2021, Wesley Hayutin wrote:

> Seeing no objections....
>
> Congrats Sandeep :)
>
> [... rest of the proposal thread quoted in full ...]

--
_sent from my mobile - sorry for spacing spelling etc_
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amoralej at redhat.com  Wed Jun  9 11:27:25 2021
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Wed, 9 Jun 2021 13:27:25 +0200
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: 
References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com>
 
Message-ID: 

On Wed, Jun 9, 2021 at 11:06 AM Marios Andreou wrote:

> [... quoted thread trimmed ...]
>
> As I wrote to Dan just now, the main thing is that we may delay or even
> skip a particular branch. For compatibility, I guess it means we would
> have to rely on git tags, so perhaps making consistently frequent (e.g.
> monthly? or more?) releases for all the tripleo repos. You could then
> call a particular range of tags as being compatible with stable/Y, for
> example. Does it sound sane/doable from an RDO package build perspective?
>
For me it's fine if TripleO provides a list of tags which are able to
deploy and coinstall with a certain OpenStack release, let's say stable/Y.
I don't see much of a problem with that; we'd need to figure out how to
express that as code.

The actual problem I see is how to maintain that working over time during
the maintenance phase of release Y without stable/Y branches or CI jobs
for old releases. Let's assume that at GA time for Y you provide a list of
tags for tripleo projects coming from the master branch. How will you
manage a bug affecting tripleo on release Y, or one introduced by any
changing factor (OS updates, etc...)? Will the master branch be kept
tested and compatible with both master and stable/Y (as branchless
projects do, e.g. tempest)?

Note that frequent releases on master branches will not help to support
the Y release if a change in the branch depends on changes done in more
recent releases.

From RDO, we don't require all packages to have stable branches (we
include independent or branchless packages in the distro), but we want to
provide a validated combination of packages working for a certain
synchronized release, with the mechanism to fix it if it breaks during the
maintenance period. I'm not sure how tripleo can do this without branches
or without maintaining backwards compatibility in master or other
branches.

> [... remainder of the quoted thread trimmed ...]
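One hedged guess at how the "express that as code" idea above could look:
a machine-readable manifest mapping each TripleO repo to a tag validated
against a given OpenStack release. Everything below (the file name, the
repo list and the version numbers) is hypothetical:

  # compat-stable-y.txt: one "repo tag" pair per line (invented values)
  cat > compat-stable-y.txt <<'EOF'
  tripleo-heat-templates 14.3.0
  tripleo-common 15.4.0
  python-tripleoclient 16.4.0
  EOF

  # CI or a packager could then check out each repo at its pinned tag
  while read -r repo tag; do
      git -C "$repo" fetch --tags
      git -C "$repo" checkout "$tag"
  done < compat-stable-y.txt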
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com  Wed Jun  9 11:49:38 2021
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 09 Jun 2021 12:49:38 +0100
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: 
References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com>
 
Message-ID: <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com>

On Wed, 2021-06-09 at 12:06 +0300, Marios Andreou wrote:
> [... quoted thread trimmed ...]
>
> As I wrote to Dan just now, the main thing is that we may delay or even
> skip a particular branch. For compatibility, I guess it means we would
> have to rely on git tags, so perhaps making consistently frequent (e.g.
> monthly? or more?) releases for all the tripleo repos. You could then
> call a particular range of tags as being compatible with stable/Y, for
> example. Does it sound sane/doable from an RDO package build perspective?
>

To me, this feels like we are leaking downstream product lifecycle into
upstream. Even if Red Hat is overwhelmingly the majority contributor of
reviews and commits to ooo, I'm not sure that changing the upstream
lifecycle to align more closely with our product lifecycle is the correct
thing to do, at least while TripleO is still in the openstack namespace
and not the x namespace.

Skipping upstream releases is really quite a radical departure from the
project's original goals. I think it would also be counterproductive to
our downstream efforts to move our testing closer to upstream.
If ooo were to lose the ability to test master, for example, we would not
be able to use ooo in our downstream CI to test features that we plan to
release in OSP n+1 and that are developed during an upstream cycle that
won't be productized.

I do not work on ooo, so at the end of the day this won't affect me much,
but to me skipping releases seems counterintuitive given the previous
efforts to make ooo more usable for development and CI. Moving to
independent to decouple the lifecycle seems more reasonable if the
underlying goal is not to skip releases: you can release when ready rather
than scrambling or waiting for a deadline. Personally, I think moving in
the other direction, so that ooo can release sooner rather than later,
would make the project more appealing, as the delay in support of a
release is often considered a detractor for tripleo vs other OpenStack
installers.

I would hope that this change would not have any effect on the RDO
packaging of non-ooo packages. The RDO packages are used by other
installation methods (the Puppet modules, for example), including, I
believe, some of the larger Chinese providers that have written their own
installers. I think it would be damaging to CentOS if RDO were to skip
upstream versions of, say, nova. What might need to change is the
packaging of ooo itself in RDO.

tl;dr: I'm not against the idea of ooo moving to the independent model,
but I would hope that it will not affect RDO's packaging of non-ooo
projects, and that ooo can still be used for CI of master and stable
branches of, for example, nova.

regards
sean

> [... remainder of the quoted thread trimmed ...]

From amoralej at redhat.com  Wed Jun  9 13:17:37 2021
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Wed, 9 Jun 2021 15:17:37 +0200
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com>
References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com>
 <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com>
Message-ID: 

On Wed, Jun 9, 2021 at 1:49 PM Sean Mooney wrote:

> On Wed, 2021-06-09 at 12:06 +0300, Marios Andreou wrote:
> > > > > > > > At the most recent TripleO community meetings we have discussed > formally > > > > changing the OpenStack release model for TripleO [1]. The previous > > > > released projects can be found here [2]. TripleO has previously > released > > > > with release-type[‘trailing’, ‘cycle-with-intermediary’]. > > > > > > > > To quote the release model doc: > > > > > > > > ‘Trailing deliverables trail the release, so they cannot, by > definition, > > > > be independent. They need to pick between cycle-with-rc > > > > < > https://releases.openstack.org/reference/release_models.html#cycle-with-rc > > > > > > or cycle-with-intermediary > > > > < > https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary > > > > > > models.’ > > > > > > > > We are proposing to update the release-model to ‘independent’. This > > > > would give the TripleO community more flexibility in when we choose > to cut > > > > a release. In turn this would mean less backporting, less upstream > and 3rd > > > > party resources used by potentially some future releases. > > > > > > > > > > > What does this change mean in terms of branches and compatibility for > > > OpenStack stable releases?. > > > > > > > > > > > > as i wrote to Dan just now the main thing is that we may delay or even > skip > > a particular branch. For compatibility I guess it means we would have to > > rely on git tags so perhaps making consistently frequent (eg monthly? or > > more?) releases for all the tripleo repos. You could then call a > particular > > range of tags as being compatible with stable/Y for example. Does it > sound > > sane/doable from an rdo package build perspective? > > > too me this feels like we are leaking downstream product lifecycle into > upstream. > even if redhat is overwhelmingly the majority contibutor of reviews and > commits to > ooo im not sure that changing the upstream lifestyle to align more closely > with our product life > cycle is the correct thing to do. > > at least while tripleo is still in the Openstack namespaces and not the x > namespaces. > Skipping upstream release is really quite a radical departure form the > project original goals. > i think it would also be counter productive to our downstream efforts to > move our testing close to upstream. > if ooo was to lose the ablity to test master for example we would not be > able to use ooo in our downstream ci to test > feature that we plan to release osp n+1 that are develop during an > upstream cycle that wont be productised. > > i do not work on ooo so at the end of the day this wont affect me much but > to me skipping releases seam counter intuitive > given the previous efforts to make ooo more usable for development and ci. > Moving to independent > to decouple the lifecycle seams more reasonable if the underlying goal is > not to skip releases. you can release when ready > rather then scrambling or wating for a deadline. personally i think moving > in the other direction so that ooo can release sooner > not later would make the project more appealing as the delay in support of > a release is often considered a detractor for tripleo vs > other openstack installers. > > i would hope that this change would not have any effect on the rdo > packaging of non ooo packages. > the rdo packages are used by other instalation methods (the puppet moduels > for example) including i belive some of the larger chineese providers that > have written there own installers. 
i think it would be damaging to centos > if rdo was to skip upstream version of say nova. what might need to change > is the packaging of ooo itself in rdo. > > tl;dr im not against the idea of ooo moving to independent model but i > would hope that it will not affect RDO's packaging of non ooo projects and > that > ooo can still be used for ci of master and stable branches of for example > nova. > > RDO has no plans on skipping releases or any other changes affecting non-tripleo packages. The impact of this change (unclear at this point) should only affect the packages for those repos. Note that RDO aims at being used and useful for other users and deployment tools as Puppet modules, Kolla, or others willing to work in CentOS and we'd like to maintain the collaboration with them as needed. Regards, Alfredo > regards > sean > > > > > regards, marios > > > > > > > > > > > To quote the release model doc: > > > > > > > > ‘Some projects opt to completely bypass the 6-month cycle and release > > > > independently. For example, that is the case of projects that > support the > > > > development infrastructure. The “independent” model describes such > > > > projects.’ > > > > > > > > The discussion here is to merely inform the greater community with > > > > regards to the proposal and conversations regarding the release > model. > > > > This thread is NOT meant to discuss previous releases or their > supported > > > > status, merely changing the release model here [3] > > > > > > > > > > > > [0] https://etherpad.opendev.org/p/tripleo-meeting-items > > > > > > > > [1] https://releases.openstack.org/reference/release_models.html > > > > > > > > [2] https://releases.openstack.org/teams/tripleo.html > > > > > > > > [3] https://opendev.org/openstack/releases/src/branch/master/ > > > > deliverables/xena > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 9 13:57:03 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 09 Jun 2021 14:57:03 +0100 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> Message-ID: <4450b0fa40e3decf06f365c10be5e370af923e3b.camel@redhat.com> On Wed, 2021-06-09 at 15:17 +0200, Alfredo Moralejo Alonso wrote: > On Wed, Jun 9, 2021 at 1:49 PM Sean Mooney wrote: > > > On Wed, 2021-06-09 at 12:06 +0300, Marios Andreou wrote: > > > On Wednesday, June 9, 2021, Alfredo Moralejo Alonso > > > > > wrote: > > > > > > > > > > > > > > > On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon > > wrote: > > > > > > > > > Thanks for making the announcement. Can you clarify how the > > > > > feature-freeze dates will be communicated to the greater community of > > > > > contributors? > > > > > > > > > > - Dan Sneddon > > > > > > > > > > On Jun 8, 2021, at 8:21 AM, Wesley Hayutin > > wrote: > > > > > > > > > >  > > > > > > > > > > Greetings TripleO community! > > > > > > > > > > At the most recent TripleO community meetings we have discussed > > formally > > > > > changing the OpenStack release model for TripleO [1]. The previous > > > > > released projects can be found here [2]. TripleO has previously > > released > > > > > with release-type[‘trailing’, ‘cycle-with-intermediary’]. > > > > > > > > > > To quote the release model doc: > > > > > > > > > > ‘Trailing deliverables trail the release, so they cannot, by > > definition, > > > > > be independent. 
They need to pick between cycle-with-rc > > > > > < > > https://releases.openstack.org/reference/release_models.html#cycle-with-rc > > > > > > > > or cycle-with-intermediary > > > > > < > > https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary > > > > > > > > models.’ > > > > > > > > > > We are proposing to update the release-model to ‘independent’. This > > > > > would give the TripleO community more flexibility in when we choose > > to cut > > > > > a release. In turn this would mean less backporting, less upstream > > and 3rd > > > > > party resources used by potentially some future releases. > > > > > > > > > > > > > > What does this change mean in terms of branches and compatibility for > > > > OpenStack stable releases?. > > > > > > > > > > > > > > > > > as i wrote to Dan just now the main thing is that we may delay or even > > skip > > > a particular branch. For compatibility I guess it means we would have to > > > rely on git tags so perhaps making consistently frequent (eg monthly? or > > > more?) releases for all the tripleo repos. You could then call a > > particular > > > range of tags as being compatible with stable/Y for example. Does it > > sound > > > sane/doable from an rdo package build perspective? > > > > > too me this feels like we are leaking downstream product lifecycle into > > upstream. > > even if redhat is overwhelmingly the majority contibutor of reviews and > > commits to > > ooo im not sure that changing the upstream lifestyle to align more closely > > with our product life > > cycle is the correct thing to do. > > > > at least while tripleo is still in the Openstack namespaces and not the x > > namespaces. > > Skipping upstream release is really quite a radical departure form the > > project original goals. > > i think it would also be counter productive to our downstream efforts to > > move our testing close to upstream. > > if ooo was to lose the ablity to test master for example we would not be > > able to use ooo in our downstream ci to test > > feature that we plan to release osp n+1 that are develop during an > > upstream cycle that wont be productised. > > > > i do not work on ooo so at the end of the day this wont affect me much but > > to me skipping releases seam counter intuitive > > given the previous efforts to make ooo more usable for development and ci. > > Moving to independent > > to decouple the lifecycle seams more reasonable if the underlying goal is > > not to skip releases. you can release when ready > > rather then scrambling or wating for a deadline. personally i think moving > > in the other direction so that ooo can release sooner > > not later would make the project more appealing as the delay in support of > > a release is often considered a detractor for tripleo vs > > other openstack installers. > > > > i would hope that this change would not have any effect on the rdo > > packaging of non ooo packages. > > the rdo packages are used by other instalation methods (the puppet moduels > > for example) including i belive some of the larger chineese providers that > > have written there own installers. i think it would be damaging to centos > > if rdo was to skip upstream version of say nova. what might need to change > > is the packaging of ooo itself in rdo. 
> > > > tl;dr im not against the idea of ooo moving to independent model but i > would hope that it will not affect RDO's packaging of non ooo projects and > that > ooo can still be used for ci of master and stable branches of for example > nova. > > > > RDO has no plans on skipping releases or any other changes affecting > non-tripleo packages. The impact of this change (unclear at this point) > should only affect the packages for those repos.

Ack.

> > Note that RDO aims at being used and useful for other users and deployment > tools as Puppet modules, Kolla, or others willing to work in CentOS and > we'd like to maintain the collaboration with them as needed.

Yeah, that is what I was expecting; thanks for confirming. Provided the possible change in ooo direction does not negatively impact the other consumers of RDO, I don't really have an objection to ooo changing how they work, if people think it will make their lives and their customers' lives simpler in the long run.

As I said, I do not work on or use ooo frequently, but I have consumed the output of RDO via Kolla in the past, and while I typically prefer using the source install, I know many do use the CentOS binary install variant based on the RDO packages.

> > Regards, > > Alfredo > > > > regards > > sean > > > > > > > > regards, marios > > > > > > > > > > > > > > > > To quote the release model doc: > > > > > > > > > > ‘Some projects opt to completely bypass the 6-month cycle and release > > > > > independently. For example, that is the case of projects that > > support the > > > > > development infrastructure. The “independent” model describes such > > > > > projects.’ > > > > > > > > > > The discussion here is to merely inform the greater community with > > > > > regards to the proposal and conversations regarding the release > > model. > > > > > This thread is NOT meant to discuss previous releases or their > > supported > > > > > status, merely changing the release model here [3] > > > > > > > > > > > > > > > [0] https://etherpad.opendev.org/p/tripleo-meeting-items > > > > > > > > > > [1] https://releases.openstack.org/reference/release_models.html > > > > > > > > > > [2] https://releases.openstack.org/teams/tripleo.html > > > > > > > > > > [3] https://opendev.org/openstack/releases/src/branch/master/ > > > > > deliverables/xena > > > > > > > > > > > > > > > > > > >

From james.slagle at gmail.com Wed Jun 9 13:58:29 2021
From: james.slagle at gmail.com (James Slagle)
Date: Wed, 9 Jun 2021 09:58:29 -0400
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com>
References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com>
Message-ID:

On Wed, Jun 9, 2021 at 7:54 AM Sean Mooney wrote: > too me this feels like we are leaking downstream product lifecycle into > upstream. > even if redhat is overwhelmingly the majority contibutor of reviews and > commits to > ooo im not sure that changing the upstream lifestyle to align more closely > with our product life > cycle is the correct thing to do. >

I wouldn't characterize it as "leaking". Instead, we are aiming to accurately reflect what we intend to support as a community (not as any company), based on who we have working on this project.

Unfortunately the reality is that no one (community or otherwise) should use TripleO from rocky/stein/ussuri/victoria other than for dev/test in my opinion.
There's no upgrade path from any of those releases, and the community isn't working on one. However, that is not clearly represented by the project. While we don't intend to retrofit this policy to past branches, part of proposing this change is to help clear that up going forward. > at least while tripleo is still in the Openstack namespaces and not the x > namespaces. > I don't think I understand. At least..."what"? How is the OpenStack namespace related to release models? How does the namespace (which is a construct of how git repositories are organized aiui), have a relation to what is included in an OpenStack release? > Skipping upstream release is really quite a radical departure form the > project original goals. > I disagree with how you remember the history, and I think this is an overstatement. > i think it would also be counter productive to our downstream efforts to > move our testing close to upstream. > if ooo was to lose the ablity to test master for example we would not be > able to use ooo in our downstream ci to test > feature that we plan to release osp n+1 that are develop during an > upstream cycle that wont be productised. > I don't follow the premise. How is it counterproductive to move our testing close to upstream? We'd always continue to test master. When it comes time for OpenStack to branch, such as to create stable/xena in all the service projects, TripleO may choose not to branch, and I think at that point, TripleO would no longer have CI jobs running on stable/xena of those service projects. > > i do not work on ooo so at the end of the day this wont affect me much but > to me skipping releases seam counter intuitive > given the previous efforts to make ooo more usable for development and ci. > Moving to independent > to decouple the lifecycle seams more reasonable if the underlying goal is > not to skip releases. you can release when ready > rather then scrambling or wating for a deadline. I think the "when ready" is part of the problem here. For example, one might look at when we released stable/victoria and claim TripleO was ready. However, when TripleO victoria was released, you could not upgrade from ussuri to victoria. Likewise, you can't upgrade from victoria to wallaby. Were we really ready to release? My take is that we shouldn't have released at all. I think it sends a false signal. An alternative to this entire proposal would be to double down on making TripleO more fully support each OpenStack stable branch. That would mean adding update/upgrade jobs for each stable branch, and doing the development work to actually implement that support (if it's not tested, it's broken), as well as likely adding other missing jobs instead of de-emphasizing testing on these branches. AIUI, we do not want to be adding additional CI jobs of that scale upstream, especially given the complaints about node usage from TripleO from earlier in this cycle. And, we do not have the humans to develop and maintain this additional work. > personally i think moving in the other direction so that ooo can release > sooner > not later would make the project more appealing as the delay in support of > a release is often considered a detractor for tripleo vs > other openstack installers. > I think moving to the independent model does enable us to consider releasing sooner. > > i would hope that this change would not have any effect on the rdo > packaging of non ooo packages. 
> the rdo packages are used by other instalation methods (the puppet > moduels for example) including i belive some of the larger chineese > providers that > have written there own installers. i think it would be damaging to centos > if rdo was to skip upstream version of say nova. what might need to change > is the packaging of ooo itself in rdo. > > tl;dr im not against the idea of ooo moving to independent model but i > would hope that it will not affect RDO's packaging of non ooo projects and > that > ooo can still be used for ci of master and stable branches of for example > nova. >

We'd continue to always CI master.

Not all stable branches would remain covered by TripleO. For example, if TripleO didn't branch and release for xena, you wouldn't have TripleO jobs on nova stable/xena patches. Those jobs don't provide any meaningful feedback for TripleO. Perhaps they do for nova as you are backporting a change through every branch, and your final destination is a branch where TripleO is expected to be working, such as wallaby. You would want to know if the change broke on xena for example, or if it were something on wallaby. I can see how that would be useful for nova.

However, part of what we're saying is that TripleO is trying to simplify what we are supporting and testing, so we can get better at supporting the releases that are most important to our community. Yes, there is some downstream influence here, in the same way that TripleO doesn't support deploying with Ubuntu, because it is less important to our (TripleO) community. I think that's ok, and I see nothing wrong with it.

If the service projects (such as nova) want to commit additional resources and the upstream CI node count can handle the increase that properly supporting each stable branch implies, then I think we can weigh that option as well. However, the current status quo of more or less just checking the boxes on each stable branch is wrong and sends a false message in my opinion. That's a big part of what we're trying to correct.

-- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL:

From aschultz at redhat.com Wed Jun 9 14:16:16 2021
From: aschultz at redhat.com (Alex Schultz)
Date: Wed, 9 Jun 2021 08:16:16 -0600
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com>
Message-ID:

On Wed, Jun 9, 2021 at 8:06 AM James Slagle wrote: > > > On Wed, Jun 9, 2021 at 7:54 AM Sean Mooney wrote: > >> too me this feels like we are leaking downstream product lifecycle into >> upstream. >> even if redhat is overwhelmingly the majority contibutor of reviews and >> commits to >> ooo im not sure that changing the upstream lifestyle to align more >> closely with our product life >> cycle is the correct thing to do. >> > > I wouldn't characterize it as "leaking". Instead, we are aiming to > accurately reflect what we intend to support as a community (not as any > company), based on who we have working on this project. > > Unfortunately the reality is that no one (community or otherwise) should > use TripleO from rocky/stein/ussuri/victoria other than for dev/test in my > opinion. There's no upgrade path from any of those releases, and the > community isn't working on one. However, that is not clearly represented by > the project.
While we don't intend to retrofit this policy to past > branches, part of proposing this change is to help clear that up going > forward. > > >> at least while tripleo is still in the Openstack namespaces and not the x >> namespaces. >> > > I don't think I understand. At least..."what"? How is the OpenStack > namespace related to release models? How does the namespace (which is a > construct of how git repositories are organized aiui), have a relation to > what is included in an OpenStack release? > > >> Skipping upstream release is really quite a radical departure form the >> project original goals. >> > > I disagree with how you remember the history, and I think this is an > overstatement. > > >> i think it would also be counter productive to our downstream efforts to >> move our testing close to upstream. >> > if ooo was to lose the ablity to test master for example we would not be >> able to use ooo in our downstream ci to test >> feature that we plan to release osp n+1 that are develop during an >> upstream cycle that wont be productised. >> > > I don't follow the premise. How is it counterproductive to move our > testing close to upstream? > We'd always continue to test master. When it comes time for OpenStack to > branch, such as to create stable/xena in all the service projects, TripleO > may choose not to branch, and I think at that point, TripleO would no > longer have CI jobs running on stable/xena of those service projects. > > >> >> i do not work on ooo so at the end of the day this wont affect me much >> but to me skipping releases seam counter intuitive >> given the previous efforts to make ooo more usable for development and >> ci. Moving to independent >> to decouple the lifecycle seams more reasonable if the underlying goal is >> not to skip releases. you can release when ready >> rather then scrambling or wating for a deadline. > > > I think the "when ready" is part of the problem here. For example, one > might look at when we released stable/victoria and claim TripleO was ready. > However, when TripleO victoria was released, you could not upgrade from > ussuri to victoria. Likewise, you can't upgrade from victoria to wallaby. > Were we really ready to release? My take is that we shouldn't have released > at all. I think it sends a false signal. > > An alternative to this entire proposal would be to double down on making > TripleO more fully support each OpenStack stable branch. That would mean > adding update/upgrade jobs for each stable branch, and doing the > development work to actually implement that support (if it's not tested, > it's broken), as well as likely adding other missing jobs instead of > de-emphasizing testing on these branches. > > AIUI, we do not want to be adding additional CI jobs of that scale > upstream, especially given the complaints about node usage from TripleO > from earlier in this cycle. And, we do not have the humans to develop and > maintain this additional work. > > >> personally i think moving in the other direction so that ooo can release >> sooner >> not later would make the project more appealing as the delay in support >> of a release is often considered a detractor for tripleo vs >> other openstack installers. >> > > I think moving to the independent model does enable us to consider > releasing sooner. > I think there's also something here that we should highlight in that it's desirable to be able to update the tripleo deployment process outside of openstack. 
By switching to independent and focusing on extracting the version specifics, we could allow for folks to leverage newer tripleo functionality (e.g. when we switched from mistral/zaqar to just ansible) without having to upgrade their openstack as well. > > > >> >> i would hope that this change would not have any effect on the rdo >> packaging of non ooo packages. >> the rdo packages are used by other instalation methods (the puppet >> moduels for example) including i belive some of the larger chineese >> providers that >> have written there own installers. i think it would be damaging to centos >> if rdo was to skip upstream version of say nova. what might need to change >> is the packaging of ooo itself in rdo. >> >> tl;dr im not against the idea of ooo moving to independent model but i >> would hope that it will not affect RDO's packaging of non ooo projects and >> that >> ooo can still be used for ci of master and stable branches of for example >> nova. >> > > We'd continue to always CI master. > > Not all stable branches would remain covered by TripleO. For example, if > TripleO didn't branch and release for xena, you wouldn't have TripleO jobs > on nova stable/xena patches. Those jobs don't provide any meaningful > feedback for TripleO. Perhaps they do for nova as you are backporting a > change through every branch, and you're final destination is a branch where > TripleO is expected to be working, such as wallaby. You would want to know > if the change broke on xena for example, or if it were something on > wallaby. I can see how that would be useful for nova. > > However, part of what we're saying is that TripleO is trying to simplify > what we are supporting and testing, so we can get better at supporting the > releases that are most important to our community. Yes, there is some > downstream influence here, in the same way that TripleO doesn't support > deploying with Ubuntu, because it is less important to our (TripleO) > community. I think that's ok, and I see nothing wrong with it. > > If the service projects (such as nova) want to commit additional resources > and the upstream CI node count can handle the increase that properly > supporting each stable branch implies, then I think we can weigh that > option as well. However, the current status quo of more or less just > checking the boxes on each stable branch is wrong and sends a false message > in my opinion. That's a big part of what we're trying to correct. > > -- > -- James Slagle > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 9 16:13:45 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 09 Jun 2021 17:13:45 +0100 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> Message-ID: <2a9826418077d18c20e7ad718d7a99f4bffa8515.camel@redhat.com> On Wed, 2021-06-09 at 09:58 -0400, James Slagle wrote: > On Wed, Jun 9, 2021 at 7:54 AM Sean Mooney wrote: > > > too me this feels like we are leaking downstream product lifecycle into > > upstream. > > even if redhat is overwhelmingly the majority contibutor of reviews and > > commits to > > ooo im not sure that changing the upstream lifestyle to align more closely > > with our product life > > cycle is the correct thing to do. > > > > I wouldn't characterize it as "leaking". 
Instead, we are aiming to > accurately reflect what we intend to support as a community (not as any > company), based on who we have working on this project. > > Unfortunately the reality is that no one (community or otherwise) should > use TripleO from rocky/stein/ussuri/victoria other than for dev/test in my > opinion. There's no upgrade path from any of those releases, and the > community isn't working on one. However, that is not clearly represented by > the project. While we don't intend to retrofit this policy to past > branches, part of proposing this change is to help clear that up going > forward.

If that is the general consensus, yes, I think it would be good to update the docs to highlight that. I had assumed that the goal of ooo was to have all upstream releases be production ready, independent of whether they are productized downstream.

> > > > at least while tripleo is still in the Openstack namespaces and not the x > > namespaces. > > > > I don't think I understand. At least..."what"? How is the OpenStack > namespace related to release models? How does the namespace (which is a > construct of how git repositories are organized aiui), have a relation to > what is included in an OpenStack release?

I was thinking more that perhaps, if ooo is moving away from the coordinated releases, it might make more sense for it to move to a separate top-level project like StarlingX: so rather than the x/ namespace, maybe a top-level tripleo namespace to better reflect that, while it deploys OpenStack, it does not follow the same release cadence. Coupled with the fact that ooo already does not follow the normal stable backport policies that other OpenStack projects follow, and the recent discussion about opting out of the global requirements process, it just felt like ooo was moving away from being a part of OpenStack to being a related project like StarlingX.

> > > > Skipping upstream release is really quite a radical departure form the > > project original goals. > > > > I disagree with how you remember the history, and I think this is an > overstatement. > > > > i think it would also be counter productive to our downstream efforts to > > move our testing close to upstream. > > > if ooo was to lose the ablity to test master for example we would not be > > able to use ooo in our downstream ci to test > > feature that we plan to release osp n+1 that are develop during an > > upstream cycle that wont be productised. > > > > I don't follow the premise. How is it counterproductive to move our testing > close to upstream? > We'd always continue to test master. When it comes time for OpenStack to > branch, such as to create stable/xena in all the service projects, TripleO > may choose not to branch, and I think at that point, TripleO would no > longer have CI jobs running on stable/xena of those service projects.

So for other projects, I guess the impact of that would be removing the TripleO job from our CI pipelines for the stable branch. For nova we do not currently have any ooo-based CI jobs, but we had briefly discussed having an ooo-standalone job to do some CentOS/ooo-based testing at one point. Devstack is generally the correct tool to test changes to nova, but if ooo were to skip stable releases, I think it would mean it would not be a candidate for other projects to use in their CI as an alternative. That is a valid choice for the ooo team to make, but it effectively means we will never have ooo voting jobs in nova if ooo does start skipping upstream releases.
> > > > > > i do not work on ooo so at the end of the day this wont affect me much but > > to me skipping releases seam counter intuitive > > given the previous efforts to make ooo more usable for development and ci. > > Moving to independent > > to decouple the lifecycle seams more reasonable if the underlying goal is > > not to skip releases. you can release when ready > > rather then scrambling or wating for a deadline. > > > I think the "when ready" is part of the problem here. For example, one > might look at when we released stable/victoria and claim TripleO was ready. > However, when TripleO victoria was released, you could not upgrade from > ussuri to victoria. Likewise, you can't upgrade from victoria to wallaby. > Were we really ready to release? My take is that we shouldn't have released > at all. I think it sends a false signal.

Well, honestly, if I were using ooo as my deployment tool I would have expected upgrade support to be completed prior to the release, yes. But I think this is not an artifact of the release cycle but rather an artifact of upgrade support not being part of the DoD (definition of done) of enabling a new feature in ooo. E.g. if master were required to always be upgradable from the previous release, we would not have this issue. That is a lot more work, though.

> > An alternative to this entire proposal would be to double down on making > TripleO more fully support each OpenStack stable branch. That would mean > adding update/upgrade jobs for each stable branch, and doing the > development work to actually implement that support (if it's not tested, > it's broken), as well as likely adding other missing jobs instead of > de-emphasizing testing on these branches.

Yes. Personally I would prefer if we went in this direction. Again, I know why from a downstream perspective we may not want to do this, but in my personal capacity, if I were evaluating a deployment tool, in-place n to n+1 updates between upstream releases would be part of my minimum viable product.

> AIUI, we do not want to be adding additional CI jobs of that scale > upstream, especially given the complaints about node usage from TripleO > from earlier in this cycle. And, we do not have the humans to develop and > maintain this additional work.

Ack, I know we are both human- and machine-constrained to go this path. Actually supporting multinode standalone and upgrades of standalone deployments would have significantly reduced the CI time and resources required, but we also don't have the resources to enable that :( We have been trying on and off to enable that for https://opendev.org/openstack/whitebox-tempest-plugin so that we can better test changes before they hit downstream CI, but I think that idea has more or less stalled out.
> > > > tl;dr im not against the idea of ooo moving to independent model but i > > would hope that it will not affect RDO's packaging of non ooo projects and > > that > > ooo can still be used for ci of master and stable branches of for example > > nova. > > > > We'd continue to always CI master. > > Not all stable branches would remain covered by TripleO. For example, if > TripleO didn't branch and release for xena, you wouldn't have TripleO jobs > on nova stable/xena patches. Those jobs don't provide any meaningful > feedback for TripleO. Perhaps they do for nova as you are backporting a > change through every branch, and you're final destination is a branch where > TripleO is expected to be working, such as wallaby. You would want to know > if the change broke on xena for example, or if it were something on > wallaby. I can see how that would be useful for nova.

Yes, today we use devstack to validate that, which honestly is enough 99% of the time. Where it can be challenging is if the issue we are fixing is CentOS/ooo specific. Granted, we don't currently have ooo-based CI on the project I work on, so it's not really a decrease in capability. We had discussed possibly using upstream ooo to start early validation of new features after the completion of an upstream cycle, from a branch that would not be the basis of a downstream release. We might be able to get the same effect just using master, but the idea was to test earlier.

> > However, part of what we're saying is that TripleO is trying to simplify > what we are supporting and testing, so we can get better at supporting the > releases that are most important to our community. Yes, there is some > downstream influence here, in the same way that TripleO doesn't support > deploying with Ubuntu, because it is less important to our (TripleO) > community. I think that's ok, and I see nothing wrong with it.

Ack, that is fair and I'm not really concerned by that. I think it's correct to serve the needs of the community that consumes the project.

> > If the service projects (such as nova) want to commit additional resources > and the upstream CI node count can handle the increase that properly > supporting each stable branch implies, then I think we can weigh that > option as well. However, the current status quo of more or less just > checking the boxes on each stable branch is wrong and sends a false message > in my opinion. That's a big part of what we're trying to correct. >

Ack. Honestly, without multinode standalone and upgrade support for the same, I don't actually think it would add much value above just having a devstack CentOS job when we are concerned about CentOS/RHEL specific things. The reason I responded initially was mainly prompted by the implication that an ooo change in direction would necessitate RDO also changing its release cycle, but that is not what was being proposed or what will happen, so you can consider my question/comments more or less addressed.

From erin at openstack.org Wed Jun 9 16:26:04 2021
From: erin at openstack.org (Erin Disney)
Date: Wed, 9 Jun 2021 11:26:04 -0500
Subject: OpenInfra Live - June 10th at 9am CT (1400 UTC)
Message-ID: <78AE71C3-FE37-40CF-8053-A6DF742AA2DD@openstack.org>

Hi everyone,

This week’s OpenInfra Live episode is brought to you by the OpenStack Community. Keeping up with new OpenStack releases can be a challenge.
In this continuation of the May 20th OpenInfra Live episode, a panel of large scale OpenStack infrastructure operators from Blizzard Entertainment, OVHcloud, Workday, Vexxhost and CERN join us again to further discuss upgrades.

Episode: Upgrades in Large Scale OpenStack Infrastructure: The Discussion
Date and time: Thursday, June 10th at 9am CT (1400 UTC)

You can watch us live on:
YouTube: https://www.youtube.com/watch?v=C2fSy005lDs
LinkedIn: https://www.linkedin.com/feed/update/urn:li:ugcPost:6806241782301626368/
Facebook: https://www.facebook.com/openinfradev/videos/474846136944392
WeChat: recording will be posted on OpenStack WeChat after the live stream

Speakers: Belmiro Moreira (CERN), Arnaud Morin (OVH), Mohammed Naser (Vexxhost), Imtiaz Chowdhury (Workday), Joshua Slater (Blizzard)

First Upgrades OpenInfra Live Episode: https://www.youtube.com/watch?v=yf5iFiCg_Tw

Thanks,
Erin

Erin Disney Event Marketing Open Infrastructure Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ekuvaja at redhat.com Wed Jun 9 16:28:25 2021
From: ekuvaja at redhat.com (Erno Kuvaja)
Date: Wed, 9 Jun 2021 17:28:25 +0100
Subject: [glance] How to limit access to particular store
In-Reply-To: <49F175A2-A993-424B-97BF-F4EFB8129321@poczta.onet.pl>
References: <49F175A2-A993-424B-97BF-F4EFB8129321@poczta.onet.pl>
Message-ID:

On Fri, Jun 4, 2021 at 1:56 PM at wrote: > Hi, > I have Glance with multi-store config and I want one store (not default) > to be read-only for everyone except cloud Admin. How can I do it? Is there > any way to limit store names visibility (which are visible i.e. in > properties section of "openstack image show IMAGE_NAME" output)? > Best regards > Adam Tomas > >

Hi Adam,

Such limitations are not possible at the moment. The only way to really do this, if needed, is to expose that "admin only" storage as a local web server and use the http store, with the locations API exposed to said users only.

- jokke

From adivya1.singh at gmail.com Wed Jun 9 19:14:17 2021
From: adivya1.singh at gmail.com (Adivya Singh)
Date: Thu, 10 Jun 2021 00:44:17 +0530
Subject: Regarding Floating IP not reachable
Message-ID:

Hello Team,

I need a hint on where to check: often my floating IPs are not reachable in OpenStack. It uses OVS-based networking, and most of the time, if I change the router's L3 agent, it starts working. I can ping the gateway from the qrouter namespace but cannot ping the actual floating IP.

Regards
Adivya Singh

From senrique at redhat.com Wed Jun 9 20:01:47 2021
From: senrique at redhat.com (Sofia Enriquez)
Date: Wed, 9 Jun 2021 17:01:47 -0300
Subject: [cinder] Bug deputy report for week of 2021-06-09
Message-ID:

Hello,

Sorry for the late report. This is a bug report from 2021-05-25 to 2021-06-09. You're welcome to join the next Cinder Bug Meeting next week. Weekly on Wednesday at 1500 UTC on #openstack-cinder
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting
-----------------------------------------------------------------------------------------

Critical: -

High:
- https://bugs.launchpad.net/cinder/+bug/1929678 'Attachment_create api leaks reserved attachments'. Assigned to Felix Huettner.

Medium: -

Low:
- https://bugs.launchpad.net/cinder/+bug/1931003 'Add support for Pacific to RBD driver'. Assigned to Jon Bernard.
- https://bugs.launchpad.net/cinder/+bug/1930526 'Block Storage API V3 (CURRENT) in cinder - wrong URL for backup-detail'. Unassigned.
- https://bugs.launchpad.net/cinder/+bug/1928947 'Block Storage API V2 (DEPRECATED) - Still see v2 endpoints after disabling per documentation'. Unassigned.
- https://bugs.launchpad.net/cinder/+bug/1930773 'Block Storage API V3 (CURRENT) in cinder - remove Optional flag for key_size and cipher'. Assigned to Sofia Enriquez.
- https://bugs.launchpad.net/os-brick/+bug/1928331 'Some operating systems use /lib/udev/scsi_id to get the SCSI WWN error'. Unassigned.

Low: -

Incomplete:
- https://bugs.launchpad.net/cinder/+bug/1929810 'SVC: unable to create data volume using Compressed template'. Unassigned.

Cheers,
Sofia

-- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso -------------- next part -------------- An HTML attachment was scrubbed... URL:

From elod.illes at est.tech Wed Jun 9 20:07:11 2021
From: elod.illes at est.tech (Előd Illés)
Date: Wed, 9 Jun 2021 22:07:11 +0200
Subject: [all][stable][release] Ocata - End of Life
In-Reply-To: References: Message-ID:

Hi,

As I wrote in my previous mails, Ocata is not really maintained and its gates are broken. So now the Ocata-EOL transitioning patches [1] have been generated for the rest of the projects that were not transitioned yet.

Those teams who don't want to / cannot maintain their stable/ocata branch anymore:
* please review the transition patch and +1 it and
* clean up the unnecessary zuul jobs

If a patch is approved, the last patch on stable/ocata will be tagged with the *ocata-eol* tag. This can be checked out after stable/ocata is deleted.

[1] https://review.opendev.org/q/topic:ocata-eol

Thanks,

Előd

On 2021. 04. 20. 21:31, Előd Illés wrote: > Hi, > > Sorry, this will be long :) as there are 3 topics around old stable > branches and 'End of Life'. > > 1. Deletion of ocata-eol tagged branches > > With the introduction of Extended Maintenance process [1][2] some cycles > ago, the 'End of Life' (EOL) process also changed: > * branches were no longer EOL tagged and "mass-deleted" at the end of > maintenance phase > * EOL'ing became a project decision > * if a project decides to cease maintenance of a branch that is in > Extended Maintenance, then they can tag their branch with $series-eol > > However, the EOL-tagging process was not automated or redefined > process-wise, so that meant the branches that were tagged as EOL were > not deleted. Now (after some changing in tooling) Release Management > team finally will start to delete EOL-tagged branches. > > In this mail I'm sending a *WARNING* to consumers of old stable > branches, especially *ocata*, as we will start deleting the > *ocata-eol* tagged branches in a *week*. (And also newer *-eol branches > later on) > > > 2. Ocata branch > > Beyond the 1st topic we must clarify the future of Ocata stable branch > in general: tempest jobs became broken about ~ a year ago. That means > that projects had two ways forward: > > a. drop tempest testing to unblock gate > b. simply don't support ocata branch anymore > > As far as I see the latter one happened and stable/ocata became > unmaintained probably for every projects. > > So my questions are regarding this: > * Is any project still using/maintaining their stable/ocata branch? > * If not: can Release Team initiate a mass-EOL-tagging of stable/ocata? > > > 3.
> The 'next' old stable branches > > Some projects still support their Pike, Queens and Rocky branches. > These branches use Xenial and py2.7 and both are out of support. This > results broken gates time to time. Especially nowadays. These issues > suggest that these branches are closer and closer to being unmaintained. > So I call the attention of interested parties, who are for example > still consuming these stable branches and using them downstream to put > effort on maintaining the branches and their CI/gates. > > It is a good practice for stable maintainers to check if there are > failures in their projects' periodic-stable jobs [3], as those are > good indicators of the health of their stable branches. And if there > are, then try to fix it as soon as possible. > > > [1] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > [3] > http://lists.openstack.org/pipermail/openstack-stable-maint/2021-April/date.html > > > Thanks, > > Előd > > >

From skaplons at redhat.com Wed Jun 9 20:51:33 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Wed, 09 Jun 2021 22:51:33 +0200
Subject: Regarding Floating IP not reachable
In-Reply-To: References: Message-ID: <4503925.7tzfGOeFMO@p1>

Hi,

On Wednesday, June 9, 2021 21:14:17 CEST, Adivya Singh wrote: > Hello Team, > > I need a hint on where to check: often my floating IPs are not reachable > in OpenStack. It uses OVS-based networking, and most of the time, if I > change the router's L3 agent, it starts working. I can ping the gateway > from the qrouter namespace but cannot ping the actual floating IP. > > Regards > Adivya Singh

First of all, You need to know what type of router You are using: DVR, DVR-HA, HA or Legacy. Depending on that, You can look in different places to see why the FIP is not working.

For the HA or Legacy types, please start pinging the FIP from outside and try to check, e.g. with tcpdump, if packets are visible in the qrouter namespace on the qg- and qr- interfaces. If that is correct, try to check if You can ping Your fixed IP address from that qrouter namespace. With that test You should be able to figure out if the problem is somewhere in the external network (the one from which the FIP comes) or in the tenant network.

For DVR routers, the test should be similar, but there is also the fip- namespace on the compute node and You should start checking there.

Details about the different scenarios are also described in the deployment guide [1] - maybe that will be useful for You.

-- Slawek Kaplonski Principal Software Engineer Red Hat

[1] https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html
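As a rough illustration of the checks described above for an HA or legacy router, a debugging session might look like the sketch below (the router ID, interface suffixes and addresses are placeholders, not values taken from this thread; substitute the real ones from your deployment):

    # On the network node, find the namespace of the affected router
    sudo ip netns list | grep qrouter

    # The FIP should be configured on the qg- (external) interface;
    # qr- interfaces face the tenant networks
    sudo ip netns exec qrouter-<router-id> ip addr show

    # While pinging the FIP from outside, check whether the ICMP packets
    # are visible on the qg- and then on the qr- interface
    sudo ip netns exec qrouter-<router-id> tcpdump -n -e -i qg-<id> icmp
    sudo ip netns exec qrouter-<router-id> tcpdump -n -e -i qr-<id> icmp

    # Then test the tenant side by pinging the instance's fixed IP
    sudo ip netns exec qrouter-<router-id> ping -c 3 <fixed-ip>

    # For DVR, repeat the checks in the fip- namespace on the compute node
    sudo ip netns list | grep fip-

If packets stop at the qg- interface, the problem is on the external network side; if they reach the qr- interface but the fixed IP does not answer, look at the tenant network (or security groups) instead.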
From gmann at ghanshyammann.com Thu Jun 10 00:18:18 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 09 Jun 2021 19:18:18 -0500
Subject: [all][tc] Technical Committee next weekly meeting on June 10th at 1500 UTC
In-Reply-To: <179e8e52bd5.e35e022f316095.4772252122737314526@ghanshyammann.com>
References: <179e8e52bd5.e35e022f316095.4772252122737314526@ghanshyammann.com>
Message-ID: <179f348b4c6.f26451f4454213.7818415117551889676@ghanshyammann.com>

Hello Everyone,

Below is the agenda for tomorrow's TC meeting, scheduled for June 10th at 1500 UTC in the #openstack-tc IRC channel on OFTC.

- https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

== Agenda for tomorrow's TC meeting ==

* Roll call
* Follow up on past action items
* Gate health check (dansmith/yoctozepto)
** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/
* Xena tracker
** https://etherpad.opendev.org/p/tc-xena-tracker
* Migration from 'Freenode' to 'OFTC' (gmann)
** https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc
* Recommendation on moving the meeting channel to project channel
** https://review.opendev.org/c/openstack/project-team-guide/+/794839
* Open Reviews
** https://review.opendev.org/q/project:openstack/governance+is:open

-gmann

---- On Mon, 07 Jun 2021 18:53:23 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > NOTE: TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE) > > Technical Committee's next weekly meeting is scheduled for June 10th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, June 9th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > > -gmann > >

From melwittt at gmail.com Thu Jun 10 03:27:08 2021
From: melwittt at gmail.com (melanie witt)
Date: Wed, 9 Jun 2021 20:27:08 -0700
Subject: [nova][gate] openstack-tox-pep8 job broken
Message-ID:

Hi all,

The openstack-tox-pep8 job is currently failing with the following error:

> nova/crypto.py:39:1: error: Library stubs not installed for "paramiko" (or incompatible with Python 3.8)
> nova/crypto.py:39:1: note: Hint: "python3 -m pip install types-paramiko"
> nova/crypto.py:39:1: note: (or run "mypy --install-types" to install all missing stub packages)
> nova/crypto.py:39:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
> Found 1 error in 1 file (checked 23 source files)
> ERROR: InvocationError for command /usr/bin/bash tools/mypywrap.sh (exited with code 1)

Please hold your rechecks until the fix merges: https://review.opendev.org/c/openstack/nova/+/795533

Cheers,
-melanie

From skaplons at redhat.com Thu Jun 10 06:44:27 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 10 Jun 2021 08:44:27 +0200
Subject: [neutron] Drivers meeting - agenda for 11.06.2021
Message-ID: <24134040.SoyZhvRMfH@p1>

Hi,

The agenda for our tomorrow's meeting is at [1]. We have to discuss some details about an existing spec [2] and one new RFE [3].

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda
[2] https://review.opendev.org/c/openstack/neutron-specs/+/783791
[3] https://bugs.launchpad.net/neutron/+bug/1931100

-- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL:

From bkslash at poczta.onet.pl Thu Jun 10 07:23:41 2021
From: bkslash at poczta.onet.pl (bkslash)
Date: Thu, 10 Jun 2021 09:23:41 +0200
Subject: [glance] How to limit access to particular store
In-Reply-To: References: Message-ID:

Hi Erno, thank you for your answer. In the meantime I've figured out two other "workarounds":

1. I'll make a local file store (based on LVM) with an LVM volume of the size that I need for all my "public" (and protected) images, so there will be no more space to put any customer images there. If I need additional space, I'll extend the volume/filesystem to fit my new images. Customer images will go to the other, default store.

2. Of course, modifying filesystem permissions to RO on the store folder would also do the trick, but it would have to be changed back to RW each time I have to modify my images.

I think it would be useful to have the ability to block (via oslo.policy) reading some information (e.g. listing stores etc.) and to make stores read-only...

Best regards
Adam Tomaś

> On 9 Jun 2021, at 18:28, Erno Kuvaja wrote: > >  >> On Fri, Jun 4, 2021 at 1:56 PM at wrote: >> Hi, >> I have Glance with multi-store config and I want one store (not default) to be read-only for everyone except cloud Admin. How can I do it? Is there any way to limit store names visibility (which are visible i.e. in properties section of "openstack image show IMAGE_NAME" output)? >> Best regards >> Adam Tomas >> > Hi Adam, > > Such limitations are not possible at the moment. The only way to really do this if needed is to expose that "admin only" storage as a local web server and use the http store with locations api exposed to said users only. > > - jokke -------------- next part -------------- An HTML attachment was scrubbed... URL:

From bdobreli at redhat.com Thu Jun 10 09:16:28 2021
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Thu, 10 Jun 2021 11:16:28 +0200
Subject: [tripleo] Changing TripleO's release model
In-Reply-To: <4450b0fa40e3decf06f365c10be5e370af923e3b.camel@redhat.com>
References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> <4450b0fa40e3decf06f365c10be5e370af923e3b.camel@redhat.com>
Message-ID:

On 6/9/21 3:57 PM, Sean Mooney wrote: > On Wed, 2021-06-09 at 15:17 +0200, Alfredo Moralejo Alonso wrote: >> On Wed, Jun 9, 2021 at 1:49 PM Sean Mooney wrote: >> >>> On Wed, 2021-06-09 at 12:06 +0300, Marios Andreou wrote: >>>> On Wednesday, June 9, 2021, Alfredo Moralejo Alonso >>> >>>> wrote: >>>> >>>>> >>>>> >>>>> On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon >>> wrote: >>>>> >>>>>> Thanks for making the announcement. Can you clarify how the >>>>>> feature-freeze dates will be communicated to the greater community of >>>>>> contributors? >>>>>> >>>>>> - Dan Sneddon >>>>>> >>>>>> On Jun 8, 2021, at 8:21 AM, Wesley Hayutin >>> wrote: >>>>>> >>>>>>  >>>>>> >>>>>> Greetings TripleO community! >>>>>> >>>>>> At the most recent TripleO community meetings we have discussed >>> formally >>>>>> changing the OpenStack release model for TripleO [1]. The previous >>>>>> released projects can be found here [2]. TripleO has previously >>> released >>>>>> with release-type[‘trailing’, ‘cycle-with-intermediary’].
>>>>>> >>>>>> To quote the release model doc: >>>>>> >>>>>> ‘Trailing deliverables trail the release, so they cannot, by >>> definition, >>>>>> be independent. They need to pick between cycle-with-rc >>>>>> < >>> https://releases.openstack.org/reference/release_models.html#cycle-with-rc >>>> >>>>>> or cycle-with-intermediary >>>>>> < >>> https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary >>>> >>>>>> models.’ >>>>>> >>>>>> We are proposing to update the release-model to ‘independent’. This >>>>>> would give the TripleO community more flexibility in when we choose >>> to cut >>>>>> a release. In turn this would mean less backporting, less upstream >>> and 3rd >>>>>> party resources used by potentially some future releases. >>>>>> >>>>>> >>>>> What does this change mean in terms of branches and compatibility for >>>>> OpenStack stable releases?. >>>>> >>>>> >>>> >>>> >>>> as i wrote to Dan just now the main thing is that we may delay or even >>> skip >>>> a particular branch. For compatibility I guess it means we would have to >>>> rely on git tags so perhaps making consistently frequent (eg monthly? or >>>> more?) releases for all the tripleo repos. You could then call a >>> particular >>>> range of tags as being compatible with stable/Y for example. Does it >>> sound >>>> sane/doable from an rdo package build perspective? >>>> >>> too me this feels like we are leaking downstream product lifecycle into >>> upstream. >>> even if redhat is overwhelmingly the majority contibutor of reviews and >>> commits to >>> ooo im not sure that changing the upstream lifestyle to align more closely >>> with our product life >>> cycle is the correct thing to do. >>> >>> at least while tripleo is still in the Openstack namespaces and not the x >>> namespaces. >>> Skipping upstream release is really quite a radical departure form the >>> project original goals. >>> i think it would also be counter productive to our downstream efforts to >>> move our testing close to upstream. >>> if ooo was to lose the ablity to test master for example we would not be >>> able to use ooo in our downstream ci to test >>> feature that we plan to release osp n+1 that are develop during an >>> upstream cycle that wont be productised. >>> >>> i do not work on ooo so at the end of the day this wont affect me much but >>> to me skipping releases seam counter intuitive >>> given the previous efforts to make ooo more usable for development and ci. >>> Moving to independent >>> to decouple the lifecycle seams more reasonable if the underlying goal is >>> not to skip releases. you can release when ready >>> rather then scrambling or wating for a deadline. personally i think moving >>> in the other direction so that ooo can release sooner >>> not later would make the project more appealing as the delay in support of >>> a release is often considered a detractor for tripleo vs >>> other openstack installers. >>> >>> i would hope that this change would not have any effect on the rdo >>> packaging of non ooo packages. >>> the rdo packages are used by other instalation methods (the puppet moduels >>> for example) including i belive some of the larger chineese providers that >>> have written there own installers. i think it would be damaging to centos >>> if rdo was to skip upstream version of say nova. what might need to change >>> is the packaging of ooo itself in rdo. 
>>> >>> tl;dr im not against the idea of ooo moving to independent model but i >>> would hope that it will not affect RDO's packaging of non ooo projects and >>> that >>> ooo can still be used for ci of master and stable branches of for example >>> nova. >>> >>> >> >> RDO has no plans on skipping releases or any other changes affecting >> non-tripleo packages. The impact of this change (unclear at this point) >> should only affect the packages for those repos. > ack >> >> Note that RDO aims at being used and useful for other users and deployment >> tools as Puppet modules, Kolla, or others willing to work in CentOS and >> we'd like to maintain the collaboration with them as needed. > ya that is what i was expecting. thanks for confirming. > provided the possible change in ooo direction does not negatively impact the other > consumes of rdo i dont really have an objection to ooo changing how they work if peolel think it will > make there lives and there customer live simpler in the long run. I'm sceptical about if that makes lives simpler. As we learned earlier in the topic, "stable" tags would still require maintenance branches to be managed manually (with only a 3rd side CI available for that?). And manual solving of drifting dependencies collisions (since no more appropriate requirements-checks automation for independently released tripleo?). Finally, openstack puppet modules and puppet-tripleo look too much specific to OpenStack configuration options, that may drift a lot from release to a release, to be independently released. > > as i said i do not work on or use ooo frequently but i have consumed the output of rdo > via kolla in the past and while i typeically prefer using the source install i know many > do use the centos binary install variant using the rdo packages. > >> >> Regards, >> >> Alfredo >> >> >>> regards >>> sean >>> >>>> >>>> regards, marios >>>> >>>> >>>> >>>> >>>>> To quote the release model doc: >>>>>> >>>>>> ‘Some projects opt to completely bypass the 6-month cycle and release >>>>>> independently. For example, that is the case of projects that >>> support the >>>>>> development infrastructure. The “independent” model describes such >>>>>> projects.’ >>>>>> >>>>>> The discussion here is to merely inform the greater community with >>>>>> regards to the proposal and conversations regarding the release >>> model. >>>>>> This thread is NOT meant to discuss previous releases or their >>> supported >>>>>> status, merely changing the release model here [3] >>>>>> >>>>>> >>>>>> [0] https://etherpad.opendev.org/p/tripleo-meeting-items >>>>>> >>>>>> [1] https://releases.openstack.org/reference/release_models.html >>>>>> >>>>>> [2] https://releases.openstack.org/teams/tripleo.html >>>>>> >>>>>> [3] https://opendev.org/openstack/releases/src/branch/master/ >>>>>> deliverables/xena >>>>>> >>>>>> >>>> >>> >>> >>> > > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Thu Jun 10 09:22:05 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 10 Jun 2021 11:22:05 +0200 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> Message-ID: <5f01946e-5576-0097-62ee-822967ba0a27@redhat.com> On 6/9/21 3:58 PM, James Slagle wrote: > > > On Wed, Jun 9, 2021 at 7:54 AM Sean Mooney > wrote: > > too me this feels like we are leaking downstream product lifecycle > into upstream. 
> even if redhat is overwhelmingly the majority contibutor of reviews > and commits to > ooo im not sure that changing the upstream lifestyle to align more > closely with our product life > cycle is the correct thing to do. > > > I wouldn't characterize it as "leaking". Instead, we are aiming to > accurately reflect what we intend to support as a community (not as any > company), based on who we have working on this project. > > Unfortunately the reality is that no one (community or otherwise) should > use TripleO from rocky/stein/ussuri/victoria other than for dev/test in > my opinion. There's no upgrade path from any of those releases, and the > community isn't working on one. However, that is not clearly represented > by the project. While we don't intend to retrofit this policy to past > branches, part of proposing this change is to help clear that up going > forward. >   > > at least while tripleo is still in the Openstack namespaces and not > the x namespaces. > > > I don't think I understand. At least..."what"? How is the OpenStack > namespace related to release models? How does the namespace (which is a > construct of how git repositories are organized aiui), have a relation > to what is included in an OpenStack release? >   > > Skipping upstream release is really quite a radical departure form > the project original goals. > > > I disagree with how you remember the history, and I think this is an > overstatement. >   > > i think it would also be counter productive to our downstream > efforts to move our testing close to upstream. > > if ooo was to lose the ablity to test master for example we would > not be able to use ooo in our downstream ci to test > feature that we plan to release osp n+1 that are develop during an > upstream cycle that wont be productised. > > > I don't follow the premise. How is it counterproductive to move our > testing close to upstream? > We'd always continue to test master. When it comes time for OpenStack to > branch, such as to create stable/xena in all the service projects, > TripleO may choose not to branch, and I think at that point, TripleO > would no longer have CI jobs running on stable/xena of those service > projects. Since TripleO does not follow the stable branch policy, isn't the same possible as well today without switching to the independent release model? >   > > > i do not work on ooo so at the end of the day this wont affect me > much but to me skipping releases seam counter intuitive > given the previous efforts to make ooo more usable for development > and ci. Moving to independent > to decouple the lifecycle seams more reasonable if the underlying > goal is not to skip releases. you can release when ready > rather then scrambling or wating for a deadline. > > > I think the "when ready" is part of the problem here. For example, one > might look at when we released stable/victoria and claim TripleO was > ready. However, when TripleO victoria was released, you could not > upgrade from ussuri to victoria. Likewise, you can't upgrade from > victoria to wallaby. Were we really ready to release? My take is that we > shouldn't have released at all. I think it sends a false signal. > > An alternative to this entire proposal would be to double down on making > TripleO more fully support each OpenStack stable branch. 
That would mean > adding update/upgrade jobs for each stable branch, and doing the > development work to actually implement that support (if it's not tested, > it's broken), as well as likely adding other missing jobs instead of > de-emphasizing testing on these branches. > > AIUI, we do not want to be adding additional CI jobs of that scale > upstream, especially given the complaints about node usage from TripleO > from earlier in this cycle. And, we do not have the humans to develop > and maintain this additional work. >   > > personally i think moving in the other direction so that ooo can > release sooner > not later would make the project more appealing as the delay in > support of a release is often considered a detractor for tripleo vs > other openstack installers. > > > I think moving to the independent model does enable us to consider > releasing sooner. >   > > > i would hope that this change would not have any effect on the rdo > packaging of non ooo packages. > the rdo packages are used by other instalation methods (the puppet > moduels for example) including i belive some of the larger chineese > providers that > have written there own installers. i think it would be damaging to > centos if rdo was to skip upstream version of say nova. what might > need to change > is the packaging of ooo itself in rdo. > > tl;dr im not against the idea of ooo moving to independent model but > i would hope that it will not affect RDO's packaging of non ooo > projects and that > ooo can still be used for ci of master and stable branches of for > example nova. > > > We'd continue to always CI master. > > Not all stable branches would remain covered by TripleO. For example, if > TripleO didn't branch and release for xena, you wouldn't have TripleO > jobs on nova stable/xena patches. Those jobs don't provide any > meaningful feedback for TripleO. Perhaps they do for nova as you are > backporting a change through every branch, and you're final destination > is a branch where TripleO is expected to be working, such as wallaby. > You would want to know if the change broke on xena for example, or if it > were something on wallaby. I can see how that would be useful for nova. > > However, part of what we're saying is that TripleO is trying to simplify > what we are supporting and testing, so we can get better at supporting > the releases that are most important to our community. Yes, there is > some downstream influence here, in the same way that TripleO doesn't > support deploying with Ubuntu, because it is less important to our > (TripleO) community. I think that's ok, and I see nothing wrong with it. > > If the service projects (such as nova) want to commit additional > resources and the upstream CI node count can handle the increase that > properly supporting each stable branch implies, then I think we can > weigh that option as well. However, the current status quo of more or > less just checking the boxes on each stable branch is wrong and sends a > false message in my opinion. That's a big part of what we're trying to > correct. > > -- > -- James Slagle > -- -- Best regards, Bogdan Dobrelya, Irc #bogdando From dpeacock at redhat.com Thu Jun 10 12:35:15 2021 From: dpeacock at redhat.com (David Peacock) Date: Thu, 10 Jun 2021 08:35:15 -0400 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac> Message-ID: +1 from me but I have no power here; symbolic approval of Sandeep! 
On Wed, Jun 9, 2021 at 5:20 AM Marios Andreou wrote:
> thanks all for voting ... yes said I would add him in yesterday's irc
> meeting but weshay beat me to it ;)
>
> I just checked and see ysandeep is now in the core reviewers group
> https://review.opendev.org/admin/groups/0319cee8020840a3016f46359b076fa6b6ea831a,members
>
> ysandeep go +2 all the CI things !
>
> regards, marios
>
> On Wednesday, June 9, 2021, Wesley Hayutin wrote:
>
>> Seeing no objections....
>>
>> Congrats Sandeep :)
>>
>> On Fri, Jun 4, 2021 at 2:31 AM Gaël Chamoulaud wrote:
>>
>>> Of course, a big +1!
>>>
>>> On 02/Jun/2021 14:17, Marios Andreou wrote:
>>> > Hello all
>>> >
>>> > Having discussed this with some members of the tripleo ci team
>>> > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc:
>>> > ysandeep) for core on the tripleo-ci repos (tripleo-ci,
>>> > tripleo-quickstart and tripleo-quickstart-extras).
>>> >
>>> > Sandeep joined the team about 1.5 years ago and has from the start
>>> > demonstrated his eagerness to learn and an excellent work ethic,
>>> > having made many useful code submissions [1] and code reviews [2] to
>>> > the CI repos and beyond. Thanks Sandeep and keep up the good work!
>>> >
>>> > Please reply to this mail with a +1 or -1 for objections in the usual
>>> > manner. If there are no objections we can declare it official in a few
>>> > days
>>> >
>>> > regards, marios
>>> >
>>> > [1] https://review.opendev.org/q/owner:sandeepyadav93
>>> > [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180
>>>
>>> Best Regards,
>>> Gaël
>>>
>>> --
>>> Gaël Chamoulaud - (He/Him/His)
>
> --
> _sent from my mobile - sorry for spacing spelling etc_
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tkajinam at redhat.com  Thu Jun 10 14:04:27 2021
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Thu, 10 Jun 2021 23:04:27 +0900
Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ?
Message-ID: 

Hi All,

I've been working on bug 1926693[1], and am lost about the reasonable
solutions we expect. Ideally I'd need to bring this topic to the team
meeting, but because of the timezone gap and the complicated background
I'd like to gather some feedback on the ML first.

[1] https://bugs.launchpad.net/neutron/+bug/1926693

TL;DR
Which one (or ones) would be reasonable solutions for this issue ?
 (1) https://review.opendev.org/c/openstack/neutron/+/763563
 (2) https://review.opendev.org/c/openstack/neutron/+/788893
 (3) Implement something different

The issue I reported in the bug is that there is an inconsistency between
nova and neutron about the way to determine a hypervisor name.
Currently neutron uses socket.gethostname() (which always returns the
shortname) to determine the hypervisor name used to search for the
corresponding resource provider.
On the other hand, nova uses libvirt's getHostname function (if the
libvirt driver is used), which returns a canonical name. A canonical name
can be a shortname or an FQDN (*1), and if an FQDN is used then neutron
and nova never agree.

(*1)
IMO this is likely to happen in real deployments. For example, TripleO
uses FQDNs for canonical names.

Neutron already provides the resource_provider_hypervisors option to
override the hypervisor name used. However, because this option accepts
a map between interface and hypervisor, setting this parameter requires a
very redundant description, especially when a compute node has multiple
interfaces/bridges.
The following example shows how redundant the current requirement is.
~~~
[OVS]
resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\
br-data3:1024:1024,br-data4:1024:1024
resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\
compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain
~~~

I've submitted a change to propose a new single parameter to override
the base hypervisor name, but this is currently -2ed, mainly because
I lacked analysis of the root cause of the mismatch when I proposed this.
 (1) https://review.opendev.org/c/openstack/neutron/+/763563

On the other hand, I submitted a different change to neutron which
implements the logic to get a hypervisor name in a way that is fully
compatible with libvirt. While this would save users from even overriding
hypervisor names, I'm aware that this might break other virt drivers
which depend on a different logic to generate a hypervisor name. IMO the
patch is still useful considering the libvirt driver would be the most
popular option now, but I'm not fully aware of the impact on the other
drivers, especially because I don't know which virt drivers support the
minimum QoS feature now.
 (2) https://review.opendev.org/c/openstack/neutron/+/788893/

In the review of (2), Sean mentioned implementing a logic to determine
an appropriate resource provider (3) even if there is a mismatch in host
name format, but I'm not sure how I would implement that, tbh.

My current thought is to merge (1) as a quick solution first, and then
discuss whether we should merge (2), but I'd like to ask for some
feedback about this plan (like: we should NOT merge (2)).

I'd appreciate your thoughts about this $topic.

Thank you,
Takashi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hberaud at redhat.com  Thu Jun 10 14:35:21 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Thu, 10 Jun 2021 16:35:21 +0200
Subject: PTO - Friday 11
Message-ID: 

Hello,

I'm on PTO tomorrow (June 11). See you on Monday

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-----BEGIN PGP SIGNATURE-----

wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+
Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+
RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP
F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G
5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g
glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw
m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ
hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0
qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y
F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3
B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O
v6rDpkeNksZ9fFSyoY2o
=ECSj
-----END PGP SIGNATURE-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ricolin at ricolky.com  Thu Jun 10 17:35:09 2021
From: ricolin at ricolky.com (Rico Lin)
Date: Fri, 11 Jun 2021 01:35:09 +0800
Subject: [tc][all] Test support for TLS default
Message-ID: 

Dear all

In short, can you help to enable tls-proxy for your test jobs and
fix/report the issue in [4]? Or does it not make sense for your project?
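For most projects, enabling it should just mean flipping (or removing) the
devstack_services override in the job definition. A minimal sketch — the
job and parent names here are only illustrative, your own jobs will differ:

~~~
- job:
    name: my-project-tempest     # illustrative job name
    parent: devstack-tempest
    vars:
      devstack_services:
        tls-proxy: true          # or simply drop an existing "tls-proxy: false"
~~~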
Here are all the repositories containing jobs with tls-proxy disabled:

- neutron
- neutron-tempest-plugin
- cinder-tempest-plugin
- cyborg-tempest-plugin
- ec2api-tempest-plugin
- freezer-tempest-plugin
- grenade
- heat
- js-openstack-lib
- keystone
- kuryr-kubernetes
- masakari
- murano
- networking-odl
- networking-sfc
- python-brick-cinderclient-ext
- python-neutronclient
- python-zaqarclient
- sahara
- sahara-dashboard
- sahara-tests
- solum
- tacker
- telemetry-tempest-plugin
- trove
- trove-tempest-plugin
- vitrage-tempest-plugin
- watcher

As I'm looking for potential Y-cycle goals, I found that tls-proxy support
is not actually ready OpenStack-wide (you can find some discussion in [3]).
We have multiple projects that disable tls-proxy in test jobs [1] (and have
stayed that way for a long time).
Because of the security concerns, I'm currently collecting the missing
pieces for this and trying to figure out whether there are any infra
issues in the current jobs. I have attempted to enable tls-proxy for some
projects to check the status, and the test results ([2]) suggest that we
might have bugs or test infra issues in some projects.
So I invite the projects that still have not switched to TLS by default:
please do, and help to fix/report the issues you're facing, as we
definitely need more help figuring out the actual situation in each
project. I created an etherpad [4] to track actions and related
information.

Meanwhile, I will attempt to enable tls-proxy on more test jobs (you will
be able to find them in [2]), which gives us a good chance to review the
logs and see how we might fix things and enable TLS by default.

[1] https://codesearch.opendev.org/?q=tls-proxy%3A%20false&i=nope&files=&excludeFiles=&repos=
[2] https://review.opendev.org/q/topic:%22exame-tls-proxy%22+(status:open%20OR%20status:merged)
[3] https://etherpad.opendev.org/p/community-goals
[4] https://etherpad.opendev.org/p/support-tls-default

*Rico Lin*
OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL,
Senior Software Engineer at EasyStack
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From forums at mossakowski.ch  Wed Jun 9 11:08:53 2021
From: forums at mossakowski.ch (forums at mossakowski.ch)
Date: Wed, 09 Jun 2021 11:08:53 +0000
Subject: [Neutron] sriov network setup for victoria - clarification needed
In-Reply-To: 
References: 
Message-ID: <5AGF_ceXEUd_hR-Qhz8SHZzx6QrTTYSwcDdd47nuU8rG8W2cX4W9WmwELNTIv4qluwtOd0vKVjav88gQwMbltmcPaV_eE8WPLHiw3iKJN4s=@mossakowski.ch>

Thanks for the support! I've patched pyroute2 and now I'm able to attach
sriov ports to running VMs. Cheers!

Piotr Mossakowski
Sent from ProtonMail mobile

-------- Original Message --------
On 3 Jun 2021, 15:05, Lajos Katona <katonalala at gmail.com> wrote:

> Hi,
> 0.6.3 has another increase for the DEFAULT_RCVBUF:
> https://github.com/svinota/pyroute2/issues/813
>
> Regards
> Lajos Katona (lajoskatona)
>
> Rodolfo Alonso Hernandez <ralonsoh at redhat.com> wrote (on 3 Jun 2021,
> Thu, 9:16):
>
> > Hi Piotr:
> >
> > I think you are hitting [1]. As you said, each PF has 63 VFs
> > configured. Your error looks very similar to this one reported.
> > Try updating pyroute2 to version 0.6.2. That should contain the fix
> > for this error.
> >
> > Regards.
> >
> > [1] https://github.com/svinota/pyroute2/issues/751
> >
> > On Thu, Jun 3, 2021 at 12:06 AM <forums at mossakowski.ch> wrote:
> >
> > > Thank you very much, Alonso, for your help!
> > >
> > > I've commented out the decorator line, a new exception popped out,
> > > and I've updated my gist:
> > > https://gist.github.com/8e6272cbe7748b2c5210fab291360e0b
> > >
> > > BR,
> > > Piotr Mossakowski
> > > Sent from ProtonMail mobile
> > >
> > > -------- Original Message --------
> > > On 31 May 2021, 18:08, Rodolfo Alonso Hernandez <ralonsoh at redhat.com> wrote:
> > >
> > > > Hello Piotr:
> > > >
> > > > Maybe you should update the pyroute2 library, but this is a blind shot.
> > > >
> > > > What I recommend you do is to find the error you have when retrieving
> > > > the interface VFs. In the same compute node, use this method [1] but
> > > > remove the decorator [2]. Then, in a root shell, run python again:
> > > >
> > > > >>> from neutron.privileged.agent.linux import ip_lib
> > > > >>> ip_lib.get_link_vfs('ens2f0', '')
> > > >
> > > > That will execute the pyroute2 code without the privsep decorator.
> > > > You'll see what error the method is returning.
> > > >
> > > > Regards.
> > > >
> > > > [1] https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L396-L410
> > > > [2] https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L395
> > > >
> > > > On Mon, May 31, 2021 at 5:50 PM <forums at mossakowski.ch> wrote:
> > > >
> > > > > Hello,
> > > > > I have two victoria environments:
> > > > > 1) a working one, standard setup with separate dedicated interface
> > > > > for sriov (pt0 and pt1)
> > > > > 2) a broken one, where I'm trying to reuse one of the already used
> > > > > interfaces (ens2f0 or ens2f1) for sriov. ens2f0 is used for several
> > > > > VLANs (mgmt and storage) and ens2f1 is a neutron external interface
> > > > > which I bridged for VLAN tenant networks. On both I have enabled 63
> > > > > VFs; it's a standard intel 10Gb x540 adapter.
> > > > >
> > > > > On the broken environment, when I'm trying to boot a VM with a sriov
> > > > > port that I created before, I see this error shown on below gist:
> > > > > https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b
> > > > >
> > > > > I'm investigating this for couple days now but I'm out of ideas so
> > > > > I'd like to ask for your support. Is this possible to achieve what
> > > > > I'm trying to do on 2nd environment? To use PF as normal interface
> > > > > and use its VFs for sriov-agent at the same time?
> > > > > Regards,
> > > > > Piotr Mossakowski
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: publickey - EmailAddress(s=forums at mossakowski.ch) - 0xDC035524.asc
Type: application/pgp-keys
Size: 648 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 294 bytes
Desc: OpenPGP digital signature
URL: 

From levonmelikbekjan at yahoo.de  Thu Jun 10 15:21:53 2021
From: levonmelikbekjan at yahoo.de (levonmelikbekjan at yahoo.de)
Date: Thu, 10 Jun 2021 17:21:53 +0200
Subject: AW: AW: Customization of nova-scheduler
In-Reply-To: <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com>
References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de>
 <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de>
 <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com>
 <000001d75612$470021b0$d5006510$@yahoo.de>
 <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com>
Message-ID: <000601d75e0c$586ce8f0$0946bad0$@yahoo.de>

Hi Stephen,

I'm trying to customize my nova scheduler. However, if I change the
nova.conf as described here
https://docs.openstack.org/operations-guide/de/ops-customize-compute.html,
then my python file cannot be found. How can I configure it correctly?
Do you have any idea?

My controller node is running CentOS 7. I couldn't install devstack
because it is only supported for CentOS 8.

Best regards
Levon

-----Original Message-----
From: Stephen Finucane
Sent: Monday, 31 May 2021 18:21
To: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org
Subject: Re: AW: Customization of nova-scheduler

On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote:
> Hello Stephen,
>
> I am a student from Germany who is currently working on his bachelor
> thesis. My job is to build a cloud solution for my university with
> OpenStack. The functionality should include the prioritization of users.
> So that you can imagine exactly how the whole thing should work, I would
> like to give you an example.
>
> Two cases should be solved!
>
> Case 1: A user A with a low priority uses a VM from OpenStack with half
> the performance of the available host. Then user B comes in with a high
> priority and needs the full performance of the host for his VM. When
> creating the VM of user B, the VM of user A should be deleted because
> there is not enough compute power for user B. The VM of user B is
> successfully created.
>
> Case 2: A user A with a low priority uses a VM with half the performance
> of the available host, then user B comes in with a high priority and
> needs half of the performance of the host for his VM.
When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. What you're describing is commonly referred to as "preemptible" or "spot" instances. This topic has a long, complicated history in nova and has yet to be implemented. Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova. > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art. As noted above, there is lots of information on the internet about this. With this research done, you'll need to decide whether this is something you want to approach within nova itself, via out-of-tree extensions or via a third party project. If you're opting for integration with nova, then you'll need to think long and hard about how you would design such a system and start working on a spec (a design document) outlining your proposed solution. Details on how to write a spec are discussed at [1]. The only extension points nova offers today are scheduler filters and weighers so your options for an out-of-tree extension approach will be limited. A third party project will arguably be the easiest approach but you will be restricted to talking to nova's REST APIs which may limit the design somewhat. This Blazar spec [2] could give you some ideas on this approach (assuming it was never actually implemented, though it may well have been). > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? This could potentially work, but I suspect there will be serious performance implications with this, particularly at scale. 
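For a sense of the extension point itself, a filter is just a class with a
host_passes() method. A minimal sketch — the class name and the extra spec
key here are made up for illustration, not an existing nova filter:

~~~
from nova.scheduler import filters


class PriorityAwareFilter(filters.BaseHostFilter):
    """Illustrative sketch of the scheduler filter plugin interface."""

    def host_passes(self, host_state, spec_obj):
        # spec_obj is the RequestSpec being scheduled; flavor extra
        # specs are one simple place to carry a priority hint.
        priority = spec_obj.flavor.extra_specs.get('priority:level', '0')
        # A real implementation would compare this priority against the
        # priority of instances already running on host_state.
        return True
~~~

Custom filters are then enabled via the '[filter_scheduler] enabled_filters'
option in nova.conf.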
Scheduler filters are historically used for simple things like "find me a
group of hosts that have this metadata attribute I set on my image".
Making API calls sounds like something that would take significant time
and therefore slow down the scheduling process. You'd also have to decide
what your heuristic for deciding which VM(s) to delete would be, since
there's nothing obvious in nova that you could use. You could use
something as simple as filter extra specs or something as complicated as
an external service.

This should be lots to get you started. Once again, do make sure you're
aware of what you're getting yourself into before you start. This could
get complicated very quickly :)

Cheers,
Stephen

> I'm very happy to have found you!!!
>
> Thank you really much for your time!

[1] https://specs.openstack.org/openstack/nova-specs/readme.html
[2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html

> Best regards
> Levon
>
> -----Original Message-----
> From: Stephen Finucane
> Sent: Monday, 31 May 2021 12:34
> To: Levon Melikbekjan <levonmelikbekjan at yahoo.de>;
> openstack at lists.openstack.org
> Subject: Re: Customization of nova-scheduler
>
> On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote:
> > Hello OpenStack team,
> >
> > is it possible to customize the nova-scheduler via Python? If yes, how?
>
> Yes, you can provide your own filters and weighers. This is documented
> at [1].
>
> Hope this helps,
> Stephen
>
> [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter
>
> > Best regards
> > Levon

From peter.matulis at canonical.com  Thu Jun 10 19:51:55 2021
From: peter.matulis at canonical.com (Peter Matulis)
Date: Thu, 10 Jun 2021 15:51:55 -0400
Subject: [docs] Double headings on every page
In-Reply-To: 
References: 
Message-ID: 

Hi Stephen. Did you ever get to circle back to this?

On Fri, May 14, 2021 at 7:34 AM Stephen Finucane wrote:

> On Tue, 2021-05-11 at 11:14 -0400, Peter Matulis wrote:
>
> Hi, I'm hitting an oddity in one of my projects where the titles of all
> pages show up twice.
>
> Example:
> https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/wallaby/app-nova-cells.html
>
> Source file is here:
> https://opendev.org/openstack/charm-deployment-guide/src/branch/master/deploy-guide/source/app-nova-cells.rst
>
> Does anyone see what can be causing this? It appears to happen only for
> the current stable release ('wallaby') and 'latest'.
>
> Thanks,
> Peter
>
>
> I suspect you're bumping into issues introduced by a new version of Sphinx
> or docutils (new versions of both were released recently).
>
> Comparing the current nova docs [1] to what you have, I see the duplicate
> <h1>

> element is present but hidden by the following CSS rule:
>
>     .docs-body .section h1 {
>         display: none;
>     }
>
> That works because we have the following HTML in the nova docs:
>
>     <div class="section" id="extra-specs">
>         <h1>Extra Specs</h1>
>         ...
>     </div>
>
> while the docs you linked are using the HTML5 semantic '<section>' tag:
>
>     <section id="nova-cells">
>         <h1>Nova Cells</h1>
>         ...
>     </section>
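> A companion rule for the new markup would presumably be needed as well,
> something along the lines of (untested):
>
>     .docs-body section h1 {
>         display: none;
>     }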
> > > So to fix this, we'll have to update the openstackdocstheme to handle > these changes. I can try to take a look at this next week but I really > wouldn't mind if someone beat me to it. > > Stephen > > [1] https://docs.openstack.org/nova/latest/configuration/extra-specs.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Jun 11 06:49:32 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 11 Jun 2021 08:49:32 +0200 Subject: [tc][all] Test support for TLS default In-Reply-To: References: Message-ID: <2745966.TcCWilPskB@p1> Hi, Dnia czwartek, 10 czerwca 2021 19:35:09 CEST Rico Lin pisze: > Dear all > > In short, > can you help to enable tls-proxy for your test jobs and fix/report the > issue in [4]? Or it makes no sense for you? > Here's all repositories contains jobs with tls-proxy disabled: > > - neutron > - neutron-tempest-plugin > - cinder-tempest-plugin > - cyborg-tempest-plugin > - ec2api-tempest-plugin > - freezer-tempest-plugin > - grenade > - heat > - js-openstack-lib > - keystone > - kuryr-kubernetes > - masakari > - murano > - networking-odl > - networking-sfc > - python-brick-cinderclient-ext > - python-neutronclient > - python-zaqarclient > - sahara > - sahara-dashboard > - sahara-tests > - solum > - tacker > - telemetry-tempest-plugin > - trove > - trove-tempest-plugin > - vitrage-tempest-plugin > - watcher > > As I'm looking for y-cycle potential goals, I found the tls-proxy support > is not actually ready OpenStack wide (you can find some discussion in [3]). > We have multiple projects that disable tls-proxy in test jobs [1] (and stay > that way for a long time). > For security concerns, I'm currently collecting the missing part for this. > And try to figure out if there is any infra issue for current jobs. > After I attempt to enable tls-proxy for some projects to check the status. > And from the test result shows ([2]), We might have bugs/test infra issues > in projects. > So I invite projects who still have not switched to TLS default. Please do, > and help to fix/report the issue you're facing. > As we definitely need some more help on figuring out the actual situation > on each project. > So I created an etherpad [4] to track actions or related information. > > Meanwhile, I will attempt to enable tls-proxy on more test jobs (and you > will be able to find it in [2]). Which gives us a good chance to review the > logs and see how we might get chances to fix it and enable TLS by default. > > > [1] > https://codesearch.opendev.org/?q=tls-proxy%3A%20false&i=nope&files=&excludeFiles=&repos= > [2] > https://review.opendev.org/q/topic:%22exame-tls-proxy%22+ (status:open%20OR%20status:merged) > [3] https://etherpad.opendev.org/p/community-goals > [4] https://etherpad.opendev.org/p/support-tls-default > > *Rico Lin* > OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, > Senior Software Engineer at EasyStack Thx Rico for that. I just sent patch for neutron-tempest-plugin and will check how it works for neutron jobs. Good thing is that in many jobs we already have it enabled for long time so I hope there will be no many issues there :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 484 bytes Desc: This is a digitally signed message part. 
URL: From ralonsoh at redhat.com Fri Jun 11 07:57:27 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 11 Jun 2021 09:57:27 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: Hello Takashi and Neutrinos: First of all, thank you for working on this. Currently users have the ability to override the host name using "resource_provider_hypervisors". That means this parameter is always configurable; IMO we are safe on this. The problem we have is how we should retrieve this host name if "resource_provider_hypervisors" is not provided. I think the solution could be a combination of: - A first patch providing the ability to select the hypervisor type. The default one could be "libvirt". Each driver can have a particular host name retrieval implementation. The default one will be the implemented right now: "socket.gethostname()" - https://review.opendev.org/c/openstack/neutron/+/788893, providing full compatibility for libvirt. Those are my two cents. Regards. On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami wrote: > Hi All, > > > I've been working on bug 1926693[1], and am lost about the reasonable > solutions we expect. Ideally I'd need to bring this topic in the team > meeting > but because of the timezone gap and complicated background, I'd like to > gather some feedback in ml first. > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > TL;DR > Which one(or ones) would be reasonable solutions for this issue ? > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > (3) Implement something different > > The issue I reported in the bug is that there is an inconsistency between > nova and neutron about the way to determine a hypervisor name. > Currently neutron uses socket.gethostname() (which always returns > shortname) > to determine a hypervisor name to search the corresponding resource > provider. > On the other hand, nova uses libvirt's getHostname function (if libvirt > driver is used) > which returns a canonical name. Canonical name can be shortname or FQDN > (*1) > and if FQDN is used then neutron and nova never agree. > > (*1) > IMO this is likely to happen in real deployments. For example, TripelO uses > FQDN for canonical names. > > Neutron already provides the resource_provider_defauly_hypervisors option > to override a hypervisor name used. However because this option accepts > a map between interface and hypervisor, setting this parameter requires > very redundant description especially when a compute node has multiple > interfaces/bridges. The following example shows how redundant the current > requirement is. > ~~~ > [OVS] > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ > br-data3:1024,1024,br-data4,1024:1024 > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > ~~~ > > I've submitted a change to propose a new single parameter to override > the base hypervisor name but this is currently -2ed, mainly because > I lacked analysis about the root cause of mismatch when I proposed this. > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > On the other hand, I submitted a different change to neutron which > implements > the logic to get a hypervisor name which is fully compatible with libvirt. 
> While this would save users from even overriding hypervisor names, I'm
> aware that this might break the other virt driver which depends on a
> different logic to generate a hypervisor name. IMO the patch is still
> useful considering the libvirt driver would be the most popular option
> now, but I'm not fully aware of the impact on the other drivers,
> especially because I don't know which virt driver would support the
> minimum QoS feature now.
> (2) https://review.opendev.org/c/openstack/neutron/+/788893/
>
> In the review of (2), Sean mentioned implementing a logic to determine
> an appropriate resource provider (3) even if there is a mismatch about
> host name format, but I'm not sure how I would implement that, tbh.
>
> My current thought is to merge (1) as a quick solution first, and discuss
> whether we should merge (2), but I'd like to ask for some feedback about
> this plan (like we should NOT merge (2)).
>
> I'd appreciate your thoughts about this $topic.
>
> Thank you,
> Takashi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com  Fri Jun 11 08:34:33 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 11 Jun 2021 10:34:33 +0200
Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ?
In-Reply-To: 
References: 
Message-ID: <2993434.SUg3sCx5Oz@p1>

Hi,

On Friday, 11 June 2021 09:57:27 CEST, Rodolfo Alonso Hernandez wrote:

> Hello Takashi and Neutrinos:
>
> First of all, thank you for working on this.
>
> Currently users have the ability to override the host name using
> "resource_provider_hypervisors". That means this parameter is always
> configurable; IMO we are safe on this.
>
> The problem we have is how we should retrieve this host name if
> "resource_provider_hypervisors" is not provided. I think the solution
> could be a combination of:
>
> - A first patch providing the ability to select the hypervisor type. The
> default one could be "libvirt". Each driver can have a particular host
> name retrieval implementation. The default one will be the implemented
> right now: "socket.gethostname()"
> - https://review.opendev.org/c/openstack/neutron/+/788893, providing
> full compatibility for libvirt.
>
> Those are my two cents.

We can move on with the patch
https://review.opendev.org/c/openstack/neutron/+/763563 to provide the
new config option as it is now, and additionally implement
https://review.opendev.org/c/openstack/neutron/+/788893 so users who are
using libvirt will not need to change anything, but if someone is using
another hypervisor, this will allow adjustments. Wdyt?

> Regards.
>
> On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami wrote:
> > Hi All,
> >
> > I've been working on bug 1926693[1], and am lost about the reasonable
> > solutions we expect. Ideally I'd need to bring this topic in the team
> > meeting but because of the timezone gap and complicated background,
> > I'd like to gather some feedback in ml first.
> >
> > [1] https://bugs.launchpad.net/neutron/+bug/1926693
> >
> > TL;DR
> > Which one(or ones) would be reasonable solutions for this issue ?
> > (1) https://review.opendev.org/c/openstack/neutron/+/763563
> > (2) https://review.opendev.org/c/openstack/neutron/+/788893
> > (3) Implement something different
> >
> > The issue I reported in the bug is that there is an inconsistency between
> > nova and neutron about the way to determine a hypervisor name.
> > Currently neutron uses socket.gethostname() (which always returns > > shortname) > > to determine a hypervisor name to search the corresponding resource > > provider. > > On the other hand, nova uses libvirt's getHostname function (if libvirt > > driver is used) > > which returns a canonical name. Canonical name can be shortname or FQDN > > (*1) > > and if FQDN is used then neutron and nova never agree. > > > > (*1) > > IMO this is likely to happen in real deployments. For example, TripelO uses > > FQDN for canonical names. > > > > Neutron already provides the resource_provider_defauly_hypervisors option > > to override a hypervisor name used. However because this option accepts > > a map between interface and hypervisor, setting this parameter requires > > very redundant description especially when a compute node has multiple > > interfaces/bridges. The following example shows how redundant the current > > requirement is. > > ~~~ > > [OVS] > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ > > br-data3:1024,1024,br-data4,1024:1024 > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > > compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > > ~~~ > > > > I've submitted a change to propose a new single parameter to override > > the base hypervisor name but this is currently -2ed, mainly because > > I lacked analysis about the root cause of mismatch when I proposed this. > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > On the other hand, I submitted a different change to neutron which > > implements > > the logic to get a hypervisor name which is fully compatible with libvirt. > > While this would save users from even overriding hypervisor names, I'm > > aware > > that this might break the other virt driver which depends on a different > > logic > > to generate a hypervisor name. IMO the patch is still useful considering > > the libvirt driver would be the most popular option now, but I'm not fully > > aware of the impact on the other drivers, especially because I don't know > > which virt driver would support the minimum QoS feature now. > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > In the review of (2), Sean mentioned implementing a logic to determine > > an appropriate resource provider(3) even if there is a mismatch about > > host name format, but I'm not sure how I would implement that, tbh. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ralonsoh at redhat.com Fri Jun 11 08:46:44 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 11 Jun 2021 10:46:44 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: <2993434.SUg3sCx5Oz@p1> References: <2993434.SUg3sCx5Oz@p1> Message-ID: I agree with this idea but what https://review.opendev.org/c/openstack/neutron/+/763563 is proposing differs from what I'm saying: instead of providing the hostname (that is something we can do "resource_provider_hypervisors"), we should provide the hypervisor name (default: libvirt). 
On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski wrote: > Hi, > > Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez pisze: > > > Hello Takashi and Neutrinos: > > > > > > First of all, thank you for working on this. > > > > > > Currently users have the ability to override the host name using > > > "resource_provider_hypervisors". That means this parameter is always > > > configurable; IMO we are safe on this. > > > > > > The problem we have is how we should retrieve this host name if > > > "resource_provider_hypervisors" is not provided. I think the solution > could > > > be a combination of: > > > > > > - A first patch providing the ability to select the hypervisor type. > The > > > default one could be "libvirt". Each driver can have a particular > host name > > > retrieval implementation. The default one will be the implemented > right > > > now: "socket.gethostname()" > > > - https://review.opendev.org/c/openstack/neutron/+/788893, providing > > > full compatibility for libvirt. > > > > > > Those are my two cents. > > We can move on with the patch > https://review.opendev.org/c/openstack/neutron/+/763563 to provide new > config option as it's now and additionally implement > https://review.opendev.org/c/openstack/neutron/+/788893 so users who are > using libvirt will not need to change anything, but if someone is using > other hypervisor, this will allow adjustments. Wdyt? > > > > > > Regards. > > > > > > > > > > > > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami > > > > > > wrote: > > > > Hi All, > > > > > > > > > > > > I've been working on bug 1926693[1], and am lost about the reasonable > > > > solutions we expect. Ideally I'd need to bring this topic in the team > > > > meeting > > > > but because of the timezone gap and complicated background, I'd like to > > > > gather some feedback in ml first. > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > > > > > > > TL;DR > > > > > > > > Which one(or ones) would be reasonable solutions for this issue ? > > > > > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > > > > (3) Implement something different > > > > > > > > The issue I reported in the bug is that there is an inconsistency > between > > > > nova and neutron about the way to determine a hypervisor name. > > > > Currently neutron uses socket.gethostname() (which always returns > > > > shortname) > > > > to determine a hypervisor name to search the corresponding resource > > > > provider. > > > > On the other hand, nova uses libvirt's getHostname function (if libvirt > > > > driver is used) > > > > which returns a canonical name. Canonical name can be shortname or FQDN > > > > (*1) > > > > and if FQDN is used then neutron and nova never agree. > > > > > > > > (*1) > > > > IMO this is likely to happen in real deployments. For example, TripelO > uses > > > > FQDN for canonical names. > > > > > > > > Neutron already provides the resource_provider_defauly_hypervisors > option > > > > to override a hypervisor name used. However because this option accepts > > > > a map between interface and hypervisor, setting this parameter requires > > > > very redundant description especially when a compute node has multiple > > > > interfaces/bridges. The following example shows how redundant the > current > > > > requirement is. 
> > > > ~~~ > > > > [OVS] > > > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ > > > > br-data3:1024,1024,br-data4,1024:1024 > > > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > > > > compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > > > > ~~~ > > > > > > > > I've submitted a change to propose a new single parameter to override > > > > the base hypervisor name but this is currently -2ed, mainly because > > > > I lacked analysis about the root cause of mismatch when I proposed > this. > > > > > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > > > > > On the other hand, I submitted a different change to neutron which > > > > implements > > > > the logic to get a hypervisor name which is fully compatible with > libvirt. > > > > While this would save users from even overriding hypervisor names, I'm > > > > aware > > > > that this might break the other virt driver which depends on a > different > > > > logic > > > > to generate a hypervisor name. IMO the patch is still useful > considering > > > > the libvirt driver would be the most popular option now, but I'm not > fully > > > > aware of the impact on the other drivers, especially because I don't > know > > > > which virt driver would support the minimum QoS feature now. > > > > > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > > > > > In the review of (2), Sean mentioned implementing a logic to determine > > > > an appropriate resource provider(3) even if there is a mismatch about > > > > host name format, but I'm not sure how I would implement that, tbh. > > > > > > > > > > > > My current thought is to merge (1) as a quick solution first, and > discuss > > > > whether > > > > we should merge (2), but I'd like to ask for some feedback about this > plan > > > > (like we should NOT merge (2)). > > > > > > > > I'd appreciate your thoughts about this $topic. > > > > > > > > Thank you, > > > > Takashi > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekokeeffe85 at yahoo.ie Fri Jun 11 08:57:53 2021 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Fri, 11 Jun 2021 08:57:53 +0000 (UTC) Subject: Domain tab References: <1268317224.5416429.1623401873029.ref@mail.yahoo.com> Message-ID: <1268317224.5416429.1623401873029@mail.yahoo.com> Hi all, I have two domains in my setup (default & ldap) If I log in as default admin I cannot see a domain tab in the identity dropdown (all cli works fine). I'm sure I had it there before and think it could be a setting in /etc/opoenstack-dashboard/local_settings.py Any pointers as to what I might be missing? Thanks in advance. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Fri Jun 11 09:19:16 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Fri, 11 Jun 2021 17:19:16 +0800 Subject: Domain tab In-Reply-To: <1268317224.5416429.1623401873029@mail.yahoo.com> References: <1268317224.5416429.1623401873029.ref@mail.yahoo.com> <1268317224.5416429.1623401873029@mail.yahoo.com> Message-ID: Hi Derek, I think that you need to give admin privilege to view the domain tab. 
# see domain tab in admin user
openstack user list | grep admin
openstack role add --domain default --user [ID from previous command] admin

So in my case I have previously used:
openstack user list | grep admin
openstack role add --domain default --user beae2211cad94afc83173d730cce0c85 admin

kind regards,

On Fri, 11 Jun 2021 at 16:59, Derek O keeffe wrote:

> Hi all,
>
> I have two domains in my setup (default & ldap)
>
> If I log in as default admin I cannot see a domain tab in the identity
> dropdown (all cli works fine). I'm sure I had it there before and think it
> could be a setting in /etc/opoenstack-dashboard/local_settings.py
>
> Any pointers as to what I might be missing? Thanks in advance.
>
> Regards,
> Derek
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From derekokeeffe85 at yahoo.ie  Fri Jun 11 09:32:04 2021
From: derekokeeffe85 at yahoo.ie (Derek O keeffe)
Date: Fri, 11 Jun 2021 09:32:04 +0000 (UTC)
Subject: Domain tab
In-Reply-To: 
References: <1268317224.5416429.1623401873029.ref@mail.yahoo.com>
 <1268317224.5416429.1623401873029@mail.yahoo.com>
Message-ID: <28347774.8726009.1623403924216@mail.yahoo.com>

Hi Tony,

Thanks for that, worked first time!!

Do you know if it's possible to allow the default domain admin to edit
the ldap members that are pulled into a domain? I can't seem to see the
users in that domain to change their roles through horizon. All works
fine over cli. Thanks in advance.

Regards,
Derek

On Friday 11 June 2021, 10:24:37 IST, Tony Pearce wrote:

Hi Derek,

I think that you need to give admin privilege to view the domain tab.

# see domain tab in admin user
openstack user list | grep admin
openstack role add --domain default --user [ID from previous command] admin

So in my case I have previously used:
openstack user list | grep admin
openstack role add --domain default --user beae2211cad94afc83173d730cce0c85 admin

kind regards,

On Fri, 11 Jun 2021 at 16:59, Derek O keeffe wrote:

Hi all,

I have two domains in my setup (default & ldap)

If I log in as default admin I cannot see a domain tab in the identity
dropdown (all cli works fine). I'm sure I had it there before and think it
could be a setting in /etc/opoenstack-dashboard/local_settings.py

Any pointers as to what I might be missing? Thanks in advance.

Regards,
Derek
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bkslash at poczta.onet.pl  Fri Jun 11 09:46:40 2021
From: bkslash at poczta.onet.pl (at)
Date: Fri, 11 Jun 2021 11:46:40 +0200
Subject: [masakari] Compute service with name XXXXX not found.
Message-ID: <6241D8B0-5DF1-4A46-9089-F2A9A7C978E5@poczta.onet.pl>

Hi,
I have a problem with masakari. I can create a segment (from CLI and
Horizon), but can't create a host (with the same result from Horizon and
CLI):

openstack segment host create XXXXX COMPUTE SSH segment_id

returns

BadRequest: Compute service with name XXXXX could not be found.

XXXXX is the name which Horizon suggests, and it's the name of a compute
host. openstack compute service list returns a proper list with state
up/enabled on the compute hosts (zone nova).

Maybe I misunderstood some parameters of host create? As "type" I use
COMPUTE - what value should it be? The one from the "Binary" column of
openstack compute service list? And what is the "control_attributes"
field? The documentation lacks precise information about what value
should go there and what it is used for. I tried to find some info on
this error but I haven't found anything...

Thanks in advance for any help.
Best regards
Adam Tomas

From tonyppe at gmail.com  Fri Jun 11 09:50:51 2021
From: tonyppe at gmail.com (Tony Pearce)
Date: Fri, 11 Jun 2021 17:50:51 +0800
Subject: Domain tab
In-Reply-To: <28347774.8726009.1623403924216@mail.yahoo.com>
References: <1268317224.5416429.1623401873029.ref@mail.yahoo.com>
 <1268317224.5416429.1623401873029@mail.yahoo.com>
 <28347774.8726009.1623403924216@mail.yahoo.com>
Message-ID: 

Hi Derek,

I think I understand - yes, you can do that in horizon, but you first
need to "switch domain context" because the default admin user is in
domain "default". Once you can see the domains tab, you should see in
there a button for setting the ldap context, and then you can manage the
users, e.g. add them to project roles.

Kind regards,

Tony Pearce

On Fri, 11 Jun 2021 at 17:32, Derek O keeffe wrote:

> Hi Tony,
>
> Thanks for that, worked first time!!
>
> Do you know if it's possible to allow the default domain admin to edit
> the ldap members that are pulled into a domain? I can't seem to see the
> users in that domain to change their roles through horizon. All works
> fine over cli. Thanks in advance.
>
> Regards,
> Derek
>
> On Friday 11 June 2021, 10:24:37 IST, Tony Pearce wrote:
>
> Hi Derek,
>
> I think that you need to give admin privilege to view the domain tab.
>
> # see domain tab in admin user
> openstack user list | grep admin
> openstack role add --domain default --user [ID from previous command] admin
>
> So in my case I have previously used:
> openstack user list | grep admin
> openstack role add --domain default --user beae2211cad94afc83173d730cce0c85 admin
>
> kind regards,
>
> On Fri, 11 Jun 2021 at 16:59, Derek O keeffe wrote:
>
> Hi all,
>
> I have two domains in my setup (default & ldap)
>
> If I log in as default admin I cannot see a domain tab in the identity
> dropdown (all cli works fine). I'm sure I had it there before and think it
> could be a setting in /etc/opoenstack-dashboard/local_settings.py
>
> Any pointers as to what I might be missing? Thanks in advance.
>
> Regards,
> Derek

From tkajinam at redhat.com  Fri Jun 11 10:20:25 2021
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Fri, 11 Jun 2021 19:20:25 +0900
Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ?
In-Reply-To: 
References: <2993434.SUg3sCx5Oz@p1>
Message-ID: 

Hi Slawek and Rodolfo,

Thank you for your feedback.

On Fri, Jun 11, 2021 at 5:47 PM Rodolfo Alonso Hernandez
<ralonsoh at redhat.com> wrote:

> I agree with this idea but what
> https://review.opendev.org/c/openstack/neutron/+/763563 is proposing
> differs from what I'm saying: instead of providing the hostname (that is
> something we can do with "resource_provider_hypervisors"), we should
> provide the hypervisor name (default: libvirt).

The main problem is that the logic to determine the "hypervisor name" is
different in each virt driver. For example, the libvirt driver uses the
canonical name while the power driver uses [DEFAULT] host in nova.conf.
So if we fix compatibility with one virt driver then it would break
compatibility with the other drivers. Because neutron is not aware of the
virt driver used, it's impossible to avoid that inconsistency completely.

Thank you,
Takashi

> On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski wrote:
>> Hi,
>>
>> On Friday, 11 June 2021 09:57:27 CEST, Rodolfo Alonso Hernandez wrote:
>>
>> > Hello Takashi and Neutrinos:
>>
Best regards Adam Tomas From tonyppe at gmail.com Fri Jun 11 09:50:51 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Fri, 11 Jun 2021 17:50:51 +0800 Subject: Domain tab In-Reply-To: <28347774.8726009.1623403924216@mail.yahoo.com> References: <1268317224.5416429.1623401873029.ref@mail.yahoo.com> <1268317224.5416429.1623401873029@mail.yahoo.com> <28347774.8726009.1623403924216@mail.yahoo.com> Message-ID: Hi Derek, I think I understand - yes you can do that in horizon but you first need to "switch domain context" because default admin user is in domain "default". Now you can see the domains tab, you should see in there a button for setting the ldap context and then you can manage users like add to projects roles. Kind regards, Tony Pearce On Fri, 11 Jun 2021 at 17:32, Derek O keeffe wrote: > Hi Tony, > > Thanks for that, worked first time!! > > Do you know if it's possible to allow the default domain admin edit the > ldap members that are pulled into a domain? I can't seem to be able to see > the users in that domain to change their roles through horizon. All works > fine over cli. Thanks in advance. > > Regards, > Derek > > On Friday 11 June 2021, 10:24:37 IST, Tony Pearce > wrote: > > > Hi Derek, > > I think that you need to give admin privilege to view the domain tab. > > # see domain tab in admin user > openstack user list | grep admin > openstack role add --domain default --user [ID from previous command] admin > > So in my case I have previously used: > openstack user list | grep admin > openstack role add --domain default --user > beae2211cad94afc83173d730cce0c85 admin > > kind regards, > > > On Fri, 11 Jun 2021 at 16:59, Derek O keeffe > wrote: > > Hi all, > > I have two domains in my setup (default & ldap) > > If I log in as default admin I cannot see a domain tab in the identity > dropdown (all cli works fine). I'm sure I had it there before and think it > could be a setting in /etc/opoenstack-dashboard/local_settings.py > > Any pointers as to what I might be missing? Thanks in advance. > > Regards, > Derek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Fri Jun 11 10:20:25 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 11 Jun 2021 19:20:25 +0900 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: <2993434.SUg3sCx5Oz@p1> Message-ID: Hi Slawek and Radolfo, Thank you for your feedback. On Fri, Jun 11, 2021 at 5:47 PM Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > I agree with this idea but what > https://review.opendev.org/c/openstack/neutron/+/763563 is proposing > differs from what I'm saying: instead of providing the hostname (that is > something we can do "resource_provider_hypervisors"), we should provide the > hypervisor name (default: libvirt). > The main problem is that the logic to determine "hypervisor name" is different in each virt driver. For example libvirt driver uses canonical name while power driver uses [DEFAULT] host in nova.conf . So if we fix compatibility with one virt driver then it would break compatibility with the other driver. Because neutron is not aware of the virt driver used, it's impossible to avoid that inconsistency completely. Thank you, Takashi > > On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski > wrote: > >> Hi, >> >> Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez pisze: >> >> > Hello Takashi and Neutrinos: >> >> > >> >> > First of all, thank you for working on this. 
>> >> > >> >> > Currently users have the ability to override the host name using >> >> > "resource_provider_hypervisors". That means this parameter is always >> >> > configurable; IMO we are safe on this. >> >> > >> >> > The problem we have is how we should retrieve this host name if >> >> > "resource_provider_hypervisors" is not provided. I think the solution >> could >> >> > be a combination of: >> >> > >> >> > - A first patch providing the ability to select the hypervisor type. >> The >> >> > default one could be "libvirt". Each driver can have a particular >> host name >> >> > retrieval implementation. The default one will be the implemented >> right >> >> > now: "socket.gethostname()" >> >> > - https://review.opendev.org/c/openstack/neutron/+/788893, providing >> >> > full compatibility for libvirt. >> >> > >> >> > Those are my two cents. >> >> We can move on with the patch >> https://review.opendev.org/c/openstack/neutron/+/763563 to provide new >> config option as it's now and additionally implement >> https://review.opendev.org/c/openstack/neutron/+/788893 so users who are >> using libvirt will not need to change anything, but if someone is using >> other hypervisor, this will allow adjustments. Wdyt? >> >> > >> >> > Regards. >> >> > >> >> > >> >> > >> >> > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami >> >> > >> >> > wrote: >> >> > > Hi All, >> >> > > >> >> > > >> >> > > I've been working on bug 1926693[1], and am lost about the reasonable >> >> > > solutions we expect. Ideally I'd need to bring this topic in the team >> >> > > meeting >> >> > > but because of the timezone gap and complicated background, I'd like >> to >> >> > > gather some feedback in ml first. >> >> > > >> >> > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 >> >> > > >> >> > > TL;DR >> >> > > >> >> > > Which one(or ones) would be reasonable solutions for this issue ? >> >> > > >> >> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >> >> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 >> >> > > (3) Implement something different >> >> > > >> >> > > The issue I reported in the bug is that there is an inconsistency >> between >> >> > > nova and neutron about the way to determine a hypervisor name. >> >> > > Currently neutron uses socket.gethostname() (which always returns >> >> > > shortname) >> >> > > to determine a hypervisor name to search the corresponding resource >> >> > > provider. >> >> > > On the other hand, nova uses libvirt's getHostname function (if >> libvirt >> >> > > driver is used) >> >> > > which returns a canonical name. Canonical name can be shortname or >> FQDN >> >> > > (*1) >> >> > > and if FQDN is used then neutron and nova never agree. >> >> > > >> >> > > (*1) >> >> > > IMO this is likely to happen in real deployments. For example, >> TripelO uses >> >> > > FQDN for canonical names. >> >> > > >> >> > > Neutron already provides the resource_provider_defauly_hypervisors >> option >> >> > > to override a hypervisor name used. However because this option >> accepts >> >> > > a map between interface and hypervisor, setting this parameter >> requires >> >> > > very redundant description especially when a compute node has multiple >> >> > > interfaces/bridges. The following example shows how redundant the >> current >> >> > > requirement is. 
>> >> > > ~~~ >> >> > > [OVS] >> >> > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ >> >> > > br-data3:1024,1024,br-data4,1024:1024 >> >> > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ >> >> > > >> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain >> >> > > ~~~ >> >> > > >> >> > > I've submitted a change to propose a new single parameter to override >> >> > > the base hypervisor name but this is currently -2ed, mainly because >> >> > > I lacked analysis about the root cause of mismatch when I proposed >> this. >> >> > > >> >> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >> >> > > >> >> > > On the other hand, I submitted a different change to neutron which >> >> > > implements >> >> > > the logic to get a hypervisor name which is fully compatible with >> libvirt. >> >> > > While this would save users from even overriding hypervisor names, I'm >> >> > > aware >> >> > > that this might break the other virt driver which depends on a >> different >> >> > > logic >> >> > > to generate a hypervisor name. IMO the patch is still useful >> considering >> >> > > the libvirt driver would be the most popular option now, but I'm not >> fully >> >> > > aware of the impact on the other drivers, especially because I don't >> know >> >> > > which virt driver would support the minimum QoS feature now. >> >> > > >> >> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ >> >> > > >> >> > > In the review of (2), Sean mentioned implementing a logic to determine >> >> > > an appropriate resource provider(3) even if there is a mismatch about >> >> > > host name format, but I'm not sure how I would implement that, tbh. >> >> > > >> >> > > >> >> > > My current thought is to merge (1) as a quick solution first, and >> discuss >> >> > > whether >> >> > > we should merge (2), but I'd like to ask for some feedback about this >> plan >> >> > > (like we should NOT merge (2)). >> >> > > >> >> > > I'd appreciate your thoughts about this $topic. >> >> > > >> >> > > Thank you, >> >> > > Takashi >> >> >> -- >> >> Slawek Kaplonski >> >> Principal Software Engineer >> >> Red Hat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Jun 11 10:39:06 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 11 Jun 2021 12:39:06 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: <2993434.SUg3sCx5Oz@p1> Message-ID: Hello: I think I'm not explaining myself correctly. This is what I'm proposing: to provide a "hypervisor_type" variable in Neutron and implement, for each supported hypervisor, a hostname method retrieval. If we don't support the hypervisor used, the user can always provide the hostname via "resource_provider_hypervisors". Regards. On Fri, Jun 11, 2021 at 12:20 PM Takashi Kajinami wrote: > Hi Slawek and Radolfo, > > Thank you for your feedback. > > On Fri, Jun 11, 2021 at 5:47 PM Rodolfo Alonso Hernandez < > ralonsoh at redhat.com> wrote: > >> I agree with this idea but what >> https://review.opendev.org/c/openstack/neutron/+/763563 is proposing >> differs from what I'm saying: instead of providing the hostname (that is >> something we can do "resource_provider_hypervisors"), we should provide the >> hypervisor name (default: libvirt). >> > > The main problem is that the logic to determine "hypervisor name" is > different in each virt driver. 
> For example libvirt driver uses canonical name while power driver uses > [DEFAULT] host in nova.conf . > So if we fix compatibility with one virt driver then it would break > compatibility with the other driver. > Because neutron is not aware of the virt driver used, it's impossible to > avoid that inconsistency completely. > > > Thank you, > Takashi > > > > >> >> On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski >> wrote: >> >>> Hi, >>> >>> Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez >>> pisze: >>> >>> > Hello Takashi and Neutrinos: >>> >>> > >>> >>> > First of all, thank you for working on this. >>> >>> > >>> >>> > Currently users have the ability to override the host name using >>> >>> > "resource_provider_hypervisors". That means this parameter is always >>> >>> > configurable; IMO we are safe on this. >>> >>> > >>> >>> > The problem we have is how we should retrieve this host name if >>> >>> > "resource_provider_hypervisors" is not provided. I think the solution >>> could >>> >>> > be a combination of: >>> >>> > >>> >>> > - A first patch providing the ability to select the hypervisor >>> type. The >>> >>> > default one could be "libvirt". Each driver can have a particular >>> host name >>> >>> > retrieval implementation. The default one will be the implemented >>> right >>> >>> > now: "socket.gethostname()" >>> >>> > - https://review.opendev.org/c/openstack/neutron/+/788893, >>> providing >>> >>> > full compatibility for libvirt. >>> >>> > >>> >>> > Those are my two cents. >>> >>> We can move on with the patch >>> https://review.opendev.org/c/openstack/neutron/+/763563 to provide new >>> config option as it's now and additionally implement >>> https://review.opendev.org/c/openstack/neutron/+/788893 so users who >>> are using libvirt will not need to change anything, but if someone is using >>> other hypervisor, this will allow adjustments. Wdyt? >>> >>> > >>> >>> > Regards. >>> >>> > >>> >>> > >>> >>> > >>> >>> > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami >>> >>> > >>> >>> > wrote: >>> >>> > > Hi All, >>> >>> > > >>> >>> > > >>> >>> > > I've been working on bug 1926693[1], and am lost about the reasonable >>> >>> > > solutions we expect. Ideally I'd need to bring this topic in the team >>> >>> > > meeting >>> >>> > > but because of the timezone gap and complicated background, I'd like >>> to >>> >>> > > gather some feedback in ml first. >>> >>> > > >>> >>> > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 >>> >>> > > >>> >>> > > TL;DR >>> >>> > > >>> >>> > > Which one(or ones) would be reasonable solutions for this issue ? >>> >>> > > >>> >>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>> >>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 >>> >>> > > (3) Implement something different >>> >>> > > >>> >>> > > The issue I reported in the bug is that there is an inconsistency >>> between >>> >>> > > nova and neutron about the way to determine a hypervisor name. >>> >>> > > Currently neutron uses socket.gethostname() (which always returns >>> >>> > > shortname) >>> >>> > > to determine a hypervisor name to search the corresponding resource >>> >>> > > provider. >>> >>> > > On the other hand, nova uses libvirt's getHostname function (if >>> libvirt >>> >>> > > driver is used) >>> >>> > > which returns a canonical name. Canonical name can be shortname or >>> FQDN >>> >>> > > (*1) >>> >>> > > and if FQDN is used then neutron and nova never agree. 
>>> >>> > > >>> >>> > > (*1) >>> >>> > > IMO this is likely to happen in real deployments. For example, >>> TripelO uses >>> >>> > > FQDN for canonical names. >>> >>> > > >>> >>> > > Neutron already provides the resource_provider_defauly_hypervisors >>> option >>> >>> > > to override a hypervisor name used. However because this option >>> accepts >>> >>> > > a map between interface and hypervisor, setting this parameter >>> requires >>> >>> > > very redundant description especially when a compute node has >>> multiple >>> >>> > > interfaces/bridges. The following example shows how redundant the >>> current >>> >>> > > requirement is. >>> >>> > > ~~~ >>> >>> > > [OVS] >>> >>> > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ >>> >>> > > br-data3:1024,1024,br-data4,1024:1024 >>> >>> > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ >>> >>> > > >>> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain >>> >>> > > ~~~ >>> >>> > > >>> >>> > > I've submitted a change to propose a new single parameter to override >>> >>> > > the base hypervisor name but this is currently -2ed, mainly because >>> >>> > > I lacked analysis about the root cause of mismatch when I proposed >>> this. >>> >>> > > >>> >>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>> >>> > > >>> >>> > > On the other hand, I submitted a different change to neutron which >>> >>> > > implements >>> >>> > > the logic to get a hypervisor name which is fully compatible with >>> libvirt. >>> >>> > > While this would save users from even overriding hypervisor names, >>> I'm >>> >>> > > aware >>> >>> > > that this might break the other virt driver which depends on a >>> different >>> >>> > > logic >>> >>> > > to generate a hypervisor name. IMO the patch is still useful >>> considering >>> >>> > > the libvirt driver would be the most popular option now, but I'm not >>> fully >>> >>> > > aware of the impact on the other drivers, especially because I don't >>> know >>> >>> > > which virt driver would support the minimum QoS feature now. >>> >>> > > >>> >>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ >>> >>> > > >>> >>> > > In the review of (2), Sean mentioned implementing a logic to >>> determine >>> >>> > > an appropriate resource provider(3) even if there is a mismatch about >>> >>> > > host name format, but I'm not sure how I would implement that, tbh. >>> >>> > > >>> >>> > > >>> >>> > > My current thought is to merge (1) as a quick solution first, and >>> discuss >>> >>> > > whether >>> >>> > > we should merge (2), but I'd like to ask for some feedback about >>> this plan >>> >>> > > (like we should NOT merge (2)). >>> >>> > > >>> >>> > > I'd appreciate your thoughts about this $topic. >>> >>> > > >>> >>> > > Thank you, >>> >>> > > Takashi >>> >>> >>> -- >>> >>> Slawek Kaplonski >>> >>> Principal Software Engineer >>> >>> Red Hat >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Fri Jun 11 11:14:03 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 11 Jun 2021 20:14:03 +0900 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: <2993434.SUg3sCx5Oz@p1> Message-ID: Hi Radolfo, Thank you for your clarification and sorry I misread what you wrote. 
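For context, as I understand it the proposal amounts to something like the following (an illustrative sketch only, not code from either patch; the hypervisor_type option is the variable Rodolfo suggested):

~~~
import socket

# Hypothetical neutron-side helper keyed on a new hypervisor_type option.
def get_hypervisor_hostname(hypervisor_type='libvirt'):
    if hypervisor_type == 'libvirt':
        # mimic libvirt's canonical name, which may be an FQDN,
        # e.g. "compute0.mydomain" on TripleO deployments
        return socket.getfqdn()
    # today's behaviour: usually the short name, e.g. "compute0"
    return socket.gethostname()
~~~
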
My concern with that approach is that adding the hypervisor_type parameter would mean neutron will implement a logic for the other virt drivers, which is currently maintained in nova or hypervisor like libvirt in the future and it would expand the scope of neutron too much. IIUC current Neutron doesn't care about virt drivers used, and I agree with Slawek that it's better to keep that current design here. Thank you, Takashi On Fri, Jun 11, 2021 at 7:39 PM Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > Hello: > > I think I'm not explaining myself correctly. This is what I'm proposing: > to provide a "hypervisor_type" variable in Neutron and implement, for each > supported hypervisor, a hostname method retrieval. > > If we don't support the hypervisor used, the user can always provide the > hostname via "resource_provider_hypervisors". > > Regards. > > On Fri, Jun 11, 2021 at 12:20 PM Takashi Kajinami > wrote: > >> Hi Slawek and Radolfo, >> >> Thank you for your feedback. >> >> On Fri, Jun 11, 2021 at 5:47 PM Rodolfo Alonso Hernandez < >> ralonsoh at redhat.com> wrote: >> >>> I agree with this idea but what >>> https://review.opendev.org/c/openstack/neutron/+/763563 is proposing >>> differs from what I'm saying: instead of providing the hostname (that is >>> something we can do "resource_provider_hypervisors"), we should provide the >>> hypervisor name (default: libvirt). >>> >> >> The main problem is that the logic to determine "hypervisor name" is >> different in each virt driver. >> For example libvirt driver uses canonical name while power driver uses >> [DEFAULT] host in nova.conf . >> So if we fix compatibility with one virt driver then it would break >> compatibility with the other driver. >> Because neutron is not aware of the virt driver used, it's impossible to >> avoid that inconsistency completely. >> >> >> Thank you, >> Takashi >> >> >> >> >>> >>> On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski >>> wrote: >>> >>>> Hi, >>>> >>>> Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez >>>> pisze: >>>> >>>> > Hello Takashi and Neutrinos: >>>> >>>> > >>>> >>>> > First of all, thank you for working on this. >>>> >>>> > >>>> >>>> > Currently users have the ability to override the host name using >>>> >>>> > "resource_provider_hypervisors". That means this parameter is always >>>> >>>> > configurable; IMO we are safe on this. >>>> >>>> > >>>> >>>> > The problem we have is how we should retrieve this host name if >>>> >>>> > "resource_provider_hypervisors" is not provided. I think the solution >>>> could >>>> >>>> > be a combination of: >>>> >>>> > >>>> >>>> > - A first patch providing the ability to select the hypervisor >>>> type. The >>>> >>>> > default one could be "libvirt". Each driver can have a particular >>>> host name >>>> >>>> > retrieval implementation. The default one will be the implemented >>>> right >>>> >>>> > now: "socket.gethostname()" >>>> >>>> > - https://review.opendev.org/c/openstack/neutron/+/788893, >>>> providing >>>> >>>> > full compatibility for libvirt. >>>> >>>> > >>>> >>>> > Those are my two cents. >>>> >>>> We can move on with the patch >>>> https://review.opendev.org/c/openstack/neutron/+/763563 to provide new >>>> config option as it's now and additionally implement >>>> https://review.opendev.org/c/openstack/neutron/+/788893 so users who >>>> are using libvirt will not need to change anything, but if someone is using >>>> other hypervisor, this will allow adjustments. Wdyt? >>>> >>>> > >>>> >>>> > Regards. 
>>>> >>>> > >>>> >>>> > >>>> >>>> > >>>> >>>> > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami >>> > >>>> >>>> > >>>> >>>> > wrote: >>>> >>>> > > Hi All, >>>> >>>> > > >>>> >>>> > > >>>> >>>> > > I've been working on bug 1926693[1], and am lost about the >>>> reasonable >>>> >>>> > > solutions we expect. Ideally I'd need to bring this topic in the >>>> team >>>> >>>> > > meeting >>>> >>>> > > but because of the timezone gap and complicated background, I'd >>>> like to >>>> >>>> > > gather some feedback in ml first. >>>> >>>> > > >>>> >>>> > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 >>>> >>>> > > >>>> >>>> > > TL;DR >>>> >>>> > > >>>> >>>> > > Which one(or ones) would be reasonable solutions for this issue ? >>>> >>>> > > >>>> >>>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>>> >>>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 >>>> >>>> > > (3) Implement something different >>>> >>>> > > >>>> >>>> > > The issue I reported in the bug is that there is an inconsistency >>>> between >>>> >>>> > > nova and neutron about the way to determine a hypervisor name. >>>> >>>> > > Currently neutron uses socket.gethostname() (which always returns >>>> >>>> > > shortname) >>>> >>>> > > to determine a hypervisor name to search the corresponding resource >>>> >>>> > > provider. >>>> >>>> > > On the other hand, nova uses libvirt's getHostname function (if >>>> libvirt >>>> >>>> > > driver is used) >>>> >>>> > > which returns a canonical name. Canonical name can be shortname or >>>> FQDN >>>> >>>> > > (*1) >>>> >>>> > > and if FQDN is used then neutron and nova never agree. >>>> >>>> > > >>>> >>>> > > (*1) >>>> >>>> > > IMO this is likely to happen in real deployments. For example, >>>> TripelO uses >>>> >>>> > > FQDN for canonical names. >>>> >>>> > > >>>> >>>> > > Neutron already provides the resource_provider_defauly_hypervisors >>>> option >>>> >>>> > > to override a hypervisor name used. However because this option >>>> accepts >>>> >>>> > > a map between interface and hypervisor, setting this parameter >>>> requires >>>> >>>> > > very redundant description especially when a compute node has >>>> multiple >>>> >>>> > > interfaces/bridges. The following example shows how redundant the >>>> current >>>> >>>> > > requirement is. >>>> >>>> > > ~~~ >>>> >>>> > > [OVS] >>>> >>>> > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ >>>> >>>> > > br-data3:1024,1024,br-data4,1024:1024 >>>> >>>> > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ >>>> >>>> > > >>>> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain >>>> >>>> > > ~~~ >>>> >>>> > > >>>> >>>> > > I've submitted a change to propose a new single parameter to >>>> override >>>> >>>> > > the base hypervisor name but this is currently -2ed, mainly because >>>> >>>> > > I lacked analysis about the root cause of mismatch when I proposed >>>> this. >>>> >>>> > > >>>> >>>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>>> >>>> > > >>>> >>>> > > On the other hand, I submitted a different change to neutron which >>>> >>>> > > implements >>>> >>>> > > the logic to get a hypervisor name which is fully compatible with >>>> libvirt. >>>> >>>> > > While this would save users from even overriding hypervisor names, >>>> I'm >>>> >>>> > > aware >>>> >>>> > > that this might break the other virt driver which depends on a >>>> different >>>> >>>> > > logic >>>> >>>> > > to generate a hypervisor name. 
IMO the patch is still useful >>>> considering >>>> >>>> > > the libvirt driver would be the most popular option now, but I'm >>>> not fully >>>> >>>> > > aware of the impact on the other drivers, especially because I >>>> don't know >>>> >>>> > > which virt driver would support the minimum QoS feature now. >>>> >>>> > > >>>> >>>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ >>>> >>>> > > >>>> >>>> > > In the review of (2), Sean mentioned implementing a logic to >>>> determine >>>> >>>> > > an appropriate resource provider(3) even if there is a mismatch >>>> about >>>> >>>> > > host name format, but I'm not sure how I would implement that, tbh. >>>> >>>> > > >>>> >>>> > > >>>> >>>> > > My current thought is to merge (1) as a quick solution first, and >>>> discuss >>>> >>>> > > whether >>>> >>>> > > we should merge (2), but I'd like to ask for some feedback about >>>> this plan >>>> >>>> > > (like we should NOT merge (2)). >>>> >>>> > > >>>> >>>> > > I'd appreciate your thoughts about this $topic. >>>> >>>> > > >>>> >>>> > > Thank you, >>>> >>>> > > Takashi >>>> >>>> >>>> -- >>>> >>>> Slawek Kaplonski >>>> >>>> Principal Software Engineer >>>> >>>> Red Hat >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Fri Jun 11 11:19:34 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Fri, 11 Jun 2021 19:19:34 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host Message-ID: I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is not found during the host configure part. PLAY [Verify that the Kayobe Ansible user account is accessible] TASK [Verify that a command can be executed] fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} Python3 is installed on the host. When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: $ grep -rni -e "platform-python" venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc): os_distribution: ubuntu os_release: focal Regards, Tony Pearce -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Jun 11 11:26:13 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 11 Jun 2021 13:26:13 +0200 Subject: [masakari] Compute service with name XXXXX not found. In-Reply-To: <6241D8B0-5DF1-4A46-9089-F2A9A7C978E5@poczta.onet.pl> References: <6241D8B0-5DF1-4A46-9089-F2A9A7C978E5@poczta.onet.pl> Message-ID: On Fri, Jun 11, 2021 at 11:49 AM at wrote: > > Hi, Hello, > I have some problem with masakari. I can create segment (from CLI and Horizon), but can't create host (the same result from Horizon and CLI). > > openstack segment host create XXXXX COMPUTE SSH segment_id > > returns BadRequest: Compute service with name XXXXX could not be found. 
> XXXXX is the name which Horizon suggest, and it's a name of compute host. > > openstack compute service list > returns proper list with state up/enabled on compute hosts (zone nova) > Maybe I misunderstood some parameters of host create? As "type" I use COMPUTE, what value should it be? From "Binary" column of openstack compute service list? What is "control_attributes" field, because documentation lacks preceise information what value should be there and what is it use for. Tried to found some info on this error but I haven't found anything... The name should be as it is listed by nova. Masakari is querying nova for that compute host. The exact query can be run using: openstack compute service list --service nova-compute --host $HOSTNAME where $HOSTNAME is the desired hostname. The type should be "COMPUTE" and folks often use "SSH" for control_attributes (but it has no meaning). -yoctozepto From owalsh at redhat.com Fri Jun 11 11:47:52 2021 From: owalsh at redhat.com (Oliver Walsh) Date: Fri, 11 Jun 2021 12:47:52 +0100 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: Hi Takashi, On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami wrote: > Hi All, > > > I've been working on bug 1926693[1], and am lost about the reasonable > solutions we expect. Ideally I'd need to bring this topic in the team > meeting > but because of the timezone gap and complicated background, I'd like to > gather some feedback in ml first. > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > TL;DR > Which one(or ones) would be reasonable solutions for this issue ? > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > (3) Implement something different > > The issue I reported in the bug is that there is an inconsistency between > nova and neutron about the way to determine a hypervisor name. > Currently neutron uses socket.gethostname() (which always returns > shortname) > socket.gethostname() can return fqdn or shortname - https://docs.python.org/3/library/socket.html#socket.gethostname. I've seen cases where it switched from short to fqdn but I'm not sure of the root cause - DHCP lease setting a hostname/domainname perhaps. Thanks, Ollie to determine a hypervisor name to search the corresponding resource > provider. > On the other hand, nova uses libvirt's getHostname function (if libvirt > driver is used) > which returns a canonical name. Canonical name can be shortname or FQDN > (*1) > and if FQDN is used then neutron and nova never agree. > > (*1) > IMO this is likely to happen in real deployments. For example, TripelO uses > FQDN for canonical names. > > Neutron already provides the resource_provider_defauly_hypervisors option > to override a hypervisor name used. However because this option accepts > a map between interface and hypervisor, setting this parameter requires > very redundant description especially when a compute node has multiple > interfaces/bridges. The following example shows how redundant the current > requirement is. 
> ~~~ > [OVS] > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ > br-data3:1024,1024,br-data4,1024:1024 > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > ~~~ > > I've submitted a change to propose a new single parameter to override > the base hypervisor name but this is currently -2ed, mainly because > I lacked analysis about the root cause of mismatch when I proposed this. > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > On the other hand, I submitted a different change to neutron which > implements > the logic to get a hypervisor name which is fully compatible with libvirt. > While this would save users from even overriding hypervisor names, I'm > aware > that this might break the other virt driver which depends on a different > logic > to generate a hypervisor name. IMO the patch is still useful considering > the libvirt driver would be the most popular option now, but I'm not fully > aware of the impact on the other drivers, especially because I don't know > which virt driver would support the minimum QoS feature now. > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > In the review of (2), Sean mentioned implementing a logic to determine > an appropriate resource provider(3) even if there is a mismatch about > host name format, but I'm not sure how I would implement that, tbh. > > > My current thought is to merge (1) as a quick solution first, and discuss > whether > we should merge (2), but I'd like to ask for some feedback about this plan > (like we should NOT merge (2)). > > I'd appreciate your thoughts about this $topic. > > Thank you, > Takashi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Fri Jun 11 12:47:03 2021 From: bkslash at poczta.onet.pl (at) Date: Fri, 11 Jun 2021 14:47:03 +0200 Subject: [masakari] Compute service with name XXXXX not found. In-Reply-To: References: Message-ID: <15C953C2-6BC2-4FC8-A6BD-9AB177911ADC@poczta.onet.pl> Hi, thx for the answer. > openstack compute service list --service nova-compute --host $HOSTNAME so in openstack segment host create I should use name which is displayed in "Host" column, right? So that's what I do :( openstack compute service list --service nova-compute ID Binary Host Zone Status State 20 nova-compute XXXXX nova enabled up openstack segment host create XXXXX COMPUTE SSH 00dd5bxxxxxx and still "Compute service with name XXXXX could not be found"..... How masakari discovers hosts? Best regards Adam Tomas > Wiadomość napisana przez Radosław Piliszek w dniu 11.06.2021, o godz. 13:26: > > On Fri, Jun 11, 2021 at 11:49 AM at wrote: >> >> Hi, > > Hello, > >> I have some problem with masakari. I can create segment (from CLI and Horizon), but can't create host (the same result from Horizon and CLI). >> >> openstack segment host create XXXXX COMPUTE SSH segment_id >> >> returns BadRequest: Compute service with name XXXXX could not be found. >> XXXXX is the name which Horizon suggest, and it's a name of compute host. >> >> openstack compute service list >> returns proper list with state up/enabled on compute hosts (zone nova) >> Maybe I misunderstood some parameters of host create? As "type" I use COMPUTE, what value should it be? From "Binary" column of openstack compute service list? What is "control_attributes" field, because documentation lacks preceise information what value should be there and what is it use for. 
Tried to found some info on this error but I haven't found anything... > > The name should be as it is listed by nova. Masakari is querying nova > for that compute host. The exact query can be run using: > openstack compute service list --service nova-compute --host $HOSTNAME > where $HOSTNAME is the desired hostname. > The type should be "COMPUTE" and folks often use "SSH" for > control_attributes (but it has no meaning). > > -yoctozepto From radoslaw.piliszek at gmail.com Fri Jun 11 12:56:06 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 11 Jun 2021 14:56:06 +0200 Subject: [masakari] Compute service with name XXXXX not found. In-Reply-To: <15C953C2-6BC2-4FC8-A6BD-9AB177911ADC@poczta.onet.pl> References: <15C953C2-6BC2-4FC8-A6BD-9AB177911ADC@poczta.onet.pl> Message-ID: On Fri, Jun 11, 2021 at 2:47 PM at wrote: > > Hi, thx for the answer. > > openstack compute service list --service nova-compute --host $HOSTNAME > so in > > openstack segment host create > > I should use name which is displayed in "Host" column, right? So that's what I do :( Yes. > openstack compute service list --service nova-compute > > ID Binary Host Zone Status State > 20 nova-compute XXXXX nova enabled up > > openstack segment host create XXXXX COMPUTE SSH 00dd5bxxxxxx > > and still "Compute service with name XXXXX could not be found"..... > > How masakari discovers hosts? I wrote this already: openstack compute service list --service nova-compute --host $HOSTNAME did you try including the same hostname in this command? If it works and Masakari does not, I would make sure you set up Masakari to speak to the right Nova API. Finally, if all else fails, please paste (e.g. https://paste.ubuntu.com/ ) masakari api logs for those rejected host creations. Though do that with debug=True in the config [DEFAULT] section. -yoctozepto From pierre at stackhpc.com Fri Jun 11 13:04:26 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 11 Jun 2021 15:04:26 +0200 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Tony, Kayobe doesn't use platform-python anymore, on both stable/wallaby and stable/victoria: https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 Can you double-check what version you are using, and share how you installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. Best wishes, Pierre On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: > > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is not found during the host configure part. > > PLAY [Verify that the Kayobe Ansible user account is accessible] > TASK [Verify that a command can be executed] > > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} > > Python3 is installed on the host. 
When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: > > $ grep -rni -e "platform-python" > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python > > I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. > > Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc): > > os_distribution: ubuntu > os_release: focal > > Regards, > > Tony Pearce > From skaplons at redhat.com Fri Jun 11 13:13:52 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 11 Jun 2021 15:13:52 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: <1856963.qdIGSAVlHa@p1> Hi, Dnia piątek, 11 czerwca 2021 13:14:03 CEST Takashi Kajinami pisze: > Hi Radolfo, > > Thank you for your clarification and sorry I misread what you wrote. > > My concern with that approach is that adding the hypervisor_type parameter > would mean > neutron will implement a logic for the other virt drivers, which is > currently maintained in > nova or hypervisor like libvirt in the future and it would expand the scope > of neutron too much. > > IIUC current Neutron doesn't care about virt drivers used, and I agree with > Slawek that > it's better to keep that current design here. > > Thank you, > Takashi > > > On Fri, Jun 11, 2021 at 7:39 PM Rodolfo Alonso Hernandez < > > ralonsoh at redhat.com> wrote: > > Hello: > > > > I think I'm not explaining myself correctly. This is what I'm proposing: > > to provide a "hypervisor_type" variable in Neutron and implement, for each > > supported hypervisor, a hostname method retrieval. > > > > If we don't support the hypervisor used, the user can always provide the > > hostname via "resource_provider_hypervisors". I'm not sure if adding "hypervisor drivers" to neutron is good idea. Solution proposed by Takashi is simpler IMHO. If user just want's to override hostname for all resources, this new option can be used. But in some case, where it's needed to do it "per bridge", that's also possible. I know it's maybe not perfect but IMO still better than nothing. > > > > Regards. > > > > On Fri, Jun 11, 2021 at 12:20 PM Takashi Kajinami > > > > wrote: > >> Hi Slawek and Radolfo, > >> > >> Thank you for your feedback. > >> > >> On Fri, Jun 11, 2021 at 5:47 PM Rodolfo Alonso Hernandez < > >> > >> ralonsoh at redhat.com> wrote: > >>> I agree with this idea but what > >>> https://review.opendev.org/c/openstack/neutron/+/763563 is proposing > >>> differs from what I'm saying: instead of providing the hostname (that is > >>> something we can do "resource_provider_hypervisors"), we should provide the > >>> hypervisor name (default: libvirt). > >> > >> The main problem is that the logic to determine "hypervisor name" is > >> different in each virt driver. > >> For example libvirt driver uses canonical name while power driver uses > >> [DEFAULT] host in nova.conf . > >> So if we fix compatibility with one virt driver then it would break > >> compatibility with the other driver. > >> Because neutron is not aware of the virt driver used, it's impossible to > >> avoid that inconsistency completely. 
> >> > >> > >> Thank you, > >> Takashi > >> > >>> On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski > >>> > >>> wrote: > >>>> Hi, > >>>> > >>>> Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez > >>>> > >>>> pisze: > >>>> > Hello Takashi and Neutrinos: > >>>> > > >>>> > > >>>> > > >>>> > First of all, thank you for working on this. > >>>> > > >>>> > > >>>> > > >>>> > Currently users have the ability to override the host name using > >>>> > > >>>> > "resource_provider_hypervisors". That means this parameter is always > >>>> > > >>>> > configurable; IMO we are safe on this. > >>>> > > >>>> > > >>>> > > >>>> > The problem we have is how we should retrieve this host name if > >>>> > > >>>> > "resource_provider_hypervisors" is not provided. I think the solution > >>>> > >>>> could > >>>> > >>>> > be a combination of: > >>>> > - A first patch providing the ability to select the hypervisor > >>>> > >>>> type. The > >>>> > >>>> > default one could be "libvirt". Each driver can have a particular > >>>> > >>>> host name > >>>> > >>>> > retrieval implementation. The default one will be the implemented > >>>> > >>>> right > >>>> > >>>> > now: "socket.gethostname()" > >>>> > > >>>> > - https://review.opendev.org/c/openstack/neutron/+/788893, > >>>> > >>>> providing > >>>> > >>>> > full compatibility for libvirt. > >>>> > > >>>> > Those are my two cents. > >>>> > >>>> We can move on with the patch > >>>> https://review.opendev.org/c/openstack/neutron/+/763563 to provide new > >>>> config option as it's now and additionally implement > >>>> https://review.opendev.org/c/openstack/neutron/+/788893 so users who > >>>> are using libvirt will not need to change anything, but if someone is using > >>>> other hypervisor, this will allow adjustments. Wdyt? > >>>> > >>>> > Regards. > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami >>>> > > >>>> > wrote: > >>>> > > Hi All, > >>>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> > > I've been working on bug 1926693[1], and am lost about the > >>>> > >>>> reasonable > >>>> > >>>> > > solutions we expect. Ideally I'd need to bring this topic in the > >>>> > >>>> team > >>>> > >>>> > > meeting > >>>> > > > >>>> > > but because of the timezone gap and complicated background, I'd > >>>> > >>>> like to > >>>> > >>>> > > gather some feedback in ml first. > >>>> > > > >>>> > > > >>>> > > > >>>> > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > >>>> > > > >>>> > > > >>>> > > > >>>> > > TL;DR > >>>> > > > >>>> > > Which one(or ones) would be reasonable solutions for this issue ? > >>>> > > > >>>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > >>>> > > > >>>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > >>>> > > > >>>> > > (3) Implement something different > >>>> > > > >>>> > > The issue I reported in the bug is that there is an inconsistency > >>>> > >>>> between > >>>> > >>>> > > nova and neutron about the way to determine a hypervisor name. > >>>> > > > >>>> > > Currently neutron uses socket.gethostname() (which always returns > >>>> > > > >>>> > > shortname) > >>>> > > > >>>> > > to determine a hypervisor name to search the corresponding resource > >>>> > > > >>>> > > provider. > >>>> > > > >>>> > > On the other hand, nova uses libvirt's getHostname function (if > >>>> > >>>> libvirt > >>>> > >>>> > > driver is used) > >>>> > > > >>>> > > which returns a canonical name. 
Canonical name can be shortname or > >>>> > >>>> FQDN > >>>> > >>>> > > (*1) > >>>> > > > >>>> > > and if FQDN is used then neutron and nova never agree. > >>>> > > > >>>> > > > >>>> > > > >>>> > > (*1) > >>>> > > > >>>> > > IMO this is likely to happen in real deployments. For example, > >>>> > >>>> TripelO uses > >>>> > >>>> > > FQDN for canonical names. > >>>> > > > >>>> > > > >>>> > > > >>>> > > Neutron already provides the resource_provider_defauly_hypervisors > >>>> > >>>> option > >>>> > >>>> > > to override a hypervisor name used. However because this option > >>>> > >>>> accepts > >>>> > >>>> > > a map between interface and hypervisor, setting this parameter > >>>> > >>>> requires > >>>> > >>>> > > very redundant description especially when a compute node has > >>>> > >>>> multiple > >>>> > >>>> > > interfaces/bridges. The following example shows how redundant the > >>>> > >>>> current > >>>> > >>>> > > requirement is. > >>>> > > > >>>> > > ~~~ > >>>> > > > >>>> > > [OVS] > >>>> > > > >>>> > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024, \ > >>>> > > > >>>> > > br-data3:1024,1024,br-data4,1024:1024 > >>>> > > > >>>> > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > >>>> > >>>> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > >>>> > >>>> > > ~~~ > >>>> > > > >>>> > > > >>>> > > > >>>> > > I've submitted a change to propose a new single parameter to > >>>> > >>>> override > >>>> > >>>> > > the base hypervisor name but this is currently -2ed, mainly because > >>>> > > > >>>> > > I lacked analysis about the root cause of mismatch when I proposed > >>>> > >>>> this. > >>>> > >>>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > >>>> > > > >>>> > > On the other hand, I submitted a different change to neutron which > >>>> > > > >>>> > > implements > >>>> > > > >>>> > > the logic to get a hypervisor name which is fully compatible with > >>>> > >>>> libvirt. > >>>> > >>>> > > While this would save users from even overriding hypervisor names, > >>>> > >>>> I'm > >>>> > >>>> > > aware > >>>> > > > >>>> > > that this might break the other virt driver which depends on a > >>>> > >>>> different > >>>> > >>>> > > logic > >>>> > > > >>>> > > to generate a hypervisor name. IMO the patch is still useful > >>>> > >>>> considering > >>>> > >>>> > > the libvirt driver would be the most popular option now, but I'm > >>>> > >>>> not fully > >>>> > >>>> > > aware of the impact on the other drivers, especially because I > >>>> > >>>> don't know > >>>> > >>>> > > which virt driver would support the minimum QoS feature now. > >>>> > > > >>>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > >>>> > > > >>>> > > In the review of (2), Sean mentioned implementing a logic to > >>>> > >>>> determine > >>>> > >>>> > > an appropriate resource provider(3) even if there is a mismatch > >>>> > >>>> about > >>>> > >>>> > > host name format, but I'm not sure how I would implement that, tbh. > >>>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> > > My current thought is to merge (1) as a quick solution first, and > >>>> > >>>> discuss > >>>> > >>>> > > whether > >>>> > > > >>>> > > we should merge (2), but I'd like to ask for some feedback about > >>>> > >>>> this plan > >>>> > >>>> > > (like we should NOT merge (2)). > >>>> > > > >>>> > > > >>>> > > > >>>> > > I'd appreciate your thoughts about this $topic. 
> >>>> > > > >>>> > > > >>>> > > > >>>> > > Thank you, > >>>> > > > >>>> > > Takashi > >>>> > >>>> -- > >>>> > >>>> Slawek Kaplonski > >>>> > >>>> Principal Software Engineer > >>>> > >>>> Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From balazs.gibizer at est.tech Fri Jun 11 14:02:26 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 11 Jun 2021 16:02:26 +0200 Subject: [nova][gate] openstack-tox-pep8 job broken In-Reply-To: References: Message-ID: <20JJUQ.TK9GSFVBOA1F1@est.tech> Hi, We merged https://review.opendev.org/c/openstack/nova/+/795744 instead to disable the failing job as the requirement patch bounced multiple times from the gate. The gate is unblocked now, but please be aware that we are tracking multiple nova CI instabilities that still causing the need of excessive rechecks. cheers, gibi On Wed, Jun 9, 2021 at 20:27, melanie witt wrote: > Hi all, > > The openstack-tox-pep8 job is currently failing with the following > error: > >> nova/crypto.py:39:1: error: Library stubs not installed for >> "paramiko" (or incompatible with Python 3.8) >> nova/crypto.py:39:1: note: Hint: "python3 -m pip install >> types-paramiko" >> nova/crypto.py:39:1: note: (or run "mypy --install-types" to install >> all missing stub packages) >> nova/crypto.py:39:1: note: See >> https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports >> Found 1 error in 1 file (checked 23 source files) >> ERROR: InvocationError for command /usr/bin/bash tools/mypywrap.sh >> (exited with code 1) > > Please hold your rechecks until the fix merges: > > https://review.opendev.org/c/openstack/nova/+/795533 > > Cheers, > -melanie > From vikash.kumarprasad at siemens.com Fri Jun 11 07:49:28 2021 From: vikash.kumarprasad at siemens.com (Kumar Prasad, Vikash) Date: Fri, 11 Jun 2021 07:49:28 +0000 Subject: How to flush UE context from USRP B210 Message-ID: Dear All, I am using USRP B210 for my eNB. USRP B210 is storing the context of previous connected UEs, I want to flush the UE context from USRP, could anyone suggest me how I can flush this previously connected UEs contexts? Thanks Vikash kumar prasad -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Fri Jun 11 08:35:46 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Fri, 11 Jun 2021 16:35:46 +0800 Subject: kayobe deploying Openstack - Train or Victoria Message-ID: I am running into issues with deploying Openstack using Kayobe. Is this list group the best place to raise? If not, my apologies - please could you share where I need to go? 1. kayobe hangs during deployment, does not time out, does not error out when has previously been successful and without configuration changes. 2. deployment fails due to breaking the network relating to the bridge. Also changes login password which locks out of console. Details: 1. Environment deployment: Train all-in-one host: centos7 Ansible control host: Ubuntu 18 With the first issue, I've seen this multiple times but have not been able to find the root cause. I searched online and came across other ansible users that state their playbooks were hanging and were able to resolve by clearing cache. I've tried clearing out ~/.ansible/* and /tmp/* on the control host. Also tried doing the same on the all-in-one host without success. 
This issue came about after doing a full destroy of the environment and then redeploying, making a minor config change and then redeploying. 2. Environment deployment: Victoria all-in-one host: centos8 Ansible control host: Ubuntu 20 Because I couldnt resolve the issue above and centos 8.4 is available, I decided to try and go to centos 8 and deploy Victoria. I hit 2 issues: 1. I am unable to login with "kayobe_ansible_user" using after "kayobe overcloud host configure" with "wrong password" message. Resetting the password resolves but the password seems changed again with a host configure. 2. deployment fails when "TASK [openvswitch : Ensuring OVS bridge is properly setup]" Looking at the ovs container, it's unable to add the physical interface to the bridge bond0 interface, complaining that the device is busy or is already up. I saw some log messages relating to ipv6 so I tried disabling ipv6 and redeploying but the same issue. I then rebuilt the host again and opted not to use a bond0 interface however the same loss of network occurs. If I log into the openvswitch_db container then there are CLI commands where I can delete the bridge to restore the network. So at this point, after turning off ipv6 again and running without bond0 bond, I tried another deploy wile tailing `/var/log/kolla/openvswitch/ovs-vswitchd.log` and now I do not see any errors but the network is lost to the host and the script fails to finish deployment. I've attached the logs that appear at the point the network dies: https://pasteboard.co/K65qwzR.png Are these known issues and does anyone have any information as to how I can work through or around them? Regards, Tony Pearce -------------- next part -------------- An HTML attachment was scrubbed... URL: From surabhi.kumari at siemens.com Fri Jun 11 13:04:10 2021 From: surabhi.kumari at siemens.com (Kumari, Surabhi) Date: Fri, 11 Jun 2021 13:04:10 +0000 Subject: How to flush UE context from USRP B210 In-Reply-To: References: Message-ID: In addition to the query asked by Vikash, When we connect our eNB(USRP210) to core, we get continue log for LTE_RRCConnectionReestablishmentRequest and uplink failure timer timeout. Can anyone suggest what's the reason behind these errors? 
RRC] [FRAME 00507][eNB][MOD 00][RNTI 4bbc] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE [RRC] [FRAME 00507][eNB][MOD 00][RNTI 4bbc] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1) [MAC] Removing UE 0 from Primary CC_id 0 (rnti 4bbc) [RRC] [FRAME 00557][eNB][MOD 00][RNTI 411c] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e1a8c1) [RRC] [FRAME 00557][eNB][MOD 00][RNTI 411c] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure [RRC] [FRAME 00557][eNB][MOD 00][RNTI 411c] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE [RRC] [FRAME 00557][eNB][MOD 00][RNTI 411c] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1) [MAC] Removing UE 0 from Primary CC_id 0 (rnti 411c) [RRC] [FRAME 00559][eNB][MOD 00][RNTI c84e] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e156c1) [RRC] [FRAME 00559][eNB][MOD 00][RNTI c84e] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure [RRC] [FRAME 00559][eNB][MOD 00][RNTI c84e] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE [RRC] [FRAME 00559][eNB][MOD 00][RNTI c84e] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1) [MAC] Removing UE 0 from Primary CC_id 0 (rnti c84e) [RRC] Removing UE 8413 instance, because of uplink failure timer timeout [RRC] [eNB 0] Removing UE RNTI 8413 [RRC] Put UE 8413 into freeList [MAC] rrc_mac_remove_ue: UE 8413 not found [RRC] [FRAME 00000][eNB][MOD 00][RNTI 8413] Removed UE context [RRC] [release_UE_in_freeList] remove UE 8413 from freeList [RRC] Removing UE 5ce6 instance, because of uplink failure timer timeout [RRC] [eNB 0] Removing UE RNTI 5ce6 [RRC] Put UE 5ce6 into freeList [MAC] rrc_mac_remove_ue: UE 5ce6 not found [RRC] [FRAME 00000][eNB][MOD 00][RNTI 5ce6] Removed UE context [RRC] [release_UE_in_freeList] remove UE 5ce6 from freeList [RRC] [FRAME 00611][eNB][MOD 00][RNTI baef] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e0b1e1) [RRC] [FRAME 00611][eNB][MOD 00][RNTI baef] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure [RRC] [FRAME 00611][eNB][MOD 00][RNTI baef] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE [RRC] [FRAME 00611][eNB][MOD 00][RNTI baef] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1) [MAC] Removing UE 0 from Primary CC_id 0 (rnti baef) [RRC] [FRAME 00629][eNB][MOD 00][RNTI 4750] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e1a8c1) [RRC] [FRAME 00629][eNB][MOD 00][RNTI 4750] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure [RRC] [FRAME 00629][eNB][MOD 00][RNTI 4750] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE [RRC] [FRAME 00629][eNB][MOD 00][RNTI 4750] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1) [MAC] Removing UE 0 from Primary CC_id 0 (rnti 4750) [RRC] Removing UE 81f0 instance, because of uplink failure timer timeout [RRC] [eNB 0] Removing UE RNTI 81f0 [RRC] Put UE 81f0 into freeList [MAC] rrc_mac_remove_ue: UE 81f0 not found [RRC] [FRAME 00000][eNB][MOD 00][RNTI 81f0] Removed UE context [RRC] [release_UE_in_freeList] remove UE 81f0 from freeList [RRC] [FRAME 00711][eNB][MOD 00][RNTI d37d] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e156c1) [RRC] [FRAME 00711][eNB][MOD 00][RNTI d37d] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure [RRC] [FRAME 00711][eNB][MOD 00][RNTI d37d] 
From bkslash at poczta.onet.pl Fri Jun 11 15:10:25 2021
From: bkslash at poczta.onet.pl (bkslash)
Date: Fri, 11 Jun 2021 17:10:25 +0200
Subject: [masakari] Compute service with name XXXXX not found.
In-Reply-To:
References:
Message-ID: <212EC217-274F-44B4-829B-D4C0D2F949FF@poczta.onet.pl>

> openstack compute service list --service nova-compute --host $HOSTNAME
> did you try including the same hostname in this command?
yes, and it returns the same as "openstack compute service list" but of course only for host XXXXX

> If it works and Masakari does not, I would make sure you set up
> Masakari to speak to the right Nova API.
I'm using kolla-ansible; all masakari configuration was generated based on globals.yaml and the inventory file during deployment, so it should work almost "out of the box". Does masakari speak to nova via RabbitMQ? How else can I check which port/IP masakari speaks to? In logs I can only see requests TO the masakari API, not where masakari tries to check the hypervisor...

> Though do that with debug=True in the config [DEFAULT] section.
not much in logs, even with debug enabled....

2021-06-11 14:45:49.111 959 DEBUG masakari.compute.nova [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] Creating a Nova client using "nova" user novaclient /var/lib/kolla/venv/lib/python3.8/site-packages/masakari/compute/nova.py:102
2021-06-11 14:45:49.232 959 INFO masakari.compute.nova [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] Call compute service find command to get list of matching hypervisor name 'XXXXX'
2021-06-11 14:45:49.829 959 INFO masakari.api.openstack.wsgi [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] HTTP exception thrown: Compute service with name XXXXX could not be found.
2021-06-11 14:45:49.831 959 DEBUG masakari.api.openstack.wsgi [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] Returning 400 to user: Compute service with name XXXXX could not be found. __call__ /var/lib/kolla/venv/lib/python3.8/site-packages/masakari/api/openstack/wsgi.py:1038 > On 11 Jun 2021, at 14:56, Radosław Piliszek wrote: > > On Fri, Jun 11, 2021 at 2:47 PM at wrote: >> >> Hi, thx for the answer. >>> openstack compute service list --service nova-compute --host $HOSTNAME >> so in >> >> openstack segment host create >> >> I should use name which is displayed in "Host" column, right? So that's what I do :( > > Yes. > >> openstack compute service list --service nova-compute >> >> ID Binary Host Zone Status State >> 20 nova-compute XXXXX nova enabled up >> >> openstack segment host create XXXXX COMPUTE SSH 00dd5bxxxxxx >> >> and still "Compute service with name XXXXX could not be found"..... >> >> How masakari discovers hosts? > > I wrote this already: > openstack compute service list --service nova-compute --host $HOSTNAME > did you try including the same hostname in this command? > If it works and Masakari does not, I would make sure you set up > Masakari to speak to the right Nova API. > > Finally, if all else fails, please paste (e.g. > https://paste.ubuntu.com/ ) masakari api logs for those rejected host > creations. > Though do that with debug=True in the config [DEFAULT] section. > > -yoctozepto From pierre at stackhpc.com Fri Jun 11 15:12:21 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 11 Jun 2021 17:12:21 +0200 Subject: kayobe deploying Openstack - Train or Victoria In-Reply-To: References: Message-ID: Hi Tony, This isn't a bad place to ask questions if you like email discussion. Just make sure to prefix your email subject lines with [kolla]. We are also on IRC (#openstack-kolla on the OFTC network). I will reply to your questions inline. On Fri, 11 Jun 2021 at 16:49, Tony Pearce wrote: > > I am running into issues with deploying Openstack using Kayobe. Is this list group the best place to raise? If not, my apologies - please could you share where I need to go? > > 1. kayobe hangs during deployment, does not time out, does not error out when has previously been successful and without configuration changes. > > 2. deployment fails due to breaking the network relating to the bridge. Also changes login password which locks out of console. > > Details: > 1. > Environment deployment: Train > all-in-one host: centos7 > Ansible control host: Ubuntu 18 > > With the first issue, I've seen this multiple times but have not been able to find the root cause. I searched online and came across other ansible users that state their playbooks were hanging and were able to resolve by clearing cache. I've tried clearing out ~/.ansible/* and /tmp/* on the control host. Also tried doing the same on the all-in-one host without success. This issue came about after doing a full destroy of the environment and then redeploying, making a minor config change and then redeploying. We would need more information to help you. What does your configuration look like, in particular the network and this bridge you mention? What command did you run and at which step does Kayobe hang? Are you able to SSH to your hosts successfully? > 2. > Environment deployment: Victoria > all-in-one host: centos8 > Ansible control host: Ubuntu 20 > > Because I couldnt resolve the issue above and centos 8.4 is available, I decided to try and go to centos 8 and deploy Victoria. I hit 2 issues: > > 1. 
I am unable to login with "kayobe_ansible_user" after "kayobe overcloud host configure" — I get a "wrong password" message. Resetting the password resolves it, but the password seems to be changed again by a host configure.

Kayobe doesn't set a password for the "stack" user, you should use SSH keys to connect.

> 2. deployment fails when "TASK [openvswitch : Ensuring OVS bridge is properly setup]"
> Looking at the ovs container, it's unable to add the physical interface to the bridge bond0 interface, complaining that the device is busy or is already up. I saw some log messages relating to ipv6 so I tried disabling ipv6 and redeploying, but the same issue is seen.
> I then rebuilt the host again and opted not to use a bond0 interface, however the same loss of network occurs. If I log into the openvswitch_db container then there are CLI commands where I can delete the bridge to restore the network.
>
> So at this point, after turning off ipv6 again and running without a bond0 bond, I tried another deploy while tailing `/var/log/kolla/openvswitch/ovs-vswitchd.log`, and now I do not see any errors but the network is lost to the host and the script fails to finish deployment. I've attached the logs that appear at the point the network dies: https://pasteboard.co/K65qwzR.png

Again it would be good to see your network configuration. Do you have a single interface without any VLAN tagging that you are trying to use with Kayobe?

From tkajinam at redhat.com Fri Jun 11 15:46:04 2021
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Sat, 12 Jun 2021 00:46:04 +0900
Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ?
In-Reply-To:
References:
Message-ID:

On Fri, Jun 11, 2021 at 8:48 PM Oliver Walsh wrote:

> Hi Takashi,
>
> On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami wrote:
>
>> Hi All,
>>
>> I've been working on bug 1926693[1], and am lost about the reasonable
>> solutions we expect. Ideally I'd need to bring this topic in the team meeting
>> but because of the timezone gap and complicated background, I'd like to
>> gather some feedback in ml first.
>>
>> [1] https://bugs.launchpad.net/neutron/+bug/1926693
>>
>> TL;DR
>> Which one(or ones) would be reasonable solutions for this issue ?
>> (1) https://review.opendev.org/c/openstack/neutron/+/763563
>> (2) https://review.opendev.org/c/openstack/neutron/+/788893
>> (3) Implement something different
>>
>> The issue I reported in the bug is that there is an inconsistency between
>> nova and neutron about the way to determine a hypervisor name.
>> Currently neutron uses socket.gethostname() (which always returns shortname)
>
> socket.gethostname() can return fqdn or shortname -
> https://docs.python.org/3/library/socket.html#socket.gethostname.

You are correct and my statement was not accurate. socket.gethostname() returns whatever the gethostname system call returns, and since gethostname/sethostname accept both an FQDN and a short name, socket.gethostname() can return either of the two.

However, the root problem is that this logic is not exactly the same as the one used in each virt driver. Of course we could require people to use the "correct" format for the canonical name as well as the "hostname", but fixing this problem in neutron would be much more helpful, considering the effect of forcing users to "fix" their hostname/canonical name formatting at this point.
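For instance, the mismatch is easy to see on a host whose /etc/hosts resolves the address to an FQDN first — the hostnames below are illustrative:

    $ python3 -c 'import socket; print(socket.gethostname())'
    compute0
    $ python3 -c 'import socket; print(socket.getfqdn())'
    compute0.mydomain
    $ hostname -f
    compute0.mydomain

The short name is what neutron currently derives, while a virt driver that canonicalises to the FQDN will register its resource provider under compute0.mydomain, so the two never match.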
> I've seen cases where it switched from short to fqdn but I'm not sure of
> the root cause - DHCP lease setting a hostname/domainname perhaps.
>
> Thanks,
> Ollie

>> to determine a hypervisor name to search the corresponding resource provider.
>> On the other hand, nova uses libvirt's getHostname function (if the libvirt driver is used)
>> which returns a canonical name. A canonical name can be a shortname or an FQDN (*1),
>> and if an FQDN is used then neutron and nova never agree.
>>
>> (*1)
>> IMO this is likely to happen in real deployments. For example, TripleO uses
>> FQDNs for canonical names.
>>
>> Neutron already provides the resource_provider_hypervisors option
>> to override the hypervisor name used. However, because this option accepts
>> a map between interface and hypervisor, setting this parameter requires
>> a very redundant description, especially when a compute node has multiple
>> interfaces/bridges. The following example shows how redundant the current
>> requirement is.
>> ~~~
>> [OVS]
>> resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\
>> br-data3:1024:1024,br-data4:1024:1024
>> resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\
>> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain
>> ~~~
>>
>> I've submitted a change to propose a new single parameter to override
>> the base hypervisor name, but this is currently -2ed, mainly because
>> I lacked analysis of the root cause of the mismatch when I proposed it.
>> (1) https://review.opendev.org/c/openstack/neutron/+/763563
>>
>> On the other hand, I submitted a different change to neutron which implements
>> the logic to get a hypervisor name which is fully compatible with libvirt.
>> While this would save users from even overriding hypervisor names, I'm aware
>> that this might break other virt drivers which depend on a different logic
>> to generate a hypervisor name. IMO the patch is still useful considering
>> the libvirt driver would be the most popular option now, but I'm not fully
>> aware of the impact on the other drivers, especially because I don't know
>> which virt drivers would support the minimum QoS feature now.
>> (2) https://review.opendev.org/c/openstack/neutron/+/788893/
>>
>> In the review of (2), Sean mentioned implementing a logic to determine
>> an appropriate resource provider (3) even if there is a mismatch in
>> host name format, but I'm not sure how I would implement that, tbh.
>>
>> My current thought is to merge (1) as a quick solution first, and discuss
>> whether we should merge (2), but I'd like to ask for some feedback about
>> this plan (like we should NOT merge (2)).
>>
>> I'd appreciate your thoughts about this $topic.
>>
>> Thank you,
>> Takashi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
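A practical way to see which name each side is actually using is to ask placement and nova directly — a sketch assuming the osc-placement client plugin is installed; output names are illustrative:

    # the resource provider names placement knows about (nova's canonical names)
    $ openstack resource provider list -c uuid -c name
    # the hypervisor names as nova reports them
    $ openstack hypervisor list
    # the short name neutron's agent would derive locally on the compute node
    $ hostname -s

If the resource provider shows compute0.mydomain while hostname -s gives compute0, the agent's resource provider lookup will miss.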
From DHilsbos at performair.com Fri Jun 11 16:23:43 2021
From: DHilsbos at performair.com (DHilsbos at performair.com)
Date: Fri, 11 Jun 2021 16:23:43 +0000
Subject: [ops] Automatically recover guests from down host
Message-ID: <0670B960225633449A24709C291A5252511B1193@COM01.performair.local>

All;

What is the most effective means of having the OpenStack cluster restart guests when a hypervisor host fails? We're running OpenStack Victoria, installed manually through packages.

My apologies, but my Google foo fails me on this issue; I don't know how to ask it the question.

I recognize that OpenStack covers a great many different deployment scenarios, and in many of these this isn't feasible. In our case, images, volumes, and ephemeral storage are all on our Ceph cluster, so all storage is always available to all hypervisor hosts.

I also recognize that resource restrictions mean that even in an environment such as mine, not all failed guests may be able to be restarted on new hosts. I'm ok with a dumb best effort, at least for now.

Is there something already present in OpenStack which would allow this?

Thank you,

Dominic L. Hilsbos, MBA
Vice President - Information Technology
Perform Air International Inc.
DHilsbos at PerformAir.com
www.PerformAir.com

From syedammad83 at gmail.com Fri Jun 11 18:10:04 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Fri, 11 Jun 2021 23:10:04 +0500
Subject: [ops] Automatically recover guests from down host
In-Reply-To: <0670B960225633449A24709C291A5252511B1193@COM01.performair.local>
References: <0670B960225633449A24709C291A5252511B1193@COM01.performair.local>
Message-ID:

Hi,

There is an option in nova to evacuate a host. Triggering this will rebuild all the VMs running on the failed host and schedule them on another host or on a reserved host.

You can also try OpenStack Masakari, which is the instance HA service for OpenStack.

Ammad

On Fri, Jun 11, 2021 at 9:28 PM wrote:
> All;
>
> What is the most effective means of having the OpenStack cluster restart
> guests when a hypervisor host fails? We're running OpenStack Victoria,
> installed manually through packages.
>
> My apologies, but my Google foo fails me on this issue; I don't know how
> to ask it the question.
>
> I recognize that OpenStack covers a great many different deployment
> scenarios, and in many of these this isn't feasible. In our case, images,
> volumes, and ephemeral storage are all on our Ceph cluster, so all storage
> is always available to all hypervisor hosts.
>
> I also recognize that resource restrictions mean that even in an
> environment such as mine, not all failed guests may be able to be restarted
> on new hosts. I'm ok with a dumb best effort, at least for now.
>
> Is there something already present in OpenStack which would allow this?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President - Information Technology
> Perform Air International Inc.
> DHilsbos at PerformAir.com
> www.PerformAir.com
>
>

--
Regards,

Syed Ammad Ali
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
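A minimal manual version of the evacuate flow looks like this — the host name is illustrative, and host-evacuate assumes the legacy novaclient is installed alongside the unified client:

    # mark the dead node's compute service as disabled so nothing new lands there
    $ openstack compute service set --disable failed-host nova-compute
    # rebuild all of its instances on other hosts; with shared Ceph storage
    # the existing disks and volumes are reattached rather than rebuilt
    $ nova host-evacuate failed-host

Masakari automates exactly this flow once the host is part of a failover segment and a host monitor reports the failure.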
From gmann at ghanshyammann.com Fri Jun 11 22:44:48 2021
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 11 Jun 2021 17:44:48 -0500
Subject: [all][tc] What's happening in Technical Committee: summary 11th June, 21: Reading: 5 min
Message-ID: <179fd3fcf2b.c8bcaf5d574870.5064027474273690788@ghanshyammann.com>

Hello Everyone,

Here is last week's summary of the Technical Committee activities.

1. What we completed this week:
=========================
* Added the 'vulnerability:managed' tag for os-brick[1].
* Replaced the ATC (Active Technical Contributors) terminology with AC (Active Contributors).
** The TC resolution is also merged [2].

2. TC Meetings:
============
* The TC held this week's meeting on Thursday; you can find the full meeting logs in the below link:
- https://meetings.opendev.org/meetings/tc/2021/tc.2021-06-10-15.00.log.html
* We will have next week's meeting on June 17th, Thursday 15:00 UTC[3].

3. Activities In progress:
==================
TC Tracker for Xena cycle
------------------------------
The TC is using the etherpad[4] for Xena cycle working items. We will be checking and updating the status biweekly in the same etherpad.

Open Reviews
-----------------
* One open review for ongoing activities[5].

Migration from Freenode to OFTC
-----------------------------------------
* All the required work for this migration is tracked in this etherpad[6].
* Today we are changing the Freenode channels' topics about this migration.
* We are in the 'Communicate with community' stage, where all projects need to update their contributor docs etc. Please finish this in your project and mark the progress in the etherpad[6].
* This migration has also been published in the Open Infra newsletter's OpenStack news[8].

'Y' release naming process
-------------------------------
* The Y release naming nomination is closed now. I have started the CIVS poll with TC members as the electorate.

Retiring devstack-gate
--------------------------
* As communicated over email, we are finally retiring devstack-gate. It will keep supporting the stable branches until stable/wallaby goes to EOL[9]. The governance patch for official retirement is also up[10].

Updating project-team-guide for the meeting channel preference
----------------------------------------------------------------------------
* As communicated over email during the migration to OFTC, we are adding the meeting channel preference for each project's own channel in the project team guide[11].

Test support for TLS default:
----------------------------------
Rico has started a separate email thread about testing with tls-proxy enabled[12]; we encourage projects to participate in that testing and help to enable tls-proxy in gate testing.

4. How to contact the TC:
====================
If you would like to discuss or give feedback to the TC, you can reach out to us in multiple ways:

1. Email: you can send email with the tag [tc] on the openstack-discuss ML[13].
2. Weekly meeting: The Technical Committee conducts a weekly meeting every Thursday 15 UTC [14]
3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [15]
4. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.

[1] https://review.opendev.org/c/openstack/governance/+/794680
[2] https://review.opendev.org/c/openstack/governance/+/794366
[3] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[4] https://etherpad.opendev.org/p/tc-xena-tracker
[5] https://review.opendev.org/q/project:openstack/governance+status:open
[6] https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc
[7] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022780.html
[8] https://superuser.openstack.org/articles/inside-open-infrastructure-the-latest-from-the-openinfra-foundation-4/
[9] https://review.opendev.org/q/topic:%22deprecate-devstack-gate%22+(status:open%20OR%20status:merged)
[10] https://review.opendev.org/c/openstack/governance/+/795385
[11] https://review.opendev.org/c/openstack/project-team-guide/+/794839
[12] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html
[13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting
[15] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours

-gmann

From whayutin at redhat.com Sun Jun 13 23:24:52 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Sun, 13 Jun 2021 17:24:52 -0600
Subject: master jobs down
Message-ID:

Greetings,

Having some issues w/ infra...
https://bugs.launchpad.net/tripleo/+bug/1931821

Details are in the bug.. this will block upstream master jobs.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From whayutin at redhat.com Sun Jun 13 23:29:05 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Sun, 13 Jun 2021 17:29:05 -0600 Subject: master jobs down In-Reply-To: References: Message-ID: sorry.. this was meant for [tripleo] and "infra" is not upstream openstack infra.. this is rdo infra. On Sun, Jun 13, 2021 at 5:24 PM Wesley Hayutin wrote: > Greetings, > > Having some issues w/ infra... > https://bugs.launchpad.net/tripleo/+bug/1931821 > > Details are in the bug.. this will block upstream master jobs. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Sun Jun 13 23:29:33 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Sun, 13 Jun 2021 17:29:33 -0600 Subject: [tripleo] master jobs down Message-ID: Having some issues w/ infra... https://bugs.launchpad.net/tripleo/+bug/1931821 Details are in the bug.. this will block upstream master jobs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Jun 13 23:30:20 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 13 Jun 2021 23:30:20 +0000 Subject: [tripleo] master jobs down In-Reply-To: References: Message-ID: <20210613233019.3axtibwd34iz4jk7@yuggoth.org> On 2021-06-13 17:24:52 -0600 (-0600), Wesley Hayutin wrote: > Greetings, > > Having some issues w/ infra... > https://bugs.launchpad.net/tripleo/+bug/1931821 > > Details are in the bug.. this will block upstream master jobs. I've added a tripleo subject tag in my reply, since it caught my attention and I thought you were saying there was a new problem I hadn't heard about yet in the OpenDev CI infrastructure. Or was this intended for a different mailing list? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Sun Jun 13 23:34:02 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 13 Jun 2021 23:34:02 +0000 Subject: [tripleo] master jobs down In-Reply-To: <20210613233019.3axtibwd34iz4jk7@yuggoth.org> References: <20210613233019.3axtibwd34iz4jk7@yuggoth.org> Message-ID: <20210613233402.j42lbj2kj4kfp75a@yuggoth.org> On 2021-06-13 23:30:20 +0000 (+0000), Jeremy Stanley wrote: > On 2021-06-13 17:24:52 -0600 (-0600), Wesley Hayutin wrote: > > Greetings, > > > > Having some issues w/ infra... > > https://bugs.launchpad.net/tripleo/+bug/1931821 > > > > Details are in the bug.. this will block upstream master jobs. > > I've added a tripleo subject tag in my reply, since it caught my > attention and I thought you were saying there was a new problem I > hadn't heard about yet in the OpenDev CI infrastructure. > > Or was this intended for a different mailing list? Nevermind, I see you sent some follow-up clarification while my question was crossing the electron seas. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tonyppe at gmail.com Mon Jun 14 06:20:09 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 14 Jun 2021 14:20:09 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Pierre, thanks for replying to my message. 
To install kayobe I followed the documentation, which summarises: installing a few system packages, setting up the kayobe virtual environment, and then pulling the correct kayobe git version for the OpenStack release to be installed. After configuring the yaml files I have run these commands:

- kayobe control host bootstrap
- kayobe overcloud host configure -> this one is failing with /usr/libexec/platform-python: not found

After reading your message on the weekend I concluded that maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git and manually transferred the configuration over to the new directory structure on the ansible host and set up again as per the guide, but the same issue is seen.

What I ended up doing to try and resolve it was finding where this "platform-python" is coming from. It is coming from the virtual environment which is being set up during the kayobe ansible host bootstrap. Initially, I found the base.yml and it looks like it tries to match what the host is. I noticed that there is no ubuntu 20 listed there, so I created it, however it did not resolve the issue.

So then I tried systematically replacing this reference in the other files found in the same location "venvs\kayobe\share\kayobe\ansible". The file I changed which allowed it to progress is "kayobe-target-venv.yml".

But unfortunately it fails a bit further on, failing to find an SELinux package [1].

Seeing as the error mentions SELinux (a Red Hat security feature not installed on Ubuntu), could the root cause be that kayobe is not matching the host as Ubuntu? I did already set in kayobe that I am using the Ubuntu OS distribution within globals.yml [2].

Are there any extra steps that I need to complete that maybe are not listed in the documentation / guide?

[1] TASK [MichaelRigart.interfaces : Debian | install current/latest network package - Pastebin.com
[2] ---# Kayobe global configuration.######################################### - Pastebin.com

Regards,

Tony Pearce

On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote:
>
> Hi Tony,
>
> Kayobe doesn't use platform-python anymore, on both stable/wallaby and
> stable/victoria:
> https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98
>
> Can you double-check what version you are using, and share how you
> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts.
>
> Best wishes,
> Pierre
>
> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote:
> >
> > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is not found during the host configure part.
> >
> > PLAY [Verify that the Kayobe Ansible user account is accessible]
> > TASK [Verify that a command can be executed]
> >
> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
> >
> > Python3 is installed on the host.
When searching where this > platform-python is coming from it returns the kolla-ansible virtual envs: > > > > $ grep -rni -e "platform-python" > > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: > '8': /usr/libexec/platform-python > > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: > - /usr/libexec/platform-python > > > > I had a look through the deployment guide for Kayobe Wallaby and didnt > see a note about changing this. > > > > Do I need to do further steps to support the ubuntu overcloud host? I > have already set (as per the doc): > > > > os_distribution: ubuntu > > os_release: focal > > > > Regards, > > > > Tony Pearce > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon Jun 14 06:59:45 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 14 Jun 2021 08:59:45 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: Hello: I'll approve [1] although I see no need for it. Having "resource_provider_hypervisors", there is no need for a second configuration parameter to provide the same information, regardless of the comfort of providing one single string and not a list of tuples. Regards. [1]https://review.opendev.org/c/openstack/neutron/+/763563 On Fri, Jun 11, 2021 at 5:51 PM Takashi Kajinami wrote: > On Fri, Jun 11, 2021 at 8:48 PM Oliver Walsh wrote: > >> Hi Takashi, >> >> On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami >> wrote: >> >>> Hi All, >>> >>> >>> I've been working on bug 1926693[1], and am lost about the reasonable >>> solutions we expect. Ideally I'd need to bring this topic in the team >>> meeting >>> but because of the timezone gap and complicated background, I'd like to >>> gather some feedback in ml first. >>> >>> [1] https://bugs.launchpad.net/neutron/+bug/1926693 >>> >>> TL;DR >>> Which one(or ones) would be reasonable solutions for this issue ? >>> (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>> (2) https://review.opendev.org/c/openstack/neutron/+/788893 >>> (3) Implement something different >>> >>> The issue I reported in the bug is that there is an inconsistency between >>> nova and neutron about the way to determine a hypervisor name. >>> Currently neutron uses socket.gethostname() (which always returns >>> shortname) >>> >> >> socket.gethostname() can return fqdn or shortname - >> https://docs.python.org/3/library/socket.html#socket.gethostname. >> > You are correct and my statement was not accurate. > So socket.gethostname() returns what is returned by gethostname system > call, > and gethostname/sethostname accept both FQDN and short name, > socket.gethostname() > can return one of FQDN or short name. > > However the root problem is that this logic is not completely same as the > ones used > in each virt driver. Of cause we can require people the "correct" format > usage for > canonical name as well as "hostname", but fixthing this problem in neutron > would > be much more helpful considering the effect caused by enforcing users to > "fix" > hostname/canonical name formatting at this point. > > >> I've seen cases where it switched from short to fqdn but I'm not sure of >> the root cause - DHCP lease setting a hostname/domainname perhaps. >> >> Thanks, >> Ollie >> >> to determine a hypervisor name to search the corresponding resource >>> provider. 
>>> On the other hand, nova uses libvirt's getHostname function (if libvirt >>> driver is used) >>> which returns a canonical name. Canonical name can be shortname or FQDN >>> (*1) >>> and if FQDN is used then neutron and nova never agree. >>> >>> (*1) >>> IMO this is likely to happen in real deployments. For example, TripelO >>> uses >>> FQDN for canonical names. >>> >> >>> Neutron already provides the resource_provider_defauly_hypervisors option >>> to override a hypervisor name used. However because this option accepts >>> a map between interface and hypervisor, setting this parameter requires >>> very redundant description especially when a compute node has multiple >>> interfaces/bridges. The following example shows how redundant the current >>> requirement is. >>> ~~~ >>> [OVS] >>> resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ >>> br-data3:1024,1024,br-data4,1024:1024 >>> resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ >>> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain >>> ~~~ >>> >>> I've submitted a change to propose a new single parameter to override >>> the base hypervisor name but this is currently -2ed, mainly because >>> I lacked analysis about the root cause of mismatch when I proposed this. >>> (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>> >>> >>> On the other hand, I submitted a different change to neutron which >>> implements >>> the logic to get a hypervisor name which is fully compatible with >>> libvirt. >>> While this would save users from even overriding hypervisor names, I'm >>> aware >>> that this might break the other virt driver which depends on a different >>> logic >>> to generate a hypervisor name. IMO the patch is still useful considering >>> the libvirt driver would be the most popular option now, but I'm not >>> fully >>> aware of the impact on the other drivers, especially because I don't know >>> which virt driver would support the minimum QoS feature now. >>> (2) https://review.opendev.org/c/openstack/neutron/+/788893/ >>> >>> >>> In the review of (2), Sean mentioned implementing a logic to determine >>> an appropriate resource provider(3) even if there is a mismatch about >>> host name format, but I'm not sure how I would implement that, tbh. >>> >>> >>> My current thought is to merge (1) as a quick solution first, and >>> discuss whether >>> we should merge (2), but I'd like to ask for some feedback about this >>> plan >>> (like we should NOT merge (2)). >>> >>> I'd appreciate your thoughts about this $topic. >>> >>> Thank you, >>> Takashi >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Jun 14 07:24:42 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 14 Jun 2021 09:24:42 +0200 Subject: [ops] Automatically recover guests from down host In-Reply-To: References: <0670B960225633449A24709C291A5252511B1193@COM01.performair.local> Message-ID: On Fri, Jun 11, 2021 at 8:13 PM Ammad Syed wrote: > > Hi, > > There is an option in nova to evacuate host. Triggering this will rebuild all the vms running on failed host to be scheduled on other host or reserved host. > > You can also try Openstack Masakri that is the instance HA service for openstack. 
Just following up on this: the project is called Masakari and has docs at:
https://docs.openstack.org/masakari/latest/

The team can be reached on OFTC ( https://www.oftc.net/ ) at #openstack-masakari or via this mailing list with the [masakari] tag in the subject.

> Ammad
>
> On Fri, Jun 11, 2021 at 9:28 PM wrote:
>>
>> All;
>>
>> What is the most effective means of having the OpenStack cluster restart guests when a hypervisor host fails? We're running OpenStack Victoria, installed manually through packages.
>>
>> My apologies, but my Google foo fails me on this issue; I don't know how to ask it the question.
>>
>> I recognize that OpenStack covers a great many different deployment scenarios, and in many of these this isn't feasible. In our case, images, volumes, and ephemeral storage are all on our Ceph cluster, so all storage is always available to all hypervisor hosts.
>>
>> I also recognize that resource restrictions mean that even in an environment such as mine, not all failed guests may be able to be restarted on new hosts. I'm ok with a dumb best effort, at least for now.
>>
>> Is there something already present in OpenStack which would allow this?

One of the goals Masakari has is to introduce a system of recovery prioritisation to go beyond the "dumb best effort" mentioned. For now it's pretty simple, but it matches your requirements.

-yoctozepto

From radoslaw.piliszek at gmail.com Mon Jun 14 07:35:07 2021
From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=)
Date: Mon, 14 Jun 2021 09:35:07 +0200
Subject: [masakari] Compute service with name XXXXX not found.
In-Reply-To: <212EC217-274F-44B4-829B-D4C0D2F949FF@poczta.onet.pl>
References: <212EC217-274F-44B4-829B-D4C0D2F949FF@poczta.onet.pl>
Message-ID:

The line

> 2021-06-11 14:45:49.829 959 INFO masakari.api.openstack.wsgi [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] HTTP exception thrown: Compute service with name XXXXX could not be found.

suggests that nova actively disagrees that this compute node actually exists.

As for the exercised behaviour: this is tested in both the Masakari and Kolla Ansible CIs, and it works. I am afraid the answer to why this fails lies in the format of that hidden XXXXX. At the moment, I can't really think of how that could affect the outcome. Is XXXXX 100% the same between the different logs?

If you can't somehow share how XXXXX looks, then you might want to check the Nova API logs (again, debug=True might help a bit) and compare how the openstack client query works vs how the Masakari query works. Perhaps there is a glitch at some stage that causes the XXXXX to get garbled. You can also append --debug to the openstack commands to get the client side of the conversation.

On Fri, Jun 11, 2021 at 5:10 PM bkslash wrote:
>
> > openstack compute service list --service nova-compute --host $HOSTNAME
> > did you try including the same hostname in this command?
> yes, and it returns the same as "openstack compute service list" but of course only for host XXXXX
>
> > If it works and Masakari does not, I would make sure you set up
> > Masakari to speak to the right Nova API.
> I'm using kolla-ansible; all masakari configuration was generated based on globals.yaml and the inventory file during deployment, so it should work almost "out of the box". Does masakari speak to nova via RabbitMQ? How else can I check which port/IP masakari speaks to? In logs I can only see requests TO the masakari API, not where masakari tries to check the hypervisor...

Masakari speaks to Nova via the Nova API only.
If you used Kolla Ansible, then it's set up correctly unless you manually overrode that. By default, Masakari looks up the Nova endpoint from the Keystone catalogue.

-yoctozepto
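A quick way to rule out a hostname mismatch is to feed the exact string Nova reports back into the segment host creation, with client-side tracing on — the segment ID below is the (already elided) one from the thread:

    # the exact Host string nova-compute registered under
    $ openstack compute service list --service nova-compute -f value -c Host
    # re-run the failing call with full request/response tracing
    $ openstack --debug segment host create XXXXX COMPUTE SSH 00dd5bxxxxxx

If the two sides still disagree, the --debug output shows the hostname exactly as it is sent over the API, which should make any garbling visible.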
From mark at stackhpc.com Mon Jun 14 08:10:31 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Mon, 14 Jun 2021 09:10:31 +0100
Subject: Wallaby install via kayobe onto ubuntu 20 all in one host
In-Reply-To:
References:
Message-ID:

On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote:
>
> Hi Pierre, thanks for replying to my message.
>
> To install kayobe I followed the documentation, which summarises: installing a few system packages, setting up the kayobe virtual environment, and then pulling the correct kayobe git version for the OpenStack release to be installed. After configuring the yaml files I have run these commands:
>
> - kayobe control host bootstrap
> - kayobe overcloud host configure -> this one is failing with /usr/libexec/platform-python: not found
>
> After reading your message on the weekend I concluded that maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git and manually transferred the configuration over to the new directory structure on the ansible host and set up again as per the guide, but the same issue is seen.
>
> What I ended up doing to try and resolve it was finding where this "platform-python" is coming from. It is coming from the virtual environment which is being set up during the kayobe ansible host bootstrap. Initially, I found the base.yml and it looks like it tries to match what the host is. I noticed that there is no ubuntu 20 listed there, so I created it, however it did not resolve the issue.
>
> So then I tried systematically replacing this reference in the other files found in the same location "venvs\kayobe\share\kayobe\ansible". The file I changed which allowed it to progress is "kayobe-target-venv.yml".
>
> But unfortunately it fails a bit further on, failing to find an SELinux package [1]
>
> Seeing as the error mentions SELinux (a Red Hat security feature not installed on Ubuntu), could the root cause be that kayobe is not matching the host as ubuntu?
>> > >> > PLAY [Verify that the Kayobe Ansible user account is accessible] >> > TASK [Verify that a command can be executed] >> > >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} >> > >> > Python3 is installed on the host. When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: >> > >> > $ grep -rni -e "platform-python" >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python >> > >> > I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. >> > >> > Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc): >> > >> > os_distribution: ubuntu >> > os_release: focal >> > >> > Regards, >> > >> > Tony Pearce >> > From tonyppe at gmail.com Mon Jun 14 08:40:40 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 14 Jun 2021 16:40:40 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Mark, I followed this guide to do a "git clone" specifying the branch "-b" to "stable/wallaby" [1]. What additional steps do I need to do to get the latest commits? [1] OpenStack Docs: Overcloud Kind regards, Tony Pearce On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: > On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: > > > > Hi Pierre, thanks for replying to my message. > > > > To install kayobe I followed the documentation which summarise: > installing a few system packages and setting up the kayobe virtual > environment and then pulling the correct kayobe git version for the > openstack to be installed. After configuring the yaml files I have run > these commands: > > > > - kayobe control host bootstrap > > - kayobe overcloud host configure -> this one is failing with > /usr/libexec/platform-python: not found > > > > After reading your message on the weekend I concluded that maybe I had > done something wrong. Today, I re-pulled the kayobe wallaby git and > manually transferred the configuration over to the new directory structure > on the ansible host and set up again as per the guide but the same issue is > seen. > > > > What I ended up doing to try and resolve was finding where this > "platform-python" is coming from. It is coming from the virtual environment > which is being set up during the kayobe ansible host bootstrap. Initially, > I found the base.yml and it looks like it tries to match what the host is. > I noticed that there is no ubuntu 20 listed there so I created it however > it did not resolve the issue. > > > > So then I tried systematically replacing this reference in the other > files found in the same location "venvs\kayobe\share\kayobe\ansible". The > file I changed which allowed it to progress is "kayobe-target-venv.yml" > > > > But unfortunately it fails a bit further on, failing to find an selinux > package [1] > > > > Seeing as the error is mentioning selinux (a RedHat security feature not > installed on ubuntu) could the root cause issue be that kayobe is not > matching the host as ubuntu? 
I did already set in kayobe that I am using > ubuntu OS distribution within globals.yml [2]. > > > > Are there any extra steps that I need to complete that maybe are not > listed in the documentation / guide? > > > > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest > network package - Pastebin.com > > [2] ---# Kayobe global > configuration.######################################### - Pastebin.com > > Hi Tony, > > That's definitely not a recent Wallaby checkout you're using. Ubuntu > no longer uses that MichaelRigart.interfaces role. Check that you have > recent commits. Here is the most recent on stable/wallaby: > 13169077aaec0f7a28ae1f15b419dafc2456faf7. > > Mark > > > > > Regards, > > > > Tony Pearce > > > > > > > > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote: > >> > >> Hi Tony, > >> > >> Kayobe doesn't use platform-python anymore, on both stable/wallaby and > >> stable/victoria: > >> https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 > >> > >> Can you double-check what version you are using, and share how you > >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. > >> > >> Best wishes, > >> Pierre > >> > >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: > >> > > >> > I'm trying to run "kayobe overcloud host configure" against an ubuntu > 20 machine to deploy Wallaby. I'm getting an error that python is not found > during the host configure part. > >> > > >> > PLAY [Verify that the Kayobe Ansible user account is accessible] > >> > TASK [Verify that a command can be executed] > >> > > >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": > "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": > "", "msg": "The module failed to execute correctly, you probably need to > set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} > >> > > >> > Python3 is installed on the host. When searching where this > platform-python is coming from it returns the kolla-ansible virtual envs: > >> > > >> > $ grep -rni -e "platform-python" > >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: > '8': /usr/libexec/platform-python > >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: > - /usr/libexec/platform-python > >> > > >> > I had a look through the deployment guide for Kayobe Wallaby and > didnt see a note about changing this. > >> > > >> > Do I need to do further steps to support the ubuntu overcloud host? I > have already set (as per the doc): > >> > > >> > os_distribution: ubuntu > >> > os_release: focal > >> > > >> > Regards, > >> > > >> > Tony Pearce > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Jun 14 10:30:32 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 14 Jun 2021 11:30:32 +0100 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: On Sat, 2021-06-12 at 00:46 +0900, Takashi Kajinami wrote: > On Fri, Jun 11, 2021 at 8:48 PM Oliver Walsh wrote: > > Hi Takashi, > > > > On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami > > wrote: > > > Hi All, > > > > > > > > > I've been working on bug 1926693[1], and am lost about the > > > reasonable > > > solutions we expect. Ideally I'd need to bring this topic in the > > > team meeting > > > but because of the timezone gap and complicated background, I'd > > > like to > > > gather some feedback in ml first. 
> > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > > > > > TL;DR > > >  Which one(or ones) would be reasonable solutions for this issue ? > > >   (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > >   (2) https://review.opendev.org/c/openstack/neutron/+/788893 > > >   (3) Implement something different > > > > > > The issue I reported in the bug is that there is an inconsistency > > > between > > > nova and neutron about the way to determine a hypervisor name. > > > Currently neutron uses socket.gethostname() (which always returns > > > shortname) > > > > > > > > > socket.gethostname() can return fqdn or shortname -  > > https://docs.python.org/3/library/socket.html#socket.gethostname. > > > > You are correct and my statement was not accurate. > So socket.gethostname() returns what is returned by gethostname system > call, > and gethostname/sethostname accept both FQDN and short name, > socket.gethostname() > can return one of FQDN or short name. > > However the root problem is that this logic is not completely same as > the ones used > in each virt driver. Of cause we can require people the "correct" > format usage for > canonical name as well as "hostname", but fixthing this problem in > neutron would > be much more helpful considering the effect caused by enforcing users > to "fix" > hostname/canonical name formatting at this point. this is not really something that can be fixed in neutron we can either create a common funciton in oslo.utils or placement-lib that we can use in nova, neutron and all other project or we can use the config option. if we want to "fix" this in neutron then neutron should either try looking up the RP using the host name and then fall back to using the fqdn or we shoudl look at using the hypervior api as we discussed a few years ago when this last came up http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html i dont think neutron shoudl know anything about hyperviors so i would just proceed with the new config option that takashi has proposed but i would not implemente Rodolfo's solution of adding a hypervisor_type. just as nova has no awareness of the neutron backend and trys to treat all fo them the same neutron should remain hypervior independent and we should look to provide common code that can be reused to identify the RP in a seperate lib as a longer term solution. for many deployment that do not set the fqdn as the canonical host name in /etc/host the current default behavior works out of the box whatever solution we take we need to ensure that no existing deployment is affected by the change which means we cannot default to only using the fqdn or similar as that would be an upgrade breakage so we have to maintain the current behavior by default and enhance neutron to either fall back to the fqdn if the hostname based lookup fails or use the new config intoduc ed by takashi's patch where the fqdn is used as the server canonical hostname. >   > > I've seen cases where it switched from short to fqdn but I'm not sure > > of the root cause - DHCP lease setting a hostname/domainname perhaps. > > > > Thanks, > > Ollie > > > > > to determine a hypervisor name to search the corresponding resource > > > provider. > > > On the other hand, nova uses libvirt's getHostname function (if > > > libvirt driver is used) > > > which returns a canonical name. Canonical name can be shortname or > > > FQDN (*1) > > > and if FQDN is used then neutron and nova never agree. 
> > > > > > (*1) > > > IMO this is likely to happen in real deployments. For example, > > > TripelO uses > > > FQDN for canonical names.   > > > > > > > > > Neutron already provides the resource_provider_defauly_hypervisors > > > option > > > to override a hypervisor name used. However because this option > > > accepts > > > a map between interface and hypervisor, setting this parameter > > > requires > > > very redundant description especially when a compute node has > > > multiple > > > interfaces/bridges. The following example shows how redundant the > > > current > > > requirement is. > > > ~~~ > > > [OVS] > > > resource_provider_bandwidths=br-data1:1024:1024,br- > > > data2:1024:1024,\ > > > br-data3:1024,1024,br-data4,1024:1024 > > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > > > compute0.mydomain,br-data3:compute0.mydomain,br- > > > data4:compute0.mydomain > > > ~~~ > > > > > > I've submitted a change to propose a new single parameter to > > > override > > > the base hypervisor name but this is currently -2ed, mainly because > > > I lacked analysis about the root cause of mismatch when I proposed > > > this. > > >  (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > > > > > > On the other hand, I submitted a different change to neutron which > > > implements > > > the logic to get a hypervisor name which is fully compatible with > > > libvirt. > > > While this would save users from even overriding hypervisor names, > > > I'm aware > > > that this might break the other virt driver which depends on a > > > different logic > > > to generate a hypervisor name. IMO the patch is still useful > > > considering > > > the libvirt driver would be the most popular option now, but I'm > > > not fully > > > aware of the impact on the other drivers, especially because I > > > don't know > > > which virt driver would support the minimum QoS feature now. > > >  (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > > > > > > In the review of (2), Sean mentioned implementing a logic to > > > determine > > > an appropriate resource provider(3) even if there is a mismatch > > > about > > > host name format, but I'm not sure how I would implement that, tbh. > > > > > > > > > My current thought is to merge (1) as a quick solution first, and > > > discuss whether > > > we should merge (2), but I'd like to ask for some feedback about > > > this plan > > > (like we should NOT merge (2)). > > > > > > I'd appreciate your thoughts about this $topic. > > > > > > Thank you, > > > Takashi From mark at stackhpc.com Mon Jun 14 10:36:34 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 14 Jun 2021 11:36:34 +0100 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: On Mon, 14 Jun 2021 at 09:40, Tony Pearce wrote: > > Hi Mark, > > I followed this guide to do a "git clone" specifying the branch "-b" to "stable/wallaby" [1]. What additional steps do I need to do to get the latest commits? That should be sufficient. When you install it via pip, note that 'pip install kayobe' will still pull from PyPI, even if there is a local kayobe directory. Use ./kayobe, or 'pip install .' if in the same directory. Mark > > [1] OpenStack Docs: Overcloud > > Kind regards, > > Tony Pearce > > > On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: >> >> On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: >> > >> > Hi Pierre, thanks for replying to my message. 
>> > >> > To install kayobe I followed the documentation which summarise: installing a few system packages and setting up the kayobe virtual environment and then pulling the correct kayobe git version for the openstack to be installed. After configuring the yaml files I have run these commands: >> > >> > - kayobe control host bootstrap >> > - kayobe overcloud host configure -> this one is failing with /usr/libexec/platform-python: not found >> > >> > After reading your message on the weekend I concluded that maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git and manually transferred the configuration over to the new directory structure on the ansible host and set up again as per the guide but the same issue is seen. >> > >> > What I ended up doing to try and resolve was finding where this "platform-python" is coming from. It is coming from the virtual environment which is being set up during the kayobe ansible host bootstrap. Initially, I found the base.yml and it looks like it tries to match what the host is. I noticed that there is no ubuntu 20 listed there so I created it however it did not resolve the issue. >> > >> > So then I tried systematically replacing this reference in the other files found in the same location "venvs\kayobe\share\kayobe\ansible". The file I changed which allowed it to progress is "kayobe-target-venv.yml" >> > >> > But unfortunately it fails a bit further on, failing to find an selinux package [1] >> > >> > Seeing as the error is mentioning selinux (a RedHat security feature not installed on ubuntu) could the root cause issue be that kayobe is not matching the host as ubuntu? I did already set in kayobe that I am using ubuntu OS distribution within globals.yml [2]. >> > >> > Are there any extra steps that I need to complete that maybe are not listed in the documentation / guide? >> > >> > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest network package - Pastebin.com >> > [2] ---# Kayobe global configuration.######################################### - Pastebin.com >> >> Hi Tony, >> >> That's definitely not a recent Wallaby checkout you're using. Ubuntu >> no longer uses that MichaelRigart.interfaces role. Check that you have >> recent commits. Here is the most recent on stable/wallaby: >> 13169077aaec0f7a28ae1f15b419dafc2456faf7. >> >> Mark >> >> > >> > Regards, >> > >> > Tony Pearce >> > >> > >> > >> > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote: >> >> >> >> Hi Tony, >> >> >> >> Kayobe doesn't use platform-python anymore, on both stable/wallaby and >> >> stable/victoria: >> >> https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 >> >> >> >> Can you double-check what version you are using, and share how you >> >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. >> >> >> >> Best wishes, >> >> Pierre >> >> >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: >> >> > >> >> > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is not found during the host configure part. >> >> > >> >> > PLAY [Verify that the Kayobe Ansible user account is accessible] >> >> > TASK [Verify that a command can be executed] >> >> > >> >> > fatal: [juc-ucsb-5-p]: FAILED! 
=> {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
>> >> >
>> >> > Python3 is installed on the host. When searching for where this platform-python is coming from, it returns the kolla-ansible virtual envs:
>> >> >
>> >> > $ grep -rni -e "platform-python"
>> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python
>> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python
>> >> >
>> >> > I had a look through the deployment guide for Kayobe Wallaby and didn't see a note about changing this.
>> >> >
>> >> > Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc):
>> >> >
>> >> > os_distribution: ubuntu
>> >> > os_release: focal
>> >> >
>> >> > Regards,
>> >> >
>> >> > Tony Pearce
>> >> >

From hberaud at redhat.com  Mon Jun 14 11:27:05 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Mon, 14 Jun 2021 13:27:05 +0200
Subject: [release] Release countdown for week R-16, Jun 14 - Jun 18
Message-ID: 

Development Focus
-----------------

The Xena-2 milestone will happen next month, on 15 July, 2021. Xena-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/xena/schedule.html for details.

General Information
-------------------

Please remember that libraries need to be released at least once per milestone period. At milestone 2, the release team will propose releases for any library that has not been otherwise released since milestone 1. Other non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those that haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle.

At milestone-2 we also freeze the contents of the final release. If you have a new deliverable that should be included in the final release, you should make sure it has a deliverable file in:
https://opendev.org/openstack/releases/src/branch/master/deliverables/xena

You should request a beta release (or intermediary release) for those new deliverables by milestone-2. We understand some may not be quite ready for a full release yet, but if you have something minimally viable to get released, it would be good to do a 0.x release to exercise the release tooling for your deliverables. See the MembershipFreeze description for more details: https://releases.openstack.org/xena/schedule.html#x-mf

Finally, now may be a good time for teams to check on any stable releases that need to be done for your deliverables, for example if you have bug fixes that have been backported but no stable release containing them yet. If you are unsure what is out there committed but not released, running the command "tools/list_stable_unreleased_changes.sh <branch>" in the openstack/releases repo gives a nice report.
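For example, a run might look like this (a minimal sketch; the branch
name below is only an illustration, substitute the stable branch you
care about):

~~~
# Assumes a local checkout of the openstack/releases repository.
git clone https://opendev.org/openstack/releases
cd releases
tools/list_stable_unreleased_changes.sh stable/wallaby
~~~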
Upcoming Deadlines & Dates
--------------------------

Xena-2 Milestone: 15 July, 2021

-- 
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-----BEGIN PGP SIGNATURE-----

wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+
Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+
RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP
F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G
5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g
glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw
m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ
hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0
qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y
F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3
B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O
v6rDpkeNksZ9fFSyoY2o
=ECSj
-----END PGP SIGNATURE-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From adivya1.singh at gmail.com  Mon Jun 14 11:50:55 2021
From: adivya1.singh at gmail.com (Adivya Singh)
Date: Mon, 14 Jun 2021 17:20:55 +0530
Subject: Regarding Volume not getting attached
Message-ID: 

Hello Team,

I am facing an issue where I am unable to attach a volume to instances. The third party is using QNAP storage. What I am seeing is that whenever I am trying to attach, the volume goes from the reserved state to the attached state, finally goes to the receive state, and ends up in an error state regarding an HTTP 400.

Regards
Adivya Singh
9590986094
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sz_cuitao at 163.com  Mon Jun 14 13:43:25 2021
From: sz_cuitao at 163.com (tommy)
Date: Mon, 14 Jun 2021 21:43:25 +0800
Subject: Install v version failed on centos 8
Message-ID: <000001d76123$40768de0$c163a9a0$@163.com>

Why??
[root@control ~]# packstack --answer-file=./openstack.ini
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20210614-203611-y8gez97l/openstack-setup.log

Installing:
Clean Up                                          [ DONE ]
Discovering ip protocol version                   [ DONE ]
root@192.168.10.32's password:
root@192.168.10.30's password:
root@192.168.10.30's password:
root@192.168.10.31's password:
root@192.168.10.33's password:
Setting up ssh keys                               [ DONE ]
Preparing servers                                 [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries                     [ DONE ]
Setting up CACERT                                 [ DONE ]
Preparing AMQP entries                            [ DONE ]
Preparing MariaDB entries                         [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty [ DONE ]
Preparing Keystone entries                        [ DONE ]
Preparing Glance entries                          [ DONE ]
Preparing Nova API entries                        [ DONE ]
Creating ssh keys for Nova migration              [ DONE ]
Gathering ssh host keys for Nova migration        [ DONE ]
Preparing Nova Compute entries                    [ DONE ]
Preparing Nova Scheduler entries                  [ DONE ]
Preparing Nova VNC Proxy entries                  [ DONE ]
Preparing OpenStack Network-related Nova entries  [ DONE ]
Preparing Nova Common entries                     [ DONE ]
Preparing Neutron API entries                     [ DONE ]
Preparing Neutron L3 entries                      [ DONE ]
Preparing Neutron L2 Agent entries                [ DONE ]
Preparing Neutron DHCP Agent entries              [ DONE ]
Preparing Neutron Metering Agent entries          [ DONE ]
Checking if NetworkManager is enabled and running [ DONE ]
Preparing OpenStack Client entries                [ DONE ]
Preparing Horizon entries                         [ DONE ]
Preparing Swift builder entries                   [ DONE ]
Preparing Swift proxy entries                     [ DONE ]
Preparing Swift storage entries                   [ DONE ]
Preparing Gnocchi entries                         [ DONE ]
Preparing Redis entries                           [ DONE ]
Preparing Ceilometer entries                      [ DONE ]
Preparing Aodh entries                            [ DONE ]
Preparing Puppet manifests                        [ DONE ]
Copying Puppet modules and manifests              [ DONE ]
Applying 192.168.10.30_controller.pp
192.168.10.30_controller.pp:                      [ DONE ]
Applying 192.168.10.32_network.pp
Applying 192.168.10.30_network.pp
Applying 192.168.10.31_network.pp
Applying 192.168.10.33_network.pp
192.168.10.31_network.pp:                         [ ERROR ]
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: 192.168.10.31_network.pp
Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Missing title. The title expression resulted in undef (file: /var/tmp/packstack/a201c33193194a599c3654c79945e4a1/modules/ovn/manifests/controller/port.pp, line: 11, column: 13) (file: /var/tmp/packstack/a201c33193194a599c3654c79945e4a1/modules/ovn/manifests/controller.pp, line: 137) on node comps1
You will find full trace in log /var/tmp/packstack/20210614-203611-y8gez97l/manifests/192.168.10.31_network.pp.log
Please check log file /var/tmp/packstack/20210614-203611-y8gez97l/openstack-setup.log for more information

Additional information:
 * Parameter CONFIG_NEUTRON_L2_AGENT: You have chosen OVN Neutron backend. Note that this backend does not support the VPNaaS plugin. Geneve will be used as the encapsulation method for tenant networks
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.10.30. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.168.10.30/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory.
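The "Missing title ... resulted in undef" error from ovn::controller::port
suggests the OVN bridge/interface mapping resolved to an empty element on
that node - a hedged guess only, since the answer file itself is not shown.
If that is the cause, the relevant answer-file entries would need non-empty,
well-formed values along these lines:

~~~
# Hypothetical answer-file excerpt: bridge and interface names are
# placeholders, and the parameter names are an assumption based on
# packstack's OVN support. Every element must be a complete
# bridge:value pair, with no empty entries or trailing commas.
CONFIG_NEUTRON_OVN_BRIDGE_MAPPINGS=extnet:br-ex
CONFIG_NEUTRON_OVN_BRIDGE_IFACES=br-ex:eth0
~~~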
-------------- next part -------------- An HTML attachment was scrubbed... URL: From antonio.paulo at cern.ch Mon Jun 14 14:28:07 2021 From: antonio.paulo at cern.ch (=?UTF-8?Q?Ant=c3=b3nio_Paulo?=) Date: Mon, 14 Jun 2021 16:28:07 +0200 Subject: [nova] GPU VMs using MIG? Message-ID: <803dae06-8317-27f4-42ac-365f72ff31f4@cern.ch> Hi! Has anyone looked into instancing VMs with NVIDIA's Multi-Instance GPU (MIG) devices [1] without having to rely on vGPUs? Unfortunately, NVIDIA vGPUs lack tracing and profiling support that our users need. I could not find anything specific to MIG in the OpenStack docs but I was wondering if doing PCI passthrough [2] of MIG devices is an option that someone has seen or tested? Maybe some massaging to expose the MIG as a Linux device is required [3]? Cheers, António [1] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/ [2] https://docs.openstack.org/nova/pike/admin/pci-passthrough.html [3] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#device-nodes From rafaelweingartner at gmail.com Mon Jun 14 14:29:53 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 14 Jun 2021 11:29:53 -0300 Subject: [CLOUDKITTY] Missed CloudKitty meeting today Message-ID: Hello guys, I would like to apologize for missing the CloudKitty meeting today. I was concentrating on some work, and my alarm for the meeting did not ring. If you need something, just let me know. Sorry for the inconvenience; see you guys at our next meeting. -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon Jun 14 14:34:16 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 14 Jun 2021 08:34:16 -0600 Subject: [tripleo] master jobs down In-Reply-To: References: Message-ID: On Sun, Jun 13, 2021 at 5:29 PM Wesley Hayutin wrote: > Having some issues w/ infra... > https://bugs.launchpad.net/tripleo/+bug/1931821 > > Details are in the bug.. this will block upstream master jobs. > The incorrect dlrn aggregate hash on master has been corrected. This problem is now resolved. Details in the bug. Thanks whayutin> marios|ruck, anbanerj|rover rlandy jpena https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo.md5 :) [08:13:23] we're now at the right hash [08:13:30] thanks for all your time folks [08:13:46] ++ [08:13:50] https://trunk.rdoproject.org/api-centos8-master-uc/api/civotes_agg_detail.html?ref_hash=ee4aecfe06de7e8ca63aed041b3e42a8 [08:14:14] http://images.rdoproject.org/centos8/master/rdo_trunk/ee4aecfe06de7e8ca63aed041b3e42a8/ [08:14:35] https://hub.docker.com/r/tripleomaster/openstack-base/tags?page=1&ordering=last_updated [08:14:45] TAG [08:14:45] ee4aecfe06de7e8ca63aed041b3e42a8_manifest [08:14:45] docker pull tripleomaster/openstack-base:ee4aecfe06de7e8ca63aed041b3e42a8_manifest [08:14:45] Last pushed16 hours agobyrdotripleomirror [08:14:45] DIGEST [08:14:45] OS/ARCH [08:14:45] COMPRESSED SIZE [08:14:45] 4041f5fa79ea [08:14:45] linux/amd64 [08:14:45] 197.73 MB [08:14:45] 6a6f59b227f5 [08:14:45] linux/ppc64le -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Mon Jun 14 14:45:23 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 14 Jun 2021 07:45:23 -0700 Subject: [TC] Open Infra Live- Open Source Governance Message-ID: Hello TC Folks :) So I have been tasked with helping to collect a couple volunteers for our July 29th episode of Open Infra Live (at 14:00 UTC) on open source governance. I am also working on getting a couple members from the k8s steering committee to join us that day. If you are interested in participating, please let me know! I only need like two volunteers, but if we have more people than that dying to join in, I am sure we can work it out. Thanks! -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekokeeffe85 at yahoo.ie Mon Jun 14 15:23:37 2021 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Mon, 14 Jun 2021 15:23:37 +0000 (UTC) Subject: Hiding tabs for users with _member_ role References: <1621683371.10146158.1623684217887.ref@mail.yahoo.com> Message-ID: <1621683371.10146158.1623684217887@mail.yahoo.com> Hi all, I have enabled the octavia dashboard and also swift object store. I would like to keep these for the admin or project admin and not display them (tabs) to regular users. Is this possible? thanks in advance. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jun 14 15:29:48 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 14 Jun 2021 10:29:48 -0500 Subject: [TC] Open Infra Live- Open Source Governance In-Reply-To: References: Message-ID: <17a0b24a55b.f7ebe436657274.7104555013371533433@ghanshyammann.com> Thanks, Kendall for information, I can volunteer for that. gmann ---- On Mon, 14 Jun 2021 09:45:23 -0500 Kendall Nelson wrote ---- > Hello TC Folks :) > So I have been tasked with helping to collect a couple volunteers for our July 29th episode of Open Infra Live (at 14:00 UTC) on open source governance. > I am also working on getting a couple members from the k8s steering committee to join us that day. > If you are interested in participating, please let me know! I only need like two volunteers, but if we have more people than that dying to join in, I am sure we can work it out. > Thanks! > -Kendall Nelson (diablo_rojo) From haleyb.dev at gmail.com Mon Jun 14 15:42:06 2021 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 14 Jun 2021 11:42:06 -0400 Subject: [neutron] Bug deputy report for week of June 7th Message-ID: Hi, I was Neutron bug deputy last week. Below is a short summary about the reported bugs. 
-Brian

Critical bugs
-------------

* https://bugs.launchpad.net/neutron/+bug/1931220
  - _ObjectChangeHandler.handle_event failing on port after_create event
  - https://review.opendev.org/c/openstack/neutron/+/795260

High bugs
---------

* https://bugs.launchpad.net/neutron/+bug/1931244
  - ovn sriov broken from ussuri onwards
  - broken by https://review.opendev.org/c/openstack/neutron/+/765874
  - https://review.opendev.org/c/openstack/neutron/+/795781

* https://bugs.launchpad.net/neutron/+bug/1931583
  - Wrong status of trunk sub-port after setting binding_profile
  - Kamil is working on it

* https://bugs.launchpad.net/neutron/+bug/1931639
  - [OVN Octavia Provider] Load Balancer not reachable from some Subnets
  - Flavio is working on it

Medium bugs
-----------

* https://bugs.launchpad.net/neutron/+bug/1931098
  - With pyroute 0.6.2, eventlet fails in lower-constraints test
  - https://review.opendev.org/c/openstack/neutron/+/795082

Low bugs
--------

* https://bugs.launchpad.net/bugs/1931259
  - API "subnet-segmentid-writable" does not include "is_filter" in the "segment_id" field
  - https://review.opendev.org/c/openstack/neutron-lib/+/795340

Misc bugs
---------

* https://bugs.launchpad.net/neutron/+bug/1926045
  - Restrictions on FIP binding
  - Re-opened without supplying a reason, asked for more information

* https://bugs.launchpad.net/neutron/+bug/1931513/
  - neutron bootstrap container failed during deploy openstack victoria
  - Asked for more information as it looks like an error communicating with the SQL server

Wishlist bugs
-------------

* https://bugs.launchpad.net/neutron/+bug/1931100
  - [rfe] Add RBAC support for BGPVPNs

From sbauza at redhat.com  Mon Jun 14 16:01:45 2021
From: sbauza at redhat.com (Sylvain Bauza)
Date: Mon, 14 Jun 2021 18:01:45 +0200
Subject: [nova] GPU VMs using MIG?
In-Reply-To: <803dae06-8317-27f4-42ac-365f72ff31f4@cern.ch>
References: <803dae06-8317-27f4-42ac-365f72ff31f4@cern.ch>
Message-ID: 

On Mon, Jun 14, 2021 at 4:37 PM António Paulo wrote:

> Hi!
>
> Has anyone looked into instancing VMs with NVIDIA's Multi-Instance GPU
> (MIG) devices [1] without having to rely on vGPUs? Unfortunately, NVIDIA
> vGPUs lack tracing and profiling support that our users need.
>
> I could not find anything specific to MIG in the OpenStack docs but I
> was wondering if doing PCI passthrough [2] of MIG devices is an option
> that someone has seen or tested?
>
> Maybe some massaging to expose the MIG as a Linux device is required [3]?
>

NVIDIA's MIG feature is orthogonal to virtual GPUs and is hardware
dependent. Being hardware dependent, it is not really something we can
"support" upstream, as our upstream CI simply can't verify it. Some
downstream vendors, though, have ongoing efforts to test this with their
own solutions, but again, that is not something we can discuss here.

Cheers,

> António
>
> [1] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/
> [2] https://docs.openstack.org/nova/pike/admin/pci-passthrough.html
> [3] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#device-nodes
>

From sz_cuitao at 163.com  Mon Jun 14 16:07:28 2021
From: sz_cuitao at 163.com (tommy)
Date: Tue, 15 Jun 2021 00:07:28 +0800
Subject: Install v version failed on centos 8
In-Reply-To: <000001d76123$40768de0$c163a9a0$@163.com>
References: <000001d76123$40768de0$c163a9a0$@163.com>
Message-ID: <001b01d76137$5fc8f520$1f5adf60$@163.com>

I have resolved it.
From: openstack-discuss-bounces+sz_cuitao=163.com at lists.openstack.org On Behalf Of tommy Sent: Monday, June 14, 2021 9:43 PM To: 'OpenStack Discuss' Subject: Install v version failed on centos 8 Why?? [root at control ~]# packstack --answer-file=./openstack.ini Welcome to the Packstack setup utility The installation log file is available at: /var/tmp/packstack/20210614-203611-y8gez97l/openstack-setup.log Installing: Clean Up [ DONE ] Discovering ip protocol version [ DONE ] root at 192.168.10.32's password: root at 192.168.10.30's password: root at 192.168.10.30's password: root at 192.168.10.31's password: root at 192.168.10.33's password: Setting up ssh keys [ DONE ] Preparing servers [ DONE ] Pre installing Puppet and discovering hosts' details [ DONE ] Preparing pre-install entries [ DONE ] Setting up CACERT [ DONE ] Preparing AMQP entries [ DONE ] Preparing MariaDB entries [ DONE ] Fixing Keystone LDAP config parameters to be undef if empty[ DONE ] Preparing Keystone entries [ DONE ] Preparing Glance entries [ DONE ] Preparing Nova API entries [ DONE ] Creating ssh keys for Nova migration [ DONE ] Gathering ssh host keys for Nova migration [ DONE ] Preparing Nova Compute entries [ DONE ] Preparing Nova Scheduler entries [ DONE ] Preparing Nova VNC Proxy entries [ DONE ] Preparing OpenStack Network-related Nova entries [ DONE ] Preparing Nova Common entries [ DONE ] Preparing Neutron API entries [ DONE ] Preparing Neutron L3 entries [ DONE ] Preparing Neutron L2 Agent entries [ DONE ] Preparing Neutron DHCP Agent entries [ DONE ] Preparing Neutron Metering Agent entries [ DONE ] Checking if NetworkManager is enabled and running [ DONE ] Preparing OpenStack Client entries [ DONE ] Preparing Horizon entries [ DONE ] Preparing Swift builder entries [ DONE ] Preparing Swift proxy entries [ DONE ] Preparing Swift storage entries [ DONE ] Preparing Gnocchi entries [ DONE ] Preparing Redis entries [ DONE ] Preparing Ceilometer entries [ DONE ] Preparing Aodh entries [ DONE ] Preparing Puppet manifests [ DONE ] Copying Puppet modules and manifests [ DONE ] Applying 192.168.10.30_controller.pp 192.168.10.30_controller.pp: [ DONE ] Applying 192.168.10.32_network.pp Applying 192.168.10.30_network.pp Applying 192.168.10.31_network.pp Applying 192.168.10.33_network.pp 192.168.10.31_network.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.10.31_network.pp Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Missing title. The title expression resulted in undef (file: /var/tmp/packstack/a201c33193194a599c3654c79945e4a1/modules/ovn/manifests/co ntroller/port.pp, line: 11, column: 13) (file: /var/tmp/packstack/a201c33193194a599c3654c79945e4a1/modules/ovn/manifests/co ntroller.pp, line: 137) on node comps1 You will find full trace in log /var/tmp/packstack/20210614-203611-y8gez97l/manifests/192.168.10.31_network. pp.log Please check log file /var/tmp/packstack/20210614-203611-y8gez97l/openstack-setup.log for more information Additional information: * Parameter CONFIG_NEUTRON_L2_AGENT: You have chosen OVN Neutron backend. Note that this backend does not support the VPNaaS plugin. Geneve will be used as the encapsulation method for tenant networks * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components. * File /root/keystonerc_admin has been created on OpenStack client host 192.168.10.30. 
To use the command line tools you need to source the file. * To access the OpenStack Dashboard browse to http://192.168.10.30/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Jun 14 16:28:43 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 14 Jun 2021 11:28:43 -0500 Subject: [all] Naming Poll for my Y Release Name Vote Message-ID: <7d28ec7e-d737-7912-ded1-bed711a3adf8@gmail.com> All, As I reviewed the excellent list of names for the 'Y' release I had a hard time choosing.  This inspired me to create a poll to get the community's feedback on how you all would like me to vote as a member of the Technical Committee. To cast your vote please visit: https://www.surveymonkey.com/r/RHR3NCY Happy voting! Jay (jungleboyj) From dangerzonen at gmail.com Sun Jun 13 03:58:43 2021 From: dangerzonen at gmail.com (dangerzone ar) Date: Sun, 13 Jun 2021 11:58:43 +0800 Subject: [Magnum] Rocky Openstack Magnum Error Message-ID: Hi, I'm getting error while trying to verify my magnum installation Here is my keystone_admin details:- *unset OS_SERVICE_TOKEN* * export OS_USERNAME=admin* * export OS_PASSWORD='b6cf5552a9be44e7'* * export OS_REGION_NAME=RegionOne* * export OS_AUTH_URL=http://192.168.0.122:5000/v3 * * export PS1='[\u@\h \W(keystone_admin)]\$ '* *export OS_PROJECT_NAME=admin* *export OS_USER_DOMAIN_NAME=Default* *export OS_PROJECT_DOMAIN_NAME=Default* *export OS_IDENTITY_API_VERSION=3* Pls find attached my magnum.conf file and this is error output:- *[root at myosptac ~(keystone_admin)]# magnum --debug service-list* *DEBUG (extension:189) found extension EntryPoint.parse('v1password = swiftclient.authv1:PasswordLoader')* *DEBUG (extension:189) found extension EntryPoint.parse('token_endpoint = openstackclient.api.auth_plugin:TokenEndpoint')* *DEBUG (extension:189) found extension EntryPoint.parse('noauth = cinderclient.contrib.noauth:CinderNoAuthLoader')* *DEBUG (extension:189) found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token')* *DEBUG (extension:189) found extension EntryPoint.parse('none = keystoneauth1.loading._plugins.noauth:NoAuth')* *DEBUG (extension:189) found extension EntryPoint.parse('*v3oauth1* = keystoneauth1.extras.oauth1._loading:V3OAuth1')* *DEBUG (extension:189) found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken')* *DEBUG (extension:189) found extension EntryPoint.parse('*v3oidcauthcode* = keystoneauth1.loading._plugins.identity.v3:*OpenIDConnectAuthorie*')* *DEBUG (extension:189) found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password')* *DEBUG (extension:189) found extension EntryPoint.parse('v3samlpassword = keystoneauth1.extras._saml2._loading:Saml2Password')* *DEBUG (extension:189) found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password')* *DEBUG (extension:189) found extension EntryPoint.parse('*v3adfspassword* = keystoneauth1.extras._saml2._loading:ADFSPassword')* *DEBUG (extension:189) found extension EntryPoint.parse('v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAcce* *DEBUG (extension:189) found extension EntryPoint.parse('*v3oidcpassword* = keystoneauth1.loading._plugins.identity.v3:*OpenIDConnectPasswor *DEBUG (extension:189) found extension EntryPoint.parse('v3kerberos = 
keystoneauth1.extras.kerberos._loading:Kerberos')* *DEBUG (extension:189) found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token')* *DEBUG (extension:189) found extension EntryPoint.parse('v3oidcclientcredentials = keystoneauth1.loading._plugins.identity.v3:OpenIDConneredentials')* *DEBUG (extension:189) found extension EntryPoint.parse('v3tokenlessauth = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth')* *DEBUG (extension:189) found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token')* *DEBUG (extension:189) found extension EntryPoint.parse('v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP')* *DEBUG (extension:189) found extension EntryPoint.parse('* v3applicationcredential* = keystoneauth1.loading._plugins.identity.v3:* Applicationl*')* *DEBUG (extension:189) found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password')* *DEBUG (extension:189) found extension EntryPoint.parse('v3fedkerb = keystoneauth1.extras.kerberos._loading:*MappedKerberos*')* *DEBUG (extension:189) found extension EntryPoint.parse('gnocchi-basic = * gnocchiclient.auth*:GnocchiBasicLoader')* *DEBUG (extension:189) found extension EntryPoint.parse('gnocchi-noauth = * gnocchiclient.auth*:GnocchiNoAuthLoader')* *DEBUG (extension:189) found extension EntryPoint.parse('aodh-noauth = aodhclient.noauth:AodhNoAuthLoader')* *DEBUG (session:448) REQ: curl -g -i -X GET http://192.168.0.122:5000/v3 -H "Accept: application/json" -H "User-Agent: magnum *keystoneaut* python-requests/2.19.1 CPython/2.7.5"* *DEBUG (connectionpool:207) Starting new HTTP connection (1): 192.168.0.122* *DEBUG (connectionpool:395) http://192.168.0.122:5000 "GET /v3 HTTP/1.1" 200 196* *DEBUG (session:479) RESP: [200] Connection: Keep-Alive Content-Encoding: gzip Content-Length: 196 Content-Type: application/json Date: Sn 2021 18:28:25 GMT Keep-Alive: timeout=15, max=100 Server: Apache/2.4.6 (CentOS) Vary: X-Auth-Token,Accept-Encoding x-openstack-requeste8a01587-a1ea-4b9e-9960-b7f1c64cebd6* *DEBUG (session:511) RESP BODY: {"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", "media-types": [{"base": "applicationtype": "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": [{"href": "http://192.168.0.122:5000/v3/ ", "rel": "self"}* *DEBUG (session:853) GET call to http://192.168.0.122:5000/v3 used request id req-e8a01587-a1ea-4b9e-9960-b7f1c64cebd6* *DEBUG (base:176) Making authentication request to http://192.168.0.122:5000/v3/auth/tokens * *DEBUG (connectionpool:395) http://192.168.0.122:5000 "POST /v3/auth/tokens HTTP/1.1" 201 10983* *DEBUG (base:181) {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "ff6467cba06d4325b5dc26ffec6d67ea", "name": "h_owner"}, {"id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": "37dc1fb164684e77bba1df8ced2faa9f", "name": "reader"}, a78bcad80fe4ae897adf16e5b2b265a", "name": "member"}, {"id": "3c2996b04b5c46189c6d618f69b4aa8b", "name": "admin"}], "expires_at": "2021-08:25.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "d430c3698e8f4d7d874ea7e708ac17d7", "name": "admin"}, " [{"endpoints": [{"url": "http://192.168.0.122:8000/v1 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "14b4e2798b31fa00eaaa823"}, {"url": "http://192.168.0.122:8000/v1 ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOn "9e33b93ca2f24206983d6f6b2e58d2f9"}, {"url": 
"http://192.168.0.122:8000/v1 ", "interface": "public", "region": "RegionOne", "region_id":ne", "id": "fc5eadc7dbfd492299f9b9e9585c8201"}], "type": "cloudformation", "id": "001cd752638e46eb8fbc96c9181a9efc", "name": "heat-cfn"}ints": [{"url": "http://192.168.0.122:8080/v1/AUTH_d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "RegionOne", "regiRegionOne", "id": "0030669365ad497d8eefd13a99850c68"}, {"url": "http://192.168.0.122:8080/v1/AUTH_d430c3698e8f4d7d874ea7e708ac17d7 ", "in "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "53e2d455b96c4fc0951d98d5ce7790cf"}, {"url": "http://192.168.0.122:8TH_d430c3698e8f4d7d874ea7e708ac17d7", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "9f037da9221f43a1a3017d5"}], "type": "object-store", "id": "0610a7bebc1642f398e6de9f25b582e8", "name": "swift"}, {"endpoints": [{"url": "http://192.168.0.12 "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "553632abff5a4380911acb11a5e16b74"}, {"url": "http://1922:9696 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "6fb59f7c44d44ceaa0da2dfe0a4e4935"}, {"url": "http8.0.122:9696", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "c234ac3128294662b2a3c5b445b81d28"}], "typerk", "id": "068e962709db4d4d8904e740824503c8", "name": "neutron"}, {"endpoints": [{"url": "http://192.168.0.122:8004/v1/d430c3698e8f4d7d8ac17d7 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "2ff81a5c643546b49900256cc12aa739"}, {"url": "htt68.0.122:8004/v1/d430c3698e8f4d7d874ea7e708ac17d7", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "8ffb8f2bf17cfc24206ac99"}, {"url": "http://192.168.0.122:8004/v1/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "internal", "region": "Regioegion_id": "RegionOne", "id": "925912fbfcc14e8eac063aae807021c2"}], "type": "orchestration", "id": "08c8970db6ca4c4c9592070b78002ea3", "eat"}, {"endpoints": [{"url": "http://192.168.0.122:5000/v3 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "i49a40c7542fbb6f031d8eb3d66ac"}, {"url": "http://192.168.0.122:5000/v3 ", "interface": "internal", "region": "RegionOne", "region_id": "Re "id": "8b49647ee89246bba936e6717a6915f9"}, {"url": "http://192.168.0.122:35357/v3 ", "interface": "admin", "region": "RegionOne", "regioegionOne", "id": "e80547d366fa403a87af0a04926d5fe8"}], "type": "identity", "id": "0c65ebd11cb549008dd49fe960623ed9", "name": "keystone"}ints": [{"url": "http://192.168.0.122:8776/v3/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "internal", "region": "RegionOne", "regiongionOne", "id": "2ad2414e6c9a4bc78fa83fa53a1fbf4e"}, {"url": "http://192.168.0.122:8776/v3/d430c3698e8f4d7d874ea7e708ac17d7 ", "interfacec", "region": "RegionOne", "region_id": "RegionOne", "id": "2b70431e0c0049a88c7d0bf76d04a033"}, {"url": "http://192.168.0.122:8776/v3/d4f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "64e454f408c24549aeb1e720295992ce"}: "volumev3", "id": "1b07bd83eeb4460d9b8330794ba33850", "name": "cinderv3"}, {"endpoints": [{"url": "http://192.168.0.122:9890/ ", "interdmin", "region": "RegionOne", "region_id": "RegionOne", "id": "0473f73ee7e44b918b08736636570df8"}, {"url": "http://192.168.0.122:9890/ ",ce": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "89af54ac6f1e4f109324b8c4e9b8bca5"}, {"url": "http://192.168.0.1 , "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "d8deb8aa66264ff292b206424ebe735a"}], "type": "nfv-orche, "id": "22d0c99d2b9147c992c430bc01955b59", "name": "tacker"}, {"endpoints": [{"url": "http://192.168.0.122:8776/v2/d430c3698e8f4d7d874e7d7 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "11cee7c429b24730abff08cbcc99d4b7"}, {"url": "http:/0.122:8776/v2/d430c3698e8f4d7d874ea7e708ac17d7", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "caf5b4b874a44321adb7db9"}, {"url": "http://192.168.0.122:8776/v2/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "RegionOnen_id": "RegionOne", "id": "eab5457ca880490b8cbe357ef870fcaa"}], "type": "volumev2", "id": "488472e3277d4e568a169246b899d223", "name": "c, {"endpoints": [{"url": "http://192.168.0.122:8776/v1/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "RegionOne", "": "RegionOne", "id": "24e810d702ff47c6880b0c5231aa9f1a"}, {"url": "http://192.168.0.122:8776/v1/d430c3698e8f4d7d874ea7e708ac17d7 ", "int"internal", "region": "RegionOne", "region_id": "RegionOne", "id": "3d887ec2f90a48d9bb2cf6c71264b6bc"}, {"url": "http://192.168.0.122:870c3698e8f4d7d874ea7e708ac17d7", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "532bf115ce0741b0afe052b3b], "type": "volume", "id": "4941273cd7734cdcb01eb6423dd3c2d9", "name": "cinder"}, {"endpoints": [{"url": "http://192.168.0.122:9311 ", "i: "public", "region": "RegionOne", "region_id": "RegionOne", "id": "9ccc3fbd5cb240ae972aa26b3e7a0082"}, {"url": "http://192.168.0.122:93erface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "e3bc06b47fad453c8880f80950d3978c"}, {"url": "http://192.168.0 ., "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "e9f9e7f2a5814d2cbe52b126288bd399"}], "type": "key-mand": "4d342a315084482b86f616299eebfd3a", "name": "barbican"}, {"endpoints": [{"url": "http://192.168.0.122:8774/v2.1/d430c3698e8f4d7d874e7d7 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "01b84df8a50f47df96c66dca731c8999"}, {"url": "http:/0.122:8774/v2.1/d430c3698e8f4d7d874ea7e708ac17d7", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "2c339fca57d65acce177bcd"}, {"url": "http://192.168.0.122:8774/v2.1/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "Regioegion_id": "RegionOne", "id": "898b9568e1bc46309870bfb1a432995e"}], "type": "compute", "id": "83e61d9ee67e4c4b9da5f2aeb342ac33", "name": {"endpoints": [{"url": "http://192.168.0.122:8042 ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "3da4ffea975d229f4208cab"}, {"url": "http://192.168.0.122:8042 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "iee5bb6a049ab944e828c66af861c"}, {"url": "http://192.168.0.122:8042 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOn "8745e0169c8c498fac9033e301cfeb3e"}], "type": "alarming", "id": "989e7012ea604c7794455a197fdedf6b", "name": "aodh"}, {"endpoints": [{"up://192.168.0.122:8778/placement ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "9b81b11df55d4651bffe0af"}, {"url": "http://192.168.0.122:8778/placement ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "c1940aebbed105332160986"}, {"url": "http://192.168.0.122:8778/placement ", "interface": "admin", "region": "RegionOne", "region_id": "Region": "f6fc036a48bf47f1ba034d5c46a3257d"}], "type": 
"placement", "id": "b975d296f13141539055f90971b870a7", "name": "placement"}, {"endpointl": "http://192.168.0.122:9511/v1 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "3d83664ffd1e4fafb605adb"}, {"url": "http://192.168.0.122:9511/v1 ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "7f06217f1297822167470ca"}, {"url": "http://192.168.0.122:9511/v1 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "71d466a953e3727aea930f2"}], "type": "container-infra", "id": "bb891040e2f24a4f86aa596f780a668e", "name": "magnum"}, {"endpoints": [{"url//192.168.0.122:9292 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "0355aeb6a9bf41e19c33adb5d207c40b"} "http://192.168.0.122:9292 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "3e894b75a3494f2792e8c479493c"url": "http://192.168.0.122:9292 ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "5baff29672d44f308eb33f4"}], "type": "image", "id": "da951f6f026b46918337129a67331f0d", "name": "glance"}, {"endpoints": [{"url": "http://192.168.0.122:8777face": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "753f5b41610c4de3b7ea283dcecab32a"}, {"url": "http://192.168.0.1 "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "b5c21dfe1d1e496093c1bfb16d9d5161"}, {"url": "http://1922:8777 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "cfc236dbffd846cb94f3add4cfbb5448"}], "type": "me"id": "e378bc2a0adc451e905a35ffdc59cf35", "name": "ceilometer"}, {"endpoints": [{"url": "http://192.168.0.122:8041 ", "interface": "adminn": "RegionOne", "region_id": "RegionOne", "id": "3ea207d844b44d77806a359bf49b3b46"}, {"url": "http://192.168.0.122:8041 ", "interface": ", "region": "RegionOne", "region_id": "RegionOne", "id": "78d7ac10e85745148af1f01f2942ed83"}, {"url": "http://192.168.0.122:8041 ", "int"public", "region": "RegionOne", "region_id": "RegionOne", "id": "f15500eca821457c994ce8d8c4c859f6"}], "type": "metric", "id": "ebd1ffd8161a292ff66140d", "name": "gnocchi"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "ad": "9c91d3f3e3c943f09b57fe70889894b8"}, "audit_ids": ["vOQFHBd6QJCOBEAz_1Y7jA"], "issued_at": "2021-06-12T18:28:25.000000Z"}}* *DEBUG (session:448) REQ: curl -g -i -X GET http://192.168.0.122:9511/v1/mservices -H "Accept: application/json" -H "Content-Type: applicn" -H "OpenStack-API-Version: container-infra latest" -H "User-Agent: None" -H "X-Auth-Token: {SHA1}1f364e2302d9282710c8314355929963d3b4* *DEBUG (connectionpool:207) Starting new HTTP connection (1): 192.168.0.122* *DEBUG (connectionpool:395) http://192.168.0.122:9511 "GET /v1/mservices HTTP/1.1" 503 218* *DEBUG (session:479) RESP: [503] Content-Length: 218 Content-Type: application/json Date: Sat, 12 Jun 2021 18:28:26 GMT Server: Werkzeug/thon/2.7.5 x-openstack-request-id: req-d9894aff-b197-4a36-b9b9-f752d38f29d3* *DEBUG (session:511) RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.

\nThe Keysice is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}* *DEBUG (session:844) GET call to container-infra for http://192.168.0.122:9511/v1/mservices used request id req-d9894aff-b197-4a36-b9b9-fd3* *DEBUG (shell:643) 'errors'* *Traceback (most recent call last):* * File "/usr/lib/python2.7/site-packages/magnumclient/shell.py", line 640, in main* * OpenStackMagnumShell().main(map(encodeutils.safe_decode, sys.argv[1:]))* * File "/usr/lib/python2.7/site-packages/magnumclient/shell.py", line 552, in main* * args.func(self.cs, args)* * File "/usr/lib/python2.7/site-packages/magnumclient/v1/mservices_shell.py", line 22, in do_service_list* * mservices = cs.mservices.list()* * File "/usr/lib/python2.7/site-packages/magnumclient/v1/mservices.py", line 68, in list* * return self._list(self._path(path), "mservices")* * File "/usr/lib/python2.7/site-packages/magnumclient/common/base.py", line 121, in _list* * resp, body = self.api.json_request('GET', url)* * File "/usr/lib/python2.7/site-packages/magnumclient/common/httpclient.py", line 368, in json_request* * resp = self._http_request(url, method, **kwargs)* * File "/usr/lib/python2.7/site-packages/magnumclient/common/httpclient.py", line 349, in _http_request* * error_json = _extract_error_json(resp.content)* * File "/usr/lib/python2.7/site-packages/magnumclient/common/httpclient.py", line 55, in _extract_error_json* * error_body = body_json['errors'][0]* *KeyError: 'errors'* *ERROR: 'errors'* I'm running rocky openstack, install magnum and not use barbican. Appreciate if someone could advise magnum.conf or anything to resolve the above error. Please help. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: magnum.conf Type: application/octet-stream Size: 74098 bytes Desc: not available URL: From kennelson11 at gmail.com Mon Jun 14 16:37:51 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 14 Jun 2021 09:37:51 -0700 Subject: [TC][All] Election Officiating! Message-ID: Come join the election officials! I setup a meeting for June 29th at 16:00 UTC for those that would like to join and learn about running an election. We can adjust it. I just wanted to get something on the calendar before its too late and we are behind again. The election officials team really could use your help! I will walk through the scripts we use to generate dates and the electorate and answer questions you might have about the README.rst that has the entire process in it. Hope to see you there! - Kendall Nelson (diablo_rojo) [1] https://opendev.org/openstack/election/src/branch/master/README.rst -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1532 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: meeting-95186227884.ics Type: text/calendar Size: 1581 bytes Desc: not available URL: From gouthampravi at gmail.com Mon Jun 14 17:42:53 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 14 Jun 2021 10:42:53 -0700 Subject: [all] Naming Poll for my Y Release Name Vote In-Reply-To: <7d28ec7e-d737-7912-ded1-bed711a3adf8@gmail.com> References: <7d28ec7e-d737-7912-ded1-bed711a3adf8@gmail.com> Message-ID: On Mon, Jun 14, 2021 at 9:33 AM Jay Bryant wrote: > All, > > As I reviewed the excellent list of names for the 'Y' release I had a > hard time choosing. This inspired me to create a poll to get the > community's feedback on how you all would like me to vote as a member of > the Technical Committee. > > To cast your vote please visit: https://www.surveymonkey.com/r/RHR3NCY I'm glad you asked :) Thanks Jay! > > > Happy voting! > > Jay > > (jungleboyj) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Jun 14 17:56:15 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 14 Jun 2021 12:56:15 -0500 Subject: [TC] Open Infra Live- Open Source Governance In-Reply-To: References: Message-ID: <187a78ef-0e29-d7dd-5506-73515fb28dbd@gmail.com> On 6/14/2021 9:45 AM, Kendall Nelson wrote: > Hello TC Folks :) > > So I have been tasked with helping to collect a couple volunteers for > our July 29th episode of Open Infra Live (at 14:00 UTC) on open source > governance. > > I am also working on getting a couple members from the k8s steering > committee to join us that day. > > If you are interested in participating, please let me know! I only > need like two volunteers, but if we have more people than that dying > to join in, I am sure we can work it out. > I can help if you need another person.  Let me know. Jay > Thanks! > > -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From akanevsk at redhat.com Mon Jun 14 18:49:26 2021 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Mon, 14 Jun 2021 13:49:26 -0500 Subject: [Interop] co-chair requirement Message-ID: team, after discussing with a few board members I think we should drop the requirement that one of the co-chairs of the Interop WG is from the OIF board. Are you comfortable with this? Thanks, -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Mon Jun 14 19:22:02 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 14 Jun 2021 12:22:02 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: Ping? On Mon, Jun 7, 2021 at 2:27 PM Pete Zhang wrote: > Julie, > > The original email is too long and requires moderator approval. So I have > a new email thread instead. > > The openstack-vswitch is required (>=11.0.0 < 12.0.0) by openstack-neutron > (v15.0.0, from openstack-release-train, the release we chose). > I downloaded openstack-vswitch-11.0.0 from > https://forge.puppet.com/modules/openstack/vswitch/11.0.0. > > Where I can download the missing *librte and its dependencies*? I don't > think we have a yum-repo for Centos Extra so I might need to have those > dependencies downloaded as well. > > Thanks a lot! > > Pete > > -- > > > -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Mon Jun 14 20:27:47 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 14 Jun 2021 15:27:47 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 17th at 1500 UTC Message-ID: <17a0c35752f.fad2fd90669617.8187659333285607760@ghanshyammann.com> Hello Everyone, NOTE: TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE) Technical Committee's next weekly meeting is scheduled for June 17th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, June 16th , at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From juliaashleykreger at gmail.com Mon Jun 14 20:47:37 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 14 Jun 2021 13:47:37 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: Sorry, I thought the thread had addressed this. Did http://lists.openstack.org/pipermail/openstack-discuss/2021-June/022965.html not help? That being said, To reiterate what was said in the various replies. Train is in Extended Maintenance. The community cannot cut new releases of the old packages to address and fix issues. Your best bet is a newer, current, release of OpenStack and related packaging. The only case I personally would advise installing Train on a *new* deployment is if you explicitly have a vendor and their downstream packages/testing/processes supporting you. On Mon, Jun 14, 2021 at 12:22 PM Pete Zhang wrote: > Ping? > > On Mon, Jun 7, 2021 at 2:27 PM Pete Zhang > wrote: > >> Julie, >> >> The original email is too long and requires moderator approval. So >> I have a new email thread instead. >> >> The openstack-vswitch is required (>=11.0.0 < 12.0.0) by >> openstack-neutron (v15.0.0, from openstack-release-train, the release we >> chose). >> I downloaded openstack-vswitch-11.0.0 from >> https://forge.puppet.com/modules/openstack/vswitch/11.0.0. >> >> Where I can download the missing *librte and its dependencies*? I don't >> think we have a yum-repo for Centos Extra so I might need to have those >> dependencies downloaded as well. >> >> Thanks a lot! >> >> Pete >> >> -- >> >> >> > > > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Mon Jun 14 21:48:12 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 14 Jun 2021 14:48:12 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: Julia, Thanks for the update. In our environment, we don't have access to centos-extra repos. Does anyone know the site where we can download those missing/needed rpms? thx. Pete On Mon, Jun 14, 2021 at 1:47 PM Julia Kreger wrote: > Sorry, I thought the thread had addressed this. Did > http://lists.openstack.org/pipermail/openstack-discuss/2021-June/022965.html > not help? > > That being said, To reiterate what was said in the various replies. Train > is in Extended Maintenance. The community cannot cut new releases of the > old packages to address and fix issues. Your best bet is a newer, current, > release of OpenStack and related packaging. The only case I personally > would advise installing Train on a *new* deployment is if you explicitly > have a vendor and their downstream packages/testing/processes supporting > you. 
> > On Mon, Jun 14, 2021 at 12:22 PM Pete Zhang > wrote: > >> Ping? >> >> On Mon, Jun 7, 2021 at 2:27 PM Pete Zhang >> wrote: >> >>> Julie, >>> >>> The original email is too long and requires moderator approval. So >>> I have a new email thread instead. >>> >>> The openstack-vswitch is required (>=11.0.0 < 12.0.0) by >>> openstack-neutron (v15.0.0, from openstack-release-train, the release we >>> chose). >>> I downloaded openstack-vswitch-11.0.0 from >>> https://forge.puppet.com/modules/openstack/vswitch/11.0.0. >>> >>> Where I can download the missing *librte and its dependencies*? I don't >>> think we have a yum-repo for Centos Extra so I might need to have those >>> dependencies downloaded as well. >>> >>> Thanks a lot! >>> >>> Pete >>> >>> -- >>> >>> >>> >> >> >> -- >> >> >> > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue Jun 15 00:17:57 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 15 Jun 2021 09:17:57 +0900 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: Thank you all for your additional thoughts. Because I've not received very strong objections about existing two patches[1][2], I updated these patches to resolve conflicts between these patches. [1] https://review.opendev.org/c/openstack/neutron/+/763563 [2] https://review.opendev.org/c/openstack/neutron/+/788893 I made the patch to add default hypervisor name as base one because it doesn't change behavior and would be "safe" for backports. So far we have received positive feedback about fixing compatibility with libvirt (in master) but I'll create a backport of that change as well to ask some feedback about its profit and risk for backport. I think strategy is now clear with this feedback but please feel free to put your thoughts in this thread or the above patches. > if we want to "fix" this in neutron then neutron should either try > looking up the RP using the host name and then fall back to using the > fqdn or we should look at using the hypervior api as we discussed a few > years ago when this last came up > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html I feel like this discussion would be a good chance to revisit the requirement of basic client implementation for placement. (or abstraction layer like castellan) Currently each components like nova, neutron, and cyborg(?) have their own placement client implementation (and logic to query resource providers) but IMO it is more efficient if we can maintain the common client implementation instead. > for many deployment that do not set the fqdn as the canonical host name > in /etc/host the current default behavior works out of the box > whatever solution we take we need to ensure that no existing deployment > is affected by the change which means we cannot default to only using > the fqdn or similar as that would be an upgrade breakage so we have > to maintain the current behavior by default and enhance neutron to > either fall back to the fqdn if the hostname based lookup fails or use > the new config intoduc ed by takashi's patch where the fqdn is used as > the server canonical hostname. Thank you for pointing this out. To be clear, the behavior change I proposed[2] doesn't break any deployment with libvirt but would break deployments with non-libvirt drivers. This point should be considered when reviewing that change. 
So far most of the feedback I received is that it is preferred to fix compatibility with libvirt as it's the "default" option but please share your thoughts on the patch. On Mon, Jun 14, 2021 at 7:30 PM Sean Mooney wrote: > On Sat, 2021-06-12 at 00:46 +0900, Takashi Kajinami wrote: > > On Fri, Jun 11, 2021 at 8:48 PM Oliver Walsh wrote: > > > Hi Takashi, > > > > > > On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami > > > wrote: > > > > Hi All, > > > > > > > > > > > > I've been working on bug 1926693[1], and am lost about the > > > > reasonable > > > > solutions we expect. Ideally I'd need to bring this topic in the > > > > team meeting > > > > but because of the timezone gap and complicated background, I'd > > > > like to > > > > gather some feedback in ml first. > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > > > > > > > TL;DR > > > > Which one(or ones) would be reasonable solutions for this issue ? > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > > > > (3) Implement something different > > > > > > > > The issue I reported in the bug is that there is an inconsistency > > > > between > > > > nova and neutron about the way to determine a hypervisor name. > > > > Currently neutron uses socket.gethostname() (which always returns > > > > shortname) > > > > > > > > > > > > > socket.gethostname() can return fqdn or shortname - > > > https://docs.python.org/3/library/socket.html#socket.gethostname. > > > > > > > You are correct and my statement was not accurate. > > So socket.gethostname() returns what is returned by gethostname system > > call, > > and gethostname/sethostname accept both FQDN and short name, > > socket.gethostname() > > can return one of FQDN or short name. > > > > However the root problem is that this logic is not completely same as > > the ones used > > in each virt driver. Of cause we can require people the "correct" > > format usage for > > canonical name as well as "hostname", but fixthing this problem in > > neutron would > > be much more helpful considering the effect caused by enforcing users > > to "fix" > > hostname/canonical name formatting at this point. > this is not really something that can be fixed in neutron > we can either create a common funciton in oslo.utils or placement-lib > that we can use in nova, neutron and all other project or we can use > the config option. > > if we want to "fix" this in neutron then neutron should either try > looking up the RP using the host name and then fall back to using the > fqdn or we shoudl look at using the hypervior api as we discussed a few > years ago when this last came up > > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html > > i dont think neutron shoudl know anything about hyperviors so i would > just proceed with the new config option that takashi has proposed but i > would not implemente Rodolfo's solution of adding a hypervisor_type. > > just as nova has no awareness of the neutron backend and trys to treat > all fo them the same neutron should remain hypervior independent and we > should look to provide common code that can be reused to identify the > RP in a seperate lib as a longer term solution. 
> > for many deployment that do not set the fqdn as the canonical host name > in /etc/host the current default behavior works out of the box > whatever solution we take we need to ensure that no existing deployment > is affected by the change which means we cannot default to only using > the fqdn or similar as that would be an upgrade breakage so we have > to maintain the current behavior by default and enhance neutron to > either fall back to the fqdn if the hostname based lookup fails or use > the new config intoduc ed by takashi's patch where the fqdn is used as > the server canonical hostname. > > > > > I've seen cases where it switched from short to fqdn but I'm not sure > > > of the root cause - DHCP lease setting a hostname/domainname perhaps. > > > > > > Thanks, > > > Ollie > > > > > > > to determine a hypervisor name to search the corresponding resource > > > > provider. > > > > On the other hand, nova uses libvirt's getHostname function (if > > > > libvirt driver is used) > > > > which returns a canonical name. Canonical name can be shortname or > > > > FQDN (*1) > > > > and if FQDN is used then neutron and nova never agree. > > > > > > > > (*1) > > > > IMO this is likely to happen in real deployments. For example, > > > > TripelO uses > > > > FQDN for canonical names. > > > > > > > > > > > > Neutron already provides the resource_provider_defauly_hypervisors > > > > option > > > > to override a hypervisor name used. However because this option > > > > accepts > > > > a map between interface and hypervisor, setting this parameter > > > > requires > > > > very redundant description especially when a compute node has > > > > multiple > > > > interfaces/bridges. The following example shows how redundant the > > > > current > > > > requirement is. > > > > ~~~ > > > > [OVS] > > > > resource_provider_bandwidths=br-data1:1024:1024,br- > > > > data2:1024:1024,\ > > > > br-data3:1024,1024,br-data4,1024:1024 > > > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > > > > compute0.mydomain,br-data3:compute0.mydomain,br- > > > > data4:compute0.mydomain > > > > ~~~ > > > > > > > > I've submitted a change to propose a new single parameter to > > > > override > > > > the base hypervisor name but this is currently -2ed, mainly because > > > > I lacked analysis about the root cause of mismatch when I proposed > > > > this. > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > > > > > > > > > On the other hand, I submitted a different change to neutron which > > > > implements > > > > the logic to get a hypervisor name which is fully compatible with > > > > libvirt. > > > > While this would save users from even overriding hypervisor names, > > > > I'm aware > > > > that this might break the other virt driver which depends on a > > > > different logic > > > > to generate a hypervisor name. IMO the patch is still useful > > > > considering > > > > the libvirt driver would be the most popular option now, but I'm > > > > not fully > > > > aware of the impact on the other drivers, especially because I > > > > don't know > > > > which virt driver would support the minimum QoS feature now. > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > > > > > > > > > In the review of (2), Sean mentioned implementing a logic to > > > > determine > > > > an appropriate resource provider(3) even if there is a mismatch > > > > about > > > > host name format, but I'm not sure how I would implement that, tbh. 
> > > > > > > > > > > > My current thought is to merge (1) as a quick solution first, and > > > > discuss whether > > > > we should merge (2), but I'd like to ask for some feedback about > > > > this plan > > > > (like we should NOT merge (2)). > > > > > > > > I'd appreciate your thoughts about this $topic. > > > > > > > > Thank you, > > > > Takashi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Jun 15 02:35:21 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 14 Jun 2021 21:35:21 -0500 Subject: [openstack-helm] No meeting tomorrow Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting tomorrow, the meeting is cancelled. Our next meeting will be June 22nd. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Tue Jun 15 05:42:11 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Tue, 15 Jun 2021 07:42:11 +0200 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: On Mon, Jun 14, 2021 at 02:48:12PM -0700, Pete Zhang wrote: > Julia, > > Thanks for the update. > > In our environment, we don't have access to centos-extra repos. Does > anyone know the site where we can download those missing/needed rpms? thx. > > Pete > > On Mon, Jun 14, 2021 at 1:47 PM Julia Kreger > wrote: CentOS without CentOS Extras (which is enabled by default) sounds broken to me. Place that file in /etc/yum.repos.d, if you happen to run CentOS. [stack at devstack yum.repos.d]$ cat CentOS-Linux-Extras.repo # CentOS-Linux-Extras.repo # # The mirrorlist system uses the connecting IP address of the client and the # update status of each mirror to pick current mirrors that are geographically # close to the client. You should use this for CentOS updates unless you are # manually picking other mirrors. # # If the mirrorlist does not work for you, you can try the commented out # baseurl line instead. [extras] name=CentOS Linux $releasever - Extras mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra #baseurl=http://mirror.centos.org/$contentdir/$releasever/extras/$basearch/os/ gpgcheck=1 enabled=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial Matthias -- Matthias Runge From tonyppe at gmail.com Tue Jun 15 05:50:46 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Tue, 15 Jun 2021 13:50:46 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Mark, I had never used the "pip install ." method. Maybe a miscomprehension on my side, from the documentation [1] there are three ways to install kayobe. I had opted for the first way which is "pip install kayobe" since January 2020. The understanding was as conveyed in the doc "Installing from PyPI ensures the use of well used and tested software". I have since followed your steps in your mail which is the installation from source. I had new problems: *During ansible bootstrap:* During ansible host bootstrap it errors out and says the kolla_ansible is not found and needs to be installed in the same virtual environment. In all previous times, I had understood that kolla ansible is installed by kayobe at this point. 
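For anyone hitting the same problem, a quick sanity check at this stage is to look at what the bootstrap actually created (the venv paths are the ones used later in this thread, so adjust if your layout differs):

ls /opt/kayobe/venvs/
# expected: kayobe  kolla-ansible
/opt/kayobe/venvs/kolla-ansible/bin/pip show kolla-ansible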
I eventually ran "pip install kolla-ansible" and it seemed to take care of that and allowed me to move on to "host configure". *During host configure:* I was able to get past the previous python issue but then it failed on the network due to a "duplicate bond name", though this config was deployed successfully in Train. I don't think I really need a bond at this point, so I deleted the bond and the host configure is now successful. (FYI, this is an all-in-one host.) *During kayobe service deploy:* This then fails with "no module named docker" on the host. To troubleshoot this I logged into the host and activated the kayobe virtual env (/opt/kayobe/venvs/kayobe/bin/activate) and then ran "pip install docker". It was already installed. Eventually, I issued "pip install --ignore-installed docker" within these three (environment) locations, which resolved this and allowed the kayobe command to complete successfully and progress further: - /opt/kayobe/venvs/kayobe/ - /opt/kayobe/venvs/kolla-ansible/ - native on the host after deactivating the venv. Now the blocker is the following failure: TASK [nova-cell : Waiting for nova-compute services to register themselves] ********************************************************************************************** FAILED - RETRYING: Waiting for nova-compute services to register themselves (20 retries left). FAILED - RETRYING: Waiting for nova-compute services to register themselves (19 retries left). I haven't seen this one before, but previously I had seen something similar with mariadb because the API DNS was not available. What I have been using here is an /etc/hosts entry for this. I checked that this entry is available on the host and in the nova containers. I decided to reboot the host anyway (this previously resolved a similar mariadb issue) to restart the containers, just in case the DNS was not available in one of them and I missed it. Unfortunately I now have two additional issues which are hard blockers: 1. The network is no longer working on the host after reboot, so I am unable to ssh 2. The user password has been changed by kayobe, so I am unable to log in using the console Due to the above, I am unable to log in to the host to investigate or remediate. Previously when this happened with centos I could use the root user to log in. This time around, as it's ubuntu, I do not have a root user. The user I am using for both "kolla_ansible_user" and "kayobe_ansible_user" is the same - is this causing a problem with Victoria and Wallaby? I had this user password change issue beginning with Victoria. So at this point I need to re-install the host and go back to the host configure before service deploy. *Summary* Any guidance is well appreciated as I'm at a loss at this point. Last week I had a working Openstack Train deployment in a single host. "Kayobe" stopped working (maybe because I had previously always used pip install kayobe). I would like to deploy Wallaby. Should I be able to successfully do this today, or should I be using Victoria at the moment (or even Train)? [1] OpenStack Docs: Installation Regards, Tony Pearce On Mon, 14 Jun 2021 at 18:36, Mark Goddard wrote: > On Mon, 14 Jun 2021 at 09:40, Tony Pearce wrote: > > > > Hi Mark, > > > > I followed this guide to do a "git clone" specifying the branch "-b" to > "stable/wallaby" [1]. What additional steps do I need to do to get > the latest commits? > > That should be sufficient.
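Spelled out, the from-source flow being confirmed here is roughly the following (a sketch, using the branch discussed in this thread):

git clone https://opendev.org/openstack/kayobe.git -b stable/wallaby
cd kayobe
pip install .   # note the '.': a plain 'pip install kayobe' pulls from PyPI instead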
When you install it via pip, note that 'pip > install kayobe' will still pull from PyPI, even if there is a local > kayobe directory. Use ./kayobe, or 'pip install .' if in the same > directory. > > Mark > > > > [1] OpenStack Docs: Overcloud > > > > Kind regards, > > > > Tony Pearce > > > > > > On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: > >> > >> On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: > >> > > >> > Hi Pierre, thanks for replying to my message. > >> > > >> > To install kayobe I followed the documentation which summarise: > installing a few system packages and setting up the kayobe virtual > environment and then pulling the correct kayobe git version for the > openstack to be installed. After configuring the yaml files I have run > these commands: > >> > > >> > - kayobe control host bootstrap > >> > - kayobe overcloud host configure -> this one is failing with > /usr/libexec/platform-python: not found > >> > > >> > After reading your message on the weekend I concluded that maybe I > had done something wrong. Today, I re-pulled the kayobe wallaby git and > manually transferred the configuration over to the new directory structure > on the ansible host and set up again as per the guide but the same issue is > seen. > >> > > >> > What I ended up doing to try and resolve was finding where this > "platform-python" is coming from. It is coming from the virtual environment > which is being set up during the kayobe ansible host bootstrap. Initially, > I found the base.yml and it looks like it tries to match what the host is. > I noticed that there is no ubuntu 20 listed there so I created it however > it did not resolve the issue. > >> > > >> > So then I tried systematically replacing this reference in the other > files found in the same location "venvs\kayobe\share\kayobe\ansible". The > file I changed which allowed it to progress is "kayobe-target-venv.yml" > >> > > >> > But unfortunately it fails a bit further on, failing to find an > selinux package [1] > >> > > >> > Seeing as the error is mentioning selinux (a RedHat security feature > not installed on ubuntu) could the root cause issue be that kayobe is not > matching the host as ubuntu? I did already set in kayobe that I am using > ubuntu OS distribution within globals.yml [2]. > >> > > >> > Are there any extra steps that I need to complete that maybe are not > listed in the documentation / guide? > >> > > >> > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest > network package - Pastebin.com > >> > [2] ---# Kayobe global > configuration.######################################### - Pastebin.com > >> > >> Hi Tony, > >> > >> That's definitely not a recent Wallaby checkout you're using. Ubuntu > >> no longer uses that MichaelRigart.interfaces role. Check that you have > >> recent commits. Here is the most recent on stable/wallaby: > >> 13169077aaec0f7a28ae1f15b419dafc2456faf7. > >> > >> Mark > >> > >> > > >> > Regards, > >> > > >> > Tony Pearce > >> > > >> > > >> > > >> > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau > wrote: > >> >> > >> >> Hi Tony, > >> >> > >> >> Kayobe doesn't use platform-python anymore, on both stable/wallaby > and > >> >> stable/victoria: > >> >> > https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 > >> >> > >> >> Can you double-check what version you are using, and share how you > >> >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. 
> >> >> > >> >> Best wishes, > >> >> Pierre > >> >> > >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: > >> >> > > >> >> > I'm trying to run "kayobe overcloud host configure" against an > ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is > not found during the host configure part. > >> >> > > >> >> > PLAY [Verify that the Kayobe Ansible user account is accessible] > >> >> > TASK [Verify that a command can be executed] > >> >> > > >> >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, > "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", > "module_stdout": "", "msg": "The module failed to execute correctly, you > probably need to set the interpreter.\nSee stdout/stderr for the exact > error", "rc": 127} > >> >> > > >> >> > Python3 is installed on the host. When searching where this > platform-python is coming from it returns the kolla-ansible virtual envs: > >> >> > > >> >> > $ grep -rni -e "platform-python" > >> >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: > '8': /usr/libexec/platform-python > >> >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: > - /usr/libexec/platform-python > >> >> > > >> >> > I had a look through the deployment guide for Kayobe Wallaby and > didnt see a note about changing this. > >> >> > > >> >> > Do I need to do further steps to support the ubuntu overcloud > host? I have already set (as per the doc): > >> >> > > >> >> > os_distribution: ubuntu > >> >> > os_release: focal > >> >> > > >> >> > Regards, > >> >> > > >> >> > Tony Pearce > >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Tue Jun 15 06:58:46 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 14 Jun 2021 23:58:46 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: Matthias, I got the missing dkdp*.rpm from http://ftp.riken.jp/Linux/cern/centos/7/extras/x86_64/Packages/ and resolved the dependencies, thx. Pete On Mon, Jun 14, 2021 at 10:51 PM Matthias Runge wrote: > On Mon, Jun 14, 2021 at 02:48:12PM -0700, Pete Zhang wrote: > > Julia, > > > > Thanks for the update. > > > > In our environment, we don't have access to centos-extra repos. Does > > anyone know the site where we can download those missing/needed rpms? > thx. > > > > Pete > > > > On Mon, Jun 14, 2021 at 1:47 PM Julia Kreger < > juliaashleykreger at gmail.com> > > wrote: > > CentOS without CentOS Extras (which is enabled by default) sounds broken > to me. > > Place that file in /etc/yum.repos.d, if you happen to run CentOS. > > [stack at devstack yum.repos.d]$ cat CentOS-Linux-Extras.repo > > # CentOS-Linux-Extras.repo > # > # The mirrorlist system uses the connecting IP address of the client and > the > # update status of each mirror to pick current mirrors that are > geographically > # close to the client. You should use this for CentOS updates unless you > are > # manually picking other mirrors. > # > # If the mirrorlist does not work for you, you can try the commented out > # baseurl line instead. 
> [extras] > name=CentOS Linux $releasever - Extras > mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra > #baseurl=http://mirror.centos.org/$contentdir/$releasever/extras/$basearch/os/ > gpgcheck=1 > enabled=1 > gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial > > > Matthias > -- > Matthias Runge > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From bxzhu_5355 at 163.com Tue Jun 15 07:49:00 2021 From: bxzhu_5355 at 163.com (Boxiang Zhu) Date: Tue, 15 Jun 2021 15:49:00 +0800 (GMT+08:00) Subject: [cinder] revert volume to snapshot Message-ID: <782fa353.71d3.17a0ea5201f.Coremail.bxzhu_5355@163.com> Hi, There is a RESTful API [1] to revert a volume to a snapshot, but the description says we can only use this API to revert a volume to its latest snapshot. Are some drivers limited to rolling back only to the latest snapshot? Or is it just that nobody has helped to improve the API to revert a volume to any of its snapshots? [1] https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=revert-volume-to-snapshot-detail#revert-volume-to-snapshot Thanks, Boxiang -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Jun 15 08:02:43 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 15 Jun 2021 09:02:43 +0100 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: On Tue, 15 Jun 2021 at 06:51, Tony Pearce wrote: > > Hi Mark, > > I had never used the "pip install ." method. Maybe there's a miscomprehension on my side: from the documentation [1] there are three ways to install kayobe. I had opted for the first way, which is "pip install kayobe", since January 2020. The understanding was as conveyed in the doc: "Installing from PyPI ensures the use of well used and tested software". That is true, but since Wallaby has not been released for Kayobe yet, it is not on PyPI. If you do install from PyPI, I would advise using a version constraint to ensure you get the release series you need. > > I have since followed your steps in your mail, which is the installation from source. I had new problems: > > During ansible bootstrap: > During ansible host bootstrap it errors out and says the kolla_ansible is not found and needs to be installed in the same virtual environment. In all previous times, I had understood that kolla ansible is installed by kayobe at this point. Kolla Ansible should be installed automatically during 'kayobe control host bootstrap', in a separate virtualenv from Kayobe. You should not need to install it manually, and I would again advise against doing so without version constraints. > > During host configure: > I was able to get past the previous python issue but then it failed on the network due to a "duplicate bond name", though this config was deployed successfully in Train. I don't think I really need a bond at this point so I deleted the bond and the host configure is now successful. (FYI, this is an all-in-one host.) > > During kayobe service deploy: > This then fails with "no module named docker" on the host.
To troubleshoot this I logged into the host and activated the kayobe virtual env (/opt/kayobe/venvs/kayobe/bin/activate) and then "pip install docker". It was already installed. Eventually, I issued "pip install --ignore-installed docker" within these three (environment) locations which resolved this and allowed the kayobe command to complete successfully and progress further: > - /opt/kayobe/venvs/kayobe/ > - /opt/kayobe/venvs/kolla-ansible/ > - native on the host after deactivating the venv. > > Now the blocker is the following failure; > > TASK [nova-cell : Waiting for nova-compute services to register themselves] ********************************************************************************************** > FAILED - RETRYING: Waiting for nova-compute services to register themselves (20 retries left). > FAILED - RETRYING: Waiting for nova-compute services to register themselves (19 retries left). > > I haven't seen this one before but previously I had seen something similar with mariadb because the API dns was not available. What I have been using here is a /etc/hosts entry for this. I checked that this entry is available on the host and in the nova containers. I decided to reboot the host anyway (previously resolved similar mariadb issue) to restart the containers just in case the dns was not available in one of them and I missed it. I'd check the nova compute logs here, to find why they are not registering themselves. > > Unfortunately I now have two additional issues which are hard blockers: > 1. The network is no longer working on the host after reboot, so I am unable to ssh > 2. The user password has been changed by kayobe, so I am unable to login using the console > > Due to the above, I am unable to login to the host to investigate or remediate. Previously when this happened with centos I could use the root user to log in. This time around as it's ubuntu I do not have a root user. > The user I am using for both "kolla_ansible_user" and "kayobe_ansible_user" is the same - is this causing a problem with Victoria and Wallaby? I had this user password change issue beginning with Victoria. > > So at this point I need to re-install the host and go back to the host configure before service deploy. > > Summary > Any guidance is well appreciated as I'm at a loss at this point. Last week I had a working Openstack Train deployment in a single host. "Kayobe" stopped working (maybe because I had previously always used pip install kayobe). > > I would like to deploy Wallaby, should I be able to successfully do this today or should I be using Victoria at the moment (or even, Train)? We are very close to release of Wallaby, and I expect that it should generally work, but Ubuntu is a new distro for Kayobe, and Wallaby is a new release. There may be teething problems, so if you're looking for something more stable then I'd suggest CentOS & Victoria. > > [1] OpenStack Docs: Installation > > Regards, > > Tony Pearce > > > On Mon, 14 Jun 2021 at 18:36, Mark Goddard wrote: >> >> On Mon, 14 Jun 2021 at 09:40, Tony Pearce wrote: >> > >> > Hi Mark, >> > >> > I followed this guide to do a "git clone" specifying the branch "-b" to "stable/wallaby" [1]. What additional steps do I need to do to get the latest commits? >> >> That should be sufficient. When you install it via pip, note that 'pip >> install kayobe' will still pull from PyPI, even if there is a local >> kayobe directory. Use ./kayobe, or 'pip install .' if in the same >> directory. 
>> >> Mark >> > >> > [1] OpenStack Docs: Overcloud >> > >> > Kind regards, >> > >> > Tony Pearce >> > >> > >> > On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: >> >> >> >> On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: >> >> > >> >> > Hi Pierre, thanks for replying to my message. >> >> > >> >> > To install kayobe I followed the documentation which summarise: installing a few system packages and setting up the kayobe virtual environment and then pulling the correct kayobe git version for the openstack to be installed. After configuring the yaml files I have run these commands: >> >> > >> >> > - kayobe control host bootstrap >> >> > - kayobe overcloud host configure -> this one is failing with /usr/libexec/platform-python: not found >> >> > >> >> > After reading your message on the weekend I concluded that maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git and manually transferred the configuration over to the new directory structure on the ansible host and set up again as per the guide but the same issue is seen. >> >> > >> >> > What I ended up doing to try and resolve was finding where this "platform-python" is coming from. It is coming from the virtual environment which is being set up during the kayobe ansible host bootstrap. Initially, I found the base.yml and it looks like it tries to match what the host is. I noticed that there is no ubuntu 20 listed there so I created it however it did not resolve the issue. >> >> > >> >> > So then I tried systematically replacing this reference in the other files found in the same location "venvs\kayobe\share\kayobe\ansible". The file I changed which allowed it to progress is "kayobe-target-venv.yml" >> >> > >> >> > But unfortunately it fails a bit further on, failing to find an selinux package [1] >> >> > >> >> > Seeing as the error is mentioning selinux (a RedHat security feature not installed on ubuntu) could the root cause issue be that kayobe is not matching the host as ubuntu? I did already set in kayobe that I am using ubuntu OS distribution within globals.yml [2]. >> >> > >> >> > Are there any extra steps that I need to complete that maybe are not listed in the documentation / guide? >> >> > >> >> > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest network package - Pastebin.com >> >> > [2] ---# Kayobe global configuration.######################################### - Pastebin.com >> >> >> >> Hi Tony, >> >> >> >> That's definitely not a recent Wallaby checkout you're using. Ubuntu >> >> no longer uses that MichaelRigart.interfaces role. Check that you have >> >> recent commits. Here is the most recent on stable/wallaby: >> >> 13169077aaec0f7a28ae1f15b419dafc2456faf7. >> >> >> >> Mark >> >> >> >> > >> >> > Regards, >> >> > >> >> > Tony Pearce >> >> > >> >> > >> >> > >> >> > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote: >> >> >> >> >> >> Hi Tony, >> >> >> >> >> >> Kayobe doesn't use platform-python anymore, on both stable/wallaby and >> >> >> stable/victoria: >> >> >> https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 >> >> >> >> >> >> Can you double-check what version you are using, and share how you >> >> >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. >> >> >> >> >> >> Best wishes, >> >> >> Pierre >> >> >> >> >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: >> >> >> > >> >> >> > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. 
I'm getting an error that python is not found during the host configure part. >> >> >> > >> >> >> > PLAY [Verify that the Kayobe Ansible user account is accessible] >> >> >> > TASK [Verify that a command can be executed] >> >> >> > >> >> >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} >> >> >> > >> >> >> > Python3 is installed on the host. When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: >> >> >> > >> >> >> > $ grep -rni -e "platform-python" >> >> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python >> >> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python >> >> >> > >> >> >> > I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. >> >> >> > >> >> >> > Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc): >> >> >> > >> >> >> > os_distribution: ubuntu >> >> >> > os_release: focal >> >> >> > >> >> >> > Regards, >> >> >> > >> >> >> > Tony Pearce >> >> >> > From smooney at redhat.com Tue Jun 15 10:38:43 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 15 Jun 2021 11:38:43 +0100 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: On Tue, 2021-06-15 at 09:17 +0900, Takashi Kajinami wrote: > Thank you all for your additional thoughts. > > Because I've not received very strong objections about existing two > patches[1][2], > I updated these patches to resolve conflicts between these patches. >   [1] https://review.opendev.org/c/openstack/neutron/+/763563 > >   [2] https://review.opendev.org/c/openstack/neutron/+/788893 >   > I made the patch to add default hypervisor name as base one because > it doesn't > change behavior and would be "safe" for backports. So far we have > received positive > feedback about fixing compatibility with libvirt (in master) but I'll > create a backport > of that change as well to ask some feedback about its profit and risk > for backport. > > I think strategy is now clear with this feedback but please feel free > to put your > thoughts in this thread or the above patches. > > > if we want to "fix" this in neutron then neutron should either try > > looking up the RP using the host name and then fall back to using > the > > fqdn or we should look at using the hypervior api as we discussed a > few > > years ago when this last came up > > http://lists.openstack.org/pipermail/openstack-discuss/2019- > November/011044.html > > I feel like this discussion would be a good chance to revisit the > requirement of basic client > implementation for placement. (or abstraction layer like castellan) > Currently each components like nova, neutron, and cyborg(?) have > their own placement > client implementation (and logic to query resource providers) but IMO > it is more efficient > if we can maintain the common client implementation instead. 
It may be useful in the form of a placement-lib. This is not something that could have been addressed in a common client, however, as for example ironic or other clustered drivers have 1 compute service but multiple resource providers per compute service, so we can't always assume 1:1 mappings. That's why we can't use conf.HOST in the general case, although we could have used it for libvirt. > > > for many deployments that do not set the fqdn as the canonical host > name > > in /etc/hosts the current default behavior works out of the box. > > whatever solution we take, we need to ensure that no existing > deployment > > is affected by the change, which means we cannot default to only > using > > the fqdn or similar as that would be an upgrade breakage, so we have > > to maintain the current behavior by default and enhance neutron to > > either fall back to the fqdn if the hostname based lookup fails or > use > > the new config introduced by Takashi's patch where the fqdn is used > as > > the server canonical hostname. > Thank you for pointing this out. To be clear, the behavior change I > proposed[2] doesn't > break any deployment with libvirt but would break deployments with > non-libvirt drivers. > This point should be considered when reviewing that change. So far > most of the feedback > I received is that it is preferred to fix compatibility with libvirt > as it's the "default" option > but please share your thoughts on the patch. OK, there are 3 sets of names that are likely to be used: the hostname, the fqdn, and the value of conf.HOST. conf.HOST defaults to the hostname. If we are to enhance the default behavior, I think we should just implement a fallback behavior which would check all 3 values if they are distinct, i.e. look up by hostname; if that fails, look up by fqdn; if that fails, look up by conf.HOST if and only if it is not the same as the hostname (its default value) or the fqdn. It would be unusual for conf.HOST to not match the hostname or fqdn, but it does happen, for example if you are running multiple virt drivers on the same host (when you deploy say libvirt and ironic on the same host) or you use the fake driver for scale testing.
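A minimal sketch of that fallback order (find_rp() is just an illustration here, not an existing neutron helper, and the placement client is assumed to be a keystoneauth1 Adapter for the placement service):

import socket

def find_rp(placement, name):
    # GET /resource_providers?name=<name> against the placement API
    resp = placement.get('/resource_providers?name=%s' % name)
    rps = resp.json().get('resource_providers', [])
    return rps[0] if rps else None

def resolve_rp(placement, conf_host):
    candidates = [socket.gethostname(), socket.getfqdn()]
    if conf_host not in candidates:  # only try CONF.host if it is distinct
        candidates.append(conf_host)
    for name in candidates:
        rp = find_rp(placement, name)
        if rp is not None:
            return rp
    return None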
> > > > > Currently neutron uses socket.gethostname() (which always > > returns > > > > > shortname) > > > > > > > > > > > > > > > > > socket.gethostname() can return fqdn or shortname -   > > > > > > https://docs.python.org/3/library/socket.html#socket.gethostname. > > > > > > > > > > You are correct and my statement was not accurate. > > > So socket.gethostname() returns what is returned by gethostname > > system > > > call, > > > and gethostname/sethostname accept both FQDN and short name, > > > socket.gethostname() > > > can return one of FQDN or short name. > > > > > > However the root problem is that this logic is not completely > > same as > > > the ones used > > > in each virt driver. Of cause we can require people the "correct" > > > format usage for > > > canonical name as well as "hostname", but fixthing this problem > > in > > > neutron would > > > be much more helpful considering the effect caused by enforcing > > users > > > to "fix" > > > hostname/canonical name formatting at this point. > > this is not really something that can be fixed in neutron > > we can either create a common funciton in oslo.utils or placement- > > lib > > that we can use in nova, neutron and all other project or we can > > use > > the config option. > > > > if we want to "fix" this in neutron then neutron should either try > > looking up the RP using the host name and then fall back to using > > the > > fqdn or we shoudl look at using the hypervior api as we discussed a > > few > > years ago when this last came up > > > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html > > > > i dont think neutron shoudl know anything about hyperviors so i > > would > > just proceed with the new config option that takashi has proposed > > but i > > would not implemente Rodolfo's solution of adding a > > hypervisor_type. > > > > just as nova has no awareness of the neutron backend and trys to > > treat > > all fo them the same neutron should remain hypervior independent > > and we > > should look to provide common code that can be reused to identify > > the > > RP in a seperate lib as a longer term solution. > > > > for many deployment that do not set the fqdn as the canonical host > > name > > in /etc/host the current default behavior works out of the box > > whatever solution we take we need to ensure that no existing > > deployment > > is affected by the change which means we cannot default to only > > using > > the fqdn or similar as that would be an upgrade breakage so we have > > to maintain the current behavior by default and enhance neutron to > > either fall back to the fqdn if the hostname based lookup fails or > > use > > the new config intoduc ed by takashi's patch where the fqdn is used > > as > > the server canonical hostname. > > >   > > > > I've seen cases where it switched from short to fqdn but I'm > > not sure > > > > of the root cause - DHCP lease setting a hostname/domainname > > perhaps. > > > > > > > > Thanks, > > > > Ollie > > > > > > > > > to determine a hypervisor name to search the corresponding > > resource > > > > > provider. > > > > > On the other hand, nova uses libvirt's getHostname function > > (if > > > > > libvirt driver is used) > > > > > which returns a canonical name. Canonical name can be > > shortname or > > > > > FQDN (*1) > > > > > and if FQDN is used then neutron and nova never agree. > > > > > > > > > > (*1) > > > > > IMO this is likely to happen in real deployments. For > > example, > > > > > TripelO uses > > > > > FQDN for canonical names.   
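A quick way to compare the three name forms on a given compute node (a diagnostic sketch; virsh must be talking to the local libvirt):

hostname        # what socket.gethostname() returns
hostname -f     # the FQDN
virsh hostname  # the canonical name the libvirt driver reports to nova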
> > > > > > > > > > > > > > > Neutron already provides the > > resource_provider_defauly_hypervisors > > > > > option > > > > > to override a hypervisor name used. However because this > > option > > > > > accepts > > > > > a map between interface and hypervisor, setting this > > parameter > > > > > requires > > > > > very redundant description especially when a compute node has > > > > > multiple > > > > > interfaces/bridges. The following example shows how redundant > > the > > > > > current > > > > > requirement is. > > > > > ~~~ > > > > > [OVS] > > > > > resource_provider_bandwidths=br-data1:1024:1024,br- > > > > > data2:1024:1024,\ > > > > > br-data3:1024,1024,br-data4,1024:1024 > > > > > resource_provider_hypervisors=br-data1:compute0.mydomain,br- > > data2:\ > > > > > compute0.mydomain,br-data3:compute0.mydomain,br- > > > > > data4:compute0.mydomain > > > > > ~~~ > > > > > > > > > > I've submitted a change to propose a new single parameter to > > > > > override > > > > > the base hypervisor name but this is currently -2ed, mainly > > because > > > > > I lacked analysis about the root cause of mismatch when I > > proposed > > > > > this. > > > > >  (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > > > > > > > > > > > > On the other hand, I submitted a different change to neutron > > which > > > > > implements > > > > > the logic to get a hypervisor name which is fully compatible > > with > > > > > libvirt. > > > > > While this would save users from even overriding hypervisor > > names, > > > > > I'm aware > > > > > that this might break the other virt driver which depends on > > a > > > > > different logic > > > > > to generate a hypervisor name. IMO the patch is still useful > > > > > considering > > > > > the libvirt driver would be the most popular option now, but > > I'm > > > > > not fully > > > > > aware of the impact on the other drivers, especially because > > I > > > > > don't know > > > > > which virt driver would support the minimum QoS feature now. > > > > >  (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > > > > > > > > > > > > In the review of (2), Sean mentioned implementing a logic to > > > > > determine > > > > > an appropriate resource provider(3) even if there is a > > mismatch > > > > > about > > > > > host name format, but I'm not sure how I would implement > > that, tbh. > > > > > > > > > > > > > > > My current thought is to merge (1) as a quick solution first, > > and > > > > > discuss whether > > > > > we should merge (2), but I'd like to ask for some feedback > > about > > > > > this plan > > > > > (like we should NOT merge (2)). > > > > > > > > > > I'd appreciate your thoughts about this $topic. > > > > > > > > > > Thank you, > > > > > Takashi > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Jun 15 10:54:00 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 15 Jun 2021 05:54:00 -0500 Subject: [cinder] revert volume to snapshot In-Reply-To: <782fa353.71d3.17a0ea5201f.Coremail.bxzhu_5355@163.com> References: <782fa353.71d3.17a0ea5201f.Coremail.bxzhu_5355@163.com> Message-ID: <20210615105400.GA8236@sm-workstation> > There is a restful api[1] to revert volume to snapshot. But the description means > we can only use this api to revert volume to its latest snapshot. > > > Are some drivers limited to rolling back only to the latest snapshot? Or just nobody > helps to improve the api to revert volume to any snapshots of the volume? 
> This is partly due to the rollback abilities of each type of storage. Some types can't revert to several snapshots back without losing the more recent snapshots. This means that Cinder would still think there are snapshots available, but those snapshots would no longer be present on the storage device. This is considered a data loss condition, so we need to protect against that from happening. It's been discussed several times at Design Summits and PTGs, and at least so far there has not been a good way to handle it. The best recommendation we can give is for anyone that needs to go back several snapshots, you will need to revert one snapshot at a time to get back to where you need to be. But it is also worth pointing out that snapshots, and the ability to revert to snapshots, is not necessarily the best mechanism for data protection. If you need to have the ability to restore a volume back to its earlier state, using the backup/restore APIs are likely the more appropriate way to go. Sean From antonio.paulo at cern.ch Tue Jun 15 11:39:18 2021 From: antonio.paulo at cern.ch (=?UTF-8?Q?Ant=c3=b3nio_Paulo?=) Date: Tue, 15 Jun 2021 13:39:18 +0200 Subject: [nova] GPU VMs using MIG? In-Reply-To: References: <803dae06-8317-27f4-42ac-365f72ff31f4@cern.ch> Message-ID: <96350c31-c656-c523-6649-54795863fa16@cern.ch> I see, thank you for the reply. Even if it does not make sense for MIG to be supported/documented upstream if someone does come across MIG+OpenStack not backed by virtual GPUs do ping me please :-) I'll be trying to get this working when some cards arrive. Cheers, António On 14/06/21 18:01, Sylvain Bauza wrote: > > > On Mon, Jun 14, 2021 at 4:37 PM António Paulo > wrote: > > Hi! > > Has anyone looked into instancing VMs with NVIDIA's Multi-Instance GPU > (MIG) devices [1] without having to rely on vGPUs? Unfortunately, NVIDIA > vGPUs lack tracing and profiling support that our users need. > > I could not find anything specific to MIG in the OpenStack docs but I > was wondering if doing PCI passthrough [2] of MIG devices is an option > that someone has seen or tested? > > Maybe some massaging to expose the MIG as a Linux device is required > [3]? > > > Nividia MIG feature is orthogonal to virtual GPUs and hardware dependent. > As the latter, this is not really something we can "support" upstream as > our upstream CI can't just verify it. > > Some downstream vendors tho have work efforts for trying to test this > with their own solutions but again, not something we can discuss it here. > > Cheers, > António > > [1] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/ > > [2] https://docs.openstack.org/nova/pike/admin/pci-passthrough.html > > [3] > https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#device-nodes > From stephenfin at redhat.com Tue Jun 15 14:18:45 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 15 Jun 2021 15:18:45 +0100 Subject: AW: AW: Customization of nova-scheduler In-Reply-To: <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> Message-ID: On Thu, 2021-06-10 at 17:21 +0200, levonmelikbekjan at yahoo.de wrote: > Hi Stephen, > > I'm trying to customize my nova scheduler. 
However, if I change the nova.conf as it is written here https://docs.openstack.org/operations-guide/de/ops-customize-compute.html, then my python file cannot be found. How can I configure it correctly? > > Do you have any idea? > > My controller node is running with CENTOS 7. I couldn't install devstack because it is only supported for CENTOS 8 version. That document is very old. You want [1], which documents how to do this properly. Hope this helps, Stephen [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > Best regards > Levon > > -----Ursprüngliche Nachricht----- > Von: Stephen Finucane > Gesendet: Montag, 31. Mai 2021 18:21 > An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org > Betreff: Re: AW: Customization of nova-scheduler > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > Hello Stephen, > > > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. > > > > Two cases should be solved! > > > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. > > What you're describing is commonly referred to as "preemptible" or "spot" > instances. This topic has a long, complicated history in nova and has yet to be implemented. Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova. > > > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? > > As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art. As noted above, there is lots of information on the internet about this. With this research done, you'll need to decide whether this is something you want to approach within nova itself, via out-of-tree extensions or via a third party project. If you're opting for integration with nova, then you'll need to think long and hard about how you would design such a system and start working on a spec (a design document) outlining your proposed solution. 
Details on how to write a spec are discussed at [1]. The only extension points nova offers today are scheduler filters and weighers so your options for an out-of-tree extension approach will be limited. A third party project will arguably be the easiest approach but you will be restricted to talking to nova's REST APIs which may limit the design somewhat. This Blazar spec [2] could give you some ideas on this approach (assuming it was never actually implemented, though it may well have been). > > > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? > > This could potentially work, but I suspect there will be serious performance implications with this, particularly at scale. Scheduler filters are historically used for simple things like "find me a group of hosts that have this metadata attribute I set on my image". Making API calls sounds like something that would take significant time and therefore slow down the schedule process. You'd also have to decide what your heuristic for deciding which VM(s) to delete would be, since there's nothing obvious in nova that you could use. > You could use something as simple as filter extra specs or something as complicated as an external service. > > This should be lots to get you started. Once again, do make sure you're aware of what you're getting yourself into before you start. This could get complicated very quickly :) > > Cheers, > Stephen > > > I'm very happy to have found you!!! > > > > Thank you really much for your time! > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > > Best regards > > Levon > > > > -----Ursprüngliche Nachricht----- > > Von: Stephen Finucane > > Gesendet: Montag, 31. Mai 2021 12:34 > > An: Levon Melikbekjan ; > > openstack at lists.openstack.org > > Betreff: Re: Customization of nova-scheduler > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > Hello Openstack team, > > > > > > is it possible to customize the nova-scheduler via Python? If yes, how? > > > > Yes, you can provide your own filters and weighers. This is documented at [1]. 
> > > > Hope this helps, > > Stephen > > > > [1] > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > > our-own-filter > > > > > > > > Best regards > > > Levon > > > > > > > > > From smooney at redhat.com Tue Jun 15 14:36:56 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 15 Jun 2021 15:36:56 +0100 Subject: AW: AW: Customization of nova-scheduler In-Reply-To: References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> Message-ID: On Tue, 2021-06-15 at 15:18 +0100, Stephen Finucane wrote: > On Thu, 2021-06-10 at 17:21 +0200, levonmelikbekjan at yahoo.de wrote: > > Hi Stephen, > > > > I'm  trying to customize my nova scheduler. However, if I change the > > nova.conf as it is written here > > https://docs.openstack.org/operations-guide/de/ops-customize-compute.html > > , then my python file cannot be found. How can I configure it > > correctly? > > > > Do you have any idea? > > > > My controller node is running with CENTOS 7. I couldn't install > > devstack because it is only supported for CENTOS 8 version. > > That document is very old. You want [1], which documents how to do this > properly. wwell that depend if they acatully want to write ther own filter yes but if they want to replace the scheduler with a new one we recently removed support for that right. previously we had several schduler implemtation like the caching scheduler and that old doc https://docs.openstack.org/operations-guide/de/ops-customize-compute.html descibes on how to replace the filter scheduler dirver with an new one. we deprecated it ussuri https://github.com/openstack/nova/commit/6a4cb24d39623930fd240e67d65013803459839d and you finally removed the extention point in febuary https://github.com/openstack/nova/commit/5aeb3a387494c4559d183d1290db3c92a96dfb90 so from wallaby on you can nolonger write an alternitvie schduler implemenation out of tree without reverting that. so yes https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter is how you customise schduling now but you cant customise the schduler itself out fo tree anymore. > > Hope this helps, > Stephen > > [1] > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > > > Best regards > > Levon > > > > -----Ursprüngliche Nachricht----- > > Von: Stephen Finucane > > Gesendet: Montag, 31. Mai 2021 18:21 > > An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org > > Betreff: Re: AW: Customization of nova-scheduler > > > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > > Hello Stephen, > > > > > > I am a student from Germany who is currently working on his > > > bachelor thesis. My job is to build a cloud solution for my > > > university with Openstack. The functionality should include the > > > prioritization of users. So that you can imagine exactly how the > > > whole thing should work, I would like to give you an example. > > > > > > Two cases should be solved! > > > > > > Case 1: A user A with a low priority uses a VM from Openstack with > > > half performance of the available host. Then user B comes in with a > > > high priority and needs the full performance of the host for his > > > VM. 
When creating the VM of user B, the VM of user A should be > > > deleted because there is not enough compute power for user B. The > > > VM of user B is successfully created. > > > > > > Case 2: A user A with a low priority uses a VM with half the > > > performance of the available host, then user B comes in with a high > > > priority and needs half of the performance of the host for his VM. > > > When creating the VM of user B, user A should not be deleted, since > > > enough computing power is available for both users. > > > > > > These cases should work for unlimited users. In order to optimize > > > the whole thing, I would like to write a function that precisely > > > calculates all performance components to determine whether enough > > > resources are available for the VM of the high priority user. > > > > What you're describing is commonly referred to as "preemptible" or > > "spot" > > instances. This topic has a long, complicated history in nova and has > > yet to be implemented. Searching for "preemptible instances > > openstack" should yield you lots of discussion on the topic along > > with a few proof-of-concept approaches using external services or > > out-of-tree modifications to nova. > > > > > I’m new to Openstack, but I’ve already implemented cloud projects > > > with Microsoft Azure and have solid programming skills. Can you > > > give me a hint where and how I can start? > > > > As hinted above, this is likely to be a very difficult project given > > the fraught history of the idea. I don't want to dissuade you from > > this work but you should be aware of what you're getting into from > > the start. If you're serious about pursuing this, I suggest you first > > do some research on prior art. As noted above, there is lots of > > information on the internet about this. With this research done, > > you'll need to decide whether this is something you want to approach > > within nova itself, via out-of-tree extensions or via a third party > > project. If you're opting for integration with nova, then you'll need > > to think long and hard about how you would design such a system and > > start working on a spec (a design document) outlining your proposed > > solution. Details on how to write a spec are discussed at [1]. The > > only extension points nova offers today are scheduler filters and > > weighers so your options for an out-of-tree extension approach will > > be limited. A third party project will arguably be the easiest > > approach but you will be restricted to talking to nova's REST APIs > > which may limit the design somewhat. This Blazar spec [2] could give > > you some ideas on this approach (assuming it was never actually > > implemented, though it may well have been). > > > > > My university gave me three compute hosts and one control host to > > > implement this solution for the bachelor thesis. I’m currently > > > setting up Openstack and all the services on the control host all > > > by myself to understand all the functionality (sorry for not using > > > Packstack) 😉. All my hosts have CentOS 7 and the minimum > > > deployment which I configure is Train. > > > > > > My idea is to work with nova schedulers, because they seem to be > > > interesting for my case. I've found a whole infrastructure > > > description of the provisioning of an instance in Openstack > > > https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png > > > .  
> > > > > > The nova scheduler > > > https://docs.openstack.org/operations-guide/ops-customize-compute.html > > >  is the first component, where it is possible to implement > > > functions via Python and the Compute API > > > https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail > > >  to check for active VMs and probably delete them if needed before > > > a successful request for an instantiation can be made. > > > > > > What do you guys think about it? Does it seem like a good starting > > > point for you or is it the wrong approach? > > > > This could potentially work, but I suspect there will be serious > > performance implications with this, particularly at scale. Scheduler > > filters are historically used for simple things like "find me a group > > of hosts that have this metadata attribute I set on my image". Making > > API calls sounds like something that would take significant time and > > therefore slow down the schedule process. You'd also have to decide > > what your heuristic for deciding which VM(s) to delete would be, > > since there's nothing obvious in nova that you could use. > > You could use something as simple as filter extra specs or something > > as complicated as an external service. > > > > This should be lots to get you started. Once again, do make sure > > you're aware of what you're getting yourself into before you start. > > This could get complicated very quickly :) > > > > Cheers, > > Stephen > > > > > I'm very happy to have found you!!! > > > > > > Thank you really much for your time! > > > > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > > [2] > > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > > > > Best regards > > > Levon > > > > > > -----Ursprüngliche Nachricht----- > > > Von: Stephen Finucane > > > Gesendet: Montag, 31. Mai 2021 12:34 > > > An: Levon Melikbekjan ; > > > openstack at lists.openstack.org > > > Betreff: Re: Customization of nova-scheduler > > > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > > Hello Openstack team, > > > > > > > > is it possible to customize the nova-scheduler via Python? If > > > > yes, how? > > > > > > Yes, you can provide your own filters and weighers. This is > > > documented at [1]. > > > > > > Hope this helps, > > > Stephen > > > > > > [1] > > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > > > our-own-filter > > > > > > > > > > > Best regards > > > > Levon > > > > > > > > > > > > > > > > > From gmann at ghanshyammann.com Tue Jun 15 14:37:42 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 15 Jun 2021 09:37:42 -0500 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: <6fc4dc79-2083-4cf3-9ca8-ef6e1dd0ca5d@www.fastmail.com> References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com> <179ebf99f29.d451fbf0365691.4329366033312889323@ghanshyammann.com> <6fc4dc79-2083-4cf3-9ca8-ef6e1dd0ca5d@www.fastmail.com> Message-ID: <17a101b4e15.f69ca44330433.7144220500254514851@ghanshyammann.com> ---- On Tue, 08 Jun 2021 12:12:11 -0500 Clark Boylan wrote ---- > On Tue, Jun 8, 2021, at 7:14 AM, Ghanshyam Mann wrote: > > ---- On Tue, 08 Jun 2021 07:42:21 -0500 Luigi Toscano > > wrote ---- > > > On Tuesday, 8 June 2021 14:11:40 CEST Fernando Ferraz wrote: > > > > Hello, > > > > > > > > The NetApp CI for Cinder also relies on Zuul v2. 
> > > > We were able to recently move our jobs to focal, but dropping devstack-gate is a big concern considering our team size and schedule. Luigi, could you clarify what would immediately break after xena is branched?
> > > >
> > >
> > > For example, grenade jobs won't work anymore, because there won't be any new entry related to stable/xena added here to devstack-vm-gate-wrap.sh:
> > >
> > > https://opendev.org/openstack/devstack-gate/src/branch/master/devstack-vm-gate-wrap.sh#L335
> > >
> > > I understand that grenade testing is probably not relevant for 3rd party CIs (it should be, but that's a different discussion), but the main point is that devstack-gate is already in almost-maintenance mode. The minimal set of fixes that have been merged were only there to keep the very few legacy jobs defined on opendev.org working, and that number is basically 0 at this point.
> > >
> > > This means that there are a ton of potential breakages waiting to happen at any time, and the focal change is just one of them (and each one of you, CI owners, had to fix it on your own). Others may come anytime, and they won't be detected nor investigated anymore, because we haven't had de-facto legacy jobs around since wallaby.
> > >
> > > To summarize: if you use Zuul v2, you have been running for a long while on an unsupported software stack. The last tiny bits which could be used on both zuulv2 and zuulv3 in legacy mode to ease the transition are unsupported too.
> > >
> > > This problem, I believe, has been communicated periodically by the various teams, and the time to migrate is... last month. Please hurry up!
> >
> > Yes, we did this migration in the Victoria release cycle with two community-wide goals, with the direction of moving all CI off devstack-gate from Wallaby itself. But seeing a few jobs, and especially 3rd party CIs, still on it, we extended devstack-gate support for the Wallaby release [1]. So we extended the support for one more release, until stable/wallaby.
> >
> > NOTE: supporting an extra release extends devstack-gate support until that release becomes EOL, as we need to support that release's stable CI. So it is not just one more cycle of support but an even longer period of a year or more.
> >
> > Extending the support for the Xena cycle also seems very difficult, given the very small number of contributors and the limited bandwidth of the current core members in devstack-gate.
> >
> > I plan to officially declare the devstack-gate deprecation with the team, but please move your CI/CD to the latest Focal and to Zuul v3 ASAP.
>
> These changes have started to go up [2].
>
> I want to clarify a few things though. As far as I can remember, we have never required any specific CI system or setup. What we have done is require basic behaviors from the CI system: things like responding to "recheck", posting logs in a publicly accessible location and reporting them back, having contacts available so we can reach you if things break, and so on. What this means is that some third-party CI systems are likely running Jenkins. I know others that ran some homegrown thing that watched the Gerrit event stream. We recommend Zuul, and now Zuul v3 or newer, because it is a tool that we understand and can provide some assistance with.
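As an illustrative aside: the "homegrown thing that watched the Gerrit event stream" pattern is easy to sketch, since Gerrit exposes a JSON event stream over SSH via "gerrit stream-events". The account name, key path, and trigger hook in the following Python sketch are placeholders, not anything from this thread.

import json
import subprocess

# Connection details are placeholders; a real third-party CI would use
# its own service account and SSH key registered with the Gerrit server.
GERRIT_STREAM_CMD = [
    "ssh", "-p", "29418",
    "-i", "/path/to/ci-ssh-key",
    "my-third-party-ci@review.opendev.org",
    "gerrit", "stream-events",
]


def trigger_build(change, patchset):
    # Placeholder: hand off to whatever actually runs the job
    # (Jenkins, a shell script, ...).
    print("would test change %s, patchset %s" % (change, patchset))


def main():
    proc = subprocess.Popen(GERRIT_STREAM_CMD, stdout=subprocess.PIPE,
                            text=True)
    # Each line of output is one JSON-encoded Gerrit event.
    for line in proc.stdout:
        event = json.loads(line)
        # "comment-added" events carry review comments; re-test when a
        # reviewer asks for a recheck.
        if event.get("type") != "comment-added":
            continue
        if "recheck" not in (event.get("comment") or ""):
            continue
        trigger_build(event["change"]["number"],
                      event["patchSet"]["number"])


if __name__ == "__main__":
    main()

Zuul provides this event handling natively, along with node management, job queueing, and reporting, which is a large part of why it is the recommended tool.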
>
> Those that choose not to use the recommended tools are likely to need to invest in their own tooling and debugging. For devstack-gate we will not accept new patches to keep it running against master, but we need to keep it around for older stable branches. If those that are running their own set of tools want to keep devstack-gate alive for modern OpenStack, then forking it is likely the best path forward.

Updates: All the patches for deprecating devstack-gate are merged now, along with the governance one:

- https://review.opendev.org/c/openstack/governance/+/795385

The README file has been updated with the warning and with the forking approach Clark mentioned above:

https://opendev.org/openstack/devstack-gate/src/branch/master/README.rst

-gmann

>
> >
> > 1. https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html
> > 2. https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html
> >
> > [1]
> > https://review.opendev.org/c/openstack/devstack-gate/+/778129
> > https://review.opendev.org/c/openstack/devstack-gate/+/785010
> > [2] https://review.opendev.org/q/topic:%22deprecate-devstack-gate%22+(status:open%20OR%20status:merged)
>

From levonmelikbekjan at yahoo.de Tue Jun 15 14:59:04 2021
From: levonmelikbekjan at yahoo.de (levonmelikbekjan at yahoo.de)
Date: Tue, 15 Jun 2021 16:59:04 +0200
Subject: AW: AW: AW: Customization of nova-scheduler
In-Reply-To:
References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de>
Message-ID: <000001d761f6$fc41d1a0$f4c574e0$@yahoo.de>

Hi Stephen,

I am already done with my solution. Everything works as expected! :)

Thank you for your support. You guys are great.

Best regards
Levon

-----Original Message-----
From: Stephen Finucane
Sent: Tuesday, 15 June 2021 16:19
To: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org
Subject: Re: AW: AW: Customization of nova-scheduler

On Thu, 2021-06-10 at 17:21 +0200, levonmelikbekjan at yahoo.de wrote:
> Hi Stephen,
>
> I'm trying to customize my nova scheduler. However, if I change nova.conf as described here https://docs.openstack.org/operations-guide/de/ops-customize-compute.html, then my Python file cannot be found. How can I configure it correctly?
>
> Do you have any idea?
>
> My controller node is running CentOS 7. I couldn't install devstack because it is only supported on CentOS 8.

That document is very old. You want [1], which documents how to do this properly.

Hope this helps,
Stephen

[1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter

> Best regards
> Levon
>
> -----Original Message-----
> From: Stephen Finucane
> Sent: Monday, 31 May 2021 18:21
> To: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org
> Subject: Re: AW: Customization of nova-scheduler
>
> On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote:
> > Hello Stephen,
> >
> > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with OpenStack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example.
> >
> > Two cases should be solved!
> >
> > Case 1: A user A with a low priority uses a VM from OpenStack with half the performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created.
> >
> > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users.
> >
> > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high-priority user.
>
> What you're describing is commonly referred to as "preemptible" or "spot" instances. This topic has a long, complicated history in nova and has yet to be implemented. Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova.
>
> > I'm new to OpenStack, but I've already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start?
>
> As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work, but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art.
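For completeness, since the thread ends without Levon's final solution being shown: below is a minimal sketch of the kind of out-of-tree filter that the "writing your own filter" guide linked above describes. The class name, the "priority" flavor extra spec, and the half-of-the-host threshold are invented for illustration; this is not Levon's actual solution.

# priority_filter.py -- an illustrative sketch only. It follows the
# custom-filter interface documented in the filter-scheduler guide
# linked above as [1].
#
# Registration in nova.conf would look roughly like:
#   [filter_scheduler]
#   available_filters = nova.scheduler.filters.all_filters
#   available_filters = priority_filter.PriorityFilter
#   enabled_filters = <the existing defaults>,PriorityFilter

from nova.scheduler import filters


class PriorityFilter(filters.BaseHostFilter):
    """Reject low-priority requests when a host is short on headroom.

    Assumes the operator tags each flavor with a "priority" extra spec;
    that convention is invented for this example.
    """

    def host_passes(self, host_state, spec_obj):
        extra_specs = spec_obj.flavor.extra_specs or {}
        priority = int(extra_specs.get("priority", "0"))

        # High-priority requests may land anywhere. Actually preempting
        # running low-priority VMs would be the job of an external
        # service talking to the Compute API, not of this filter.
        if priority >= 100:
            return True

        # Low-priority requests must leave half of the host's usable RAM
        # free, so that a later high-priority VM still fits (Case 2 in
        # Levon's mail above).
        return host_state.free_ram_mb >= host_state.total_usable_ram_mb / 2.0

Note that a filter can only accept or reject candidate hosts for the request currently being scheduled; actually deleting a running low-priority VM (Case 1 above) would still have to happen outside nova, for example in an external service, as Stephen cautions throughout the thread.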