From gmann at ghanshyammann.com Tue Jun 1 00:56:19 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 31 May 2021 19:56:19 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 3rd at 1500 UTC Message-ID: <179c5124664.d7d11855244381.7893037772801020341@ghanshyammann.com> Hello Everyone, NOTE: FROM THIS WEEK ONWARDS, TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE) Technical Committee's next weekly meeting is scheduled for June 3rd at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, June 2nd, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From yasufum.o at gmail.com Tue Jun 1 01:31:28 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 1 Jun 2021 10:31:28 +0900 Subject: [tacker] Next meeting moving on OFTC Message-ID: <4ba17cf3-5446-4a86-aea1-fa621b12b2a3@gmail.com> Hi team, The next tacker team meeting will be held on OFTC #openstack-meeting channel. Please refer [1] and [2] for joining to OFTC if you are not ready. [1] https://www.oftc.net/ [2] https://www.oftc.net/Services/#register-your-account Thanks, Yasufumi From kklimonda at syntaxhighlighted.com Tue Jun 1 06:56:02 2021 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Tue, 01 Jun 2021 08:56:02 +0200 Subject: [magnum] docker_volume_size considered harmful? Message-ID: <57d9666b-75c0-47b8-9998-8f3394cd6f83@www.fastmail.com> Hi, Reading through recent magnum reviews on gerrit, I've noticed Spyros' comment that he hopes not many people use this option. That lead me to look into issues related to this, but the only related piece of information I could find was that this option has been observed (or proven, depending on whether we read release notes or commit message) to become a bottleneck for scaling larger clusters. Firstly, if this option is considered problematic for scaling wouldn't it make sense to somehow deprecated it, and put a warning in the documentation describing the reasoning for that? Right now it seems to be a preferred way of deploying clusters based on the documentation - it is used in the cluster template creation example here: https://docs.openstack.org/magnum/latest/user/ Secondly, if docker volume is considered problematic, does this mean that volume-based instances have the same problem in general, and image-based instances should be used instead? When does this become a problem? For clusters with 20 nodes? 100? 200? 500? -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com From pierre at stackhpc.com Tue Jun 1 07:46:55 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 1 Jun 2021 09:46:55 +0200 Subject: [blazar] Project channel and weekly meeting moving to OFTC Message-ID: Hello, Like the rest of the OpenStack community, Blazar is moving to the OFTC network, still using the #openstack-blazar channel. Please join us over there. The bi-weekly meeting will also be on the OFTC network, still in #openstack-meeting-alt for now. For more information about the change of IRC network, read [1]. 
Best wishes, Pierre Riteau (priteau) [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html From manchandavishal143 at gmail.com Tue Jun 1 08:18:59 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Tue, 1 Jun 2021 13:48:59 +0530 Subject: [horizon] Project channel and weekly meeting moving to OFTC n/w Message-ID: Hello Everyone, As you may already know Openstack IRC has moved from Freenode[1] n/w to OFTC n/w. So from tomorrow onwards, our weekly team meetings will be on OFTC n/w on the same channel (#openstack-meeting-alt) as previous one at Freenode n/w. Also, please try to discuss any topics on the same channel (openstack-horizon) on OFTC n/w. Kindly register yourself on OFTC n/w[ 2] if you have not done yet. Thanks & Regards, Vishal Manchanda [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html [2] https://www.oftc.net/Services/#register-your-account -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Tue Jun 1 08:26:22 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 1 Jun 2021 10:26:22 +0200 Subject: [qa] Weekly meeting moving to OFTC starting June 1st Message-ID: Hello, as you are probably aware, OpenStack IRC has moved from FreeNode to OFTC network during the weekend [1]. Starting this week our weekly Office Hour which is held on #openstack-qa channel *will be at OFTC* network. [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html Regards, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Jun 1 12:01:58 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 01 Jun 2021 13:01:58 +0100 Subject: [docs] Request to clean up reviewers on openstack-doc-core and openstack-contributor-guide-core In-Reply-To: <58D9BB93-0264-4A7F-89D6-E91CFD4914FD@demarco.com> References: <58D9BB93-0264-4A7F-89D6-E91CFD4914FD@demarco.com> Message-ID: <224e39f37e7780c87ce6bf666985d069d084ec4c.camel@redhat.com> On Mon, 2021-05-31 at 12:57 -0500, Amy wrote: > I can help > > Amy > > > On May 31, 2021, at 11:58 AM, Radosław Piliszek wrote: > > > > On Mon, May 31, 2021 at 1:19 PM Stephen Finucane wrote: > > > > > > > On Wed, 2021-05-26 at 18:24 -0500, Ghanshyam Mann wrote: > > > > ---- On Wed, 26 May 2021 12:22:46 -0500 Julia Kreger wrote ---- > > > > I am happy to help in the openstack/contributor-guide repo (as doing as part of the upstream institute training activity). > > > > > > Added you, gmann. > > > > > > > I can help too. > > > > -yoctozepto > > > Hurrah. Added you both. Thanks :) It's very low activity now, but increasing bus factor is always a good thing. Stephen From patryk.jakuszew at gmail.com Tue Jun 1 12:11:16 2021 From: patryk.jakuszew at gmail.com (Patryk Jakuszew) Date: Tue, 1 Jun 2021 14:11:16 +0200 Subject: [nova] Proper way to regenerate request_specs of existing instances? Message-ID: Hi! I have a Rocky deployment and I want to enable AggregateInstanceExtraSpecsFilter on it. There is one slight problem I'm trying to solve in a proper way: fixing the request_specs of instances that are already running. 
After enabling the filter, I want to add the necessary metadata keys to flavors, but this won't be propagated into the request_specs of running instances, and this will cause issues later on (like the scheduler selecting wrong destination hosts for migration, for example).

A few years ago I encountered a similar problem on Mitaka: that deployment already had the filter enabled, but some flavors were misconfigured and lacked the metadata keys. I ended up writing a crude Python script which connected directly to the Nova database, searched for bad request_specs and manually appended the necessary extra_specs keys into the request_specs JSON blob.

Now, my question is: has anyone encountered a similar scenario before? Is there a cleaner method for regenerating instance request_specs, or do I have to modify the JSON blobs manually by writing directly into the database?

-- Regards, Patryk Jakuszew -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sbauza at redhat.com Tue Jun 1 12:34:50 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 1 Jun 2021 14:34:50 +0200 Subject: [nova] Proper way to regenerate request_specs of existing instances? In-Reply-To: References: Message-ID:

On Tue, Jun 1, 2021 at 2:17 PM Patryk Jakuszew wrote: > Hi! > > I have a Rocky deployment and I want to enable > AggregateInstanceExtraSpecsFilter on it. There is one slight problem I'm > trying to solve in a proper way: fixing the request_specs of instances that > are already running. > > After enabling the filter, I want to add necessary metadata keys to > flavors, but this won't be propagated into request_specs of running > instances, and this will cause issues later on (like scheduler selecting > wrong destination hosts for migration, for example) > > Few years ago I encountered a similar problem on Mitaka: that deployment > already had the filter enabled, but some flavors were misconfigured and > lacked the metadata keys. I ended up writing a crude Python script which > connected directly into the Nova database, searched for bad request_specs > and manually appended the necessary extra_specs keys into request_specs > JSON blob. > > Now, my question is: has anyone encountered a similar scenario before? Is > there a more clean method for regeneration of instance request_specs, or do > I have to modify the JSON blobs manually by writing directly into the > database? > >

Nova looks at the RequestSpec records to know what the user asked for when creating the instance, and those values can also be modified when, for example, you move an instance; that's why we don't support modifying the RequestSpec directly. In general this question comes up about AZs: some operators want to modify the AZ value of a specific RequestSpec, but that would also mean the users of the related instance would not understand why it is suddenly in another AZ if the host belongs to a different one.

As you said, if you really want to modify the RequestSpec object, then write a Python script that uses the objects class to get the RequestSpec object directly and persist it again.

-Sylvain

-- > Regards, > Patryk Jakuszew > -------------- next part -------------- An HTML attachment was scrubbed...
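A minimal sketch of the kind of script Sylvain describes might look like the one below. It is untested and makes a few assumptions: it runs on a controller node with /etc/nova/nova.conf readable, and RequestSpec.get_by_instance_uuid(), Flavor.get_by_flavor_id() and RequestSpec.save() behave on Rocky as they do in later releases. Verify it against the actual Rocky code and try it on a throwaway instance first.

#!/usr/bin/env python3
# Untested sketch: refresh the flavor (and its new extra_specs) embedded in
# an instance's stored RequestSpec, instead of editing the JSON blob by hand.
import sys

from nova import config, context, objects


def refresh_request_spec(instance_uuid):
    # Load nova.conf so the object layer can reach the API database.
    config.parse_args(['refresh-reqspec', '--config-file', '/etc/nova/nova.conf'])
    objects.register_all()
    ctxt = context.get_admin_context()

    spec = objects.RequestSpec.get_by_instance_uuid(ctxt, instance_uuid)
    # Re-read the flavor so newly added aggregate_instance_extra_specs keys
    # become visible to the scheduler filters on the next move operation.
    spec.flavor = objects.Flavor.get_by_flavor_id(ctxt, spec.flavor.flavorid)
    spec.save()
    print('request_spec updated for instance %s' % instance_uuid)


if __name__ == '__main__':
    refresh_request_spec(sys.argv[1])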
URL: From rafaelweingartner at gmail.com Tue Jun 1 13:55:19 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 1 Jun 2021 10:55:19 -0300 Subject: [CLOUDKITTY] Fix tests cases broken by flask >=2.0.1 Message-ID: Hello guys, I was reviewing the patch https://review.opendev.org/c/openstack/cloudkitty/+/793790, and decided to propose an alternative patch ( https://review.opendev.org/c/openstack/cloudkitty/+/793973). Could you guys review it? The idea I am proposing is that, instead of mocking the root object ("flask.request"), we address the issue by mocking only the needed methods and attributes. This facilitates the understanding of the unit test, and also helps people to pin-point problems right away as the mocked attributes/methods are clearly seen in the unit test. -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Jun 1 14:00:36 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 1 Jun 2021 10:00:36 -0400 Subject: [ops] trial ops meetups team meeting on oftc now Message-ID: let's try to kick the tires of the new IRC location (irc.oftc.net) for a ops meetups team reunion https://etherpad.opendev.org/p/ops-meetups-team -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Jun 1 14:33:51 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 1 Jun 2021 10:33:51 -0400 Subject: [ops] ops meetups team meeting restarted succesfully on OFTC! Message-ID: We had a quick rehearsal meeting of the OpenStack Ops Meetups team on IRC on the new IRC host this morning. Minutes are linked below, however the key points : new team member amorin OFTC IRC worked fine openstack's meetbot instance is correctly connected and the expected commands worked (big shoutout to Jeremy Stanley and the opendev infra team for a seamless switch!) No strong preference to move away from IRC amongst those present We will try to re-animate the #openstack-operators as a channel for technical discussions between openstack operators Meeting ended Tue Jun 1 14:28:21 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:28 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2021/ops_meetup_team.2021-06-01-14.02.html 10:28 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2021/ops_meetup_team.2021-06-01-14.02.txt 10:28 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2021/ops_meetup_team.2021-06-01-14.02.log.html Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Jun 1 16:02:24 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 1 Jun 2021 09:02:24 -0700 Subject: [requirements][docs] sphinx and docutils major version update In-Reply-To: <20210528020128.earj2i2v5nxjnlu3@mthode.org> References: <20210528020128.earj2i2v5nxjnlu3@mthode.org> Message-ID: FYI, some projects may need an additional LaTeX font package for PDF rendering after the update: https://review.opendev.org/c/openstack/designate-dashboard/+/791558/3/bindep.txt Michael On Thu, May 27, 2021 at 7:05 PM Matthew Thode wrote: > > Looks like a major version update came along and broke things. 
I'd > appreciate if some docs people could take a look at > https://review.opendev.org/793022 > > Thanks, > > -- > Matthew Thode From patryk.jakuszew at gmail.com Tue Jun 1 16:06:01 2021 From: patryk.jakuszew at gmail.com (Patryk Jakuszew) Date: Tue, 1 Jun 2021 18:06:01 +0200 Subject: [nova] Proper way to regenerate request_specs of existing instances? In-Reply-To: References: Message-ID: On Tue, 1 Jun 2021 at 14:35, Sylvain Bauza wrote: > In general, this question is about AZs : as in general some operators want to modify the AZ value of a specific RequestSpec, this would also mean that the users using the related instance would not understand why now this instance would be on another AZ if the host is within another one. To be more specific: we do have AZs already, but we also want to add AggregateInstanceExtraSpecsFilter in order to prepare for a scenario with having multiple CPU generations in each AZ. > As you said, if you really want to modify the RequestSpec object, please then write a Python script that would use the objects class by getting the RequestSpec object directly and then persisting it again. Alright, I will try that again, but using the Nova objects class as you suggest. Thanks for the answer! -- Regards, Patryk From swamycnn at gmail.com Tue Jun 1 11:47:49 2021 From: swamycnn at gmail.com (Swamy C.N.N) Date: Tue, 1 Jun 2021 17:17:49 +0530 Subject: Is there s way, i can boot nova instance using external DHCP server ? Message-ID: Hi, I know, this does not fit in the cloud operator model nor the openstack-neutron way of doing things. However, i have this use case: - vmware is booting guest vm with underlying DHCP server for IPAM integrated with DNS for host registration as the guest vms power on. So, vm gets ip with its hostname DNS registered as it boots. - I started a few vms on openstack, I can attach guest vms on these provider n/w and I can see they are reachable just like any other vm on an enterprise network similar to vmware guest vms and this is great. However, I cannot find any docs, option where these guest vms can cross over to external DHCP for ip address management because the provider network operates at Layer3. Found this blueprint, review discussion https://blueprints.launchpad.net/neutron/+spec/dhcp-relay Which is close to what i'm looking for and not updated for sometime. Wanted to know if anyone has come across this scenario and how to get over this. As one of the options, validated openstack-designate service which solves the hostname registration not the IPAM. Also it needs site DNS server domain management . Something, I can look at if the external DHCP server is not an option for nova boot. Thanks, Swamy -------------- next part -------------- An HTML attachment was scrubbed... URL: From swamycnn at gmail.com Tue Jun 1 11:58:02 2021 From: swamycnn at gmail.com (Swamy C.N.N) Date: Tue, 1 Jun 2021 17:28:02 +0530 Subject: [ops] Is there s way, i can boot nova instance using external DHCP Message-ID: Hi I know, this does not fit in the cloud operator model nor the openstack-neutron way of doing things. However, i have this use case: - vmware is booting guest vm with underlying DHCP server for IPAM integrated with DNS for host registration as the guest vms power on. So, vm gets ip with its hostname DNS registered as it boots. - I started a few vms on openstack, I can attach guest vms on these provider n/w and I can see they are reachable just like any other vm on an enterprise network similar to vmware guest vms and this is great. 
However, I cannot find any docs, option where these guest vms can cross over to external DHCP for ip address management because the provider network operates at Layer3. Found this blueprint, review discussion https://blueprints.launchpad.net/neutron/+spec/dhcp-relay Which is close to what i'm looking for and not updated for sometime. Wanted to know if anyone has come across this scenario and how to get over this. As one of the options, validated openstack-designate service which solves the hostname registration not the IPAM. Also it needs site DNS server domain management . Something, I can look at if the external DHCP server is not an option for nova boot. Thanks, Swamy -------------- next part -------------- An HTML attachment was scrubbed... URL: From levonmelikbekjan at yahoo.de Tue Jun 1 12:12:45 2021 From: levonmelikbekjan at yahoo.de (levonmelikbekjan at yahoo.de) Date: Tue, 1 Jun 2021 14:12:45 +0200 Subject: AW: AW: Customization of nova-scheduler In-Reply-To: <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> Message-ID: <000001d756df$6e7ab430$4b701c90$@yahoo.de> Hello Stephen, thank you for your quick reply and your valuable information. I am well aware that this task will not be easy to accomplish. However, I like challenging tasks because you can learn a lot from them. Thanks for the warning, but I think you misunderstood me. It is not my intention to reserve ressources for anyone. Let me explain you my aim more detailed. Hosts will exist in our infrastructure that will belong to a user (owner). Each user will have an aggregate that will be assigned to his user object as an id in the "extra" attribute field. All of the compute nodes that will be owned by the owner are located within this host aggregate. If hosts from an aggregate that belong to someone are not in use, everyone else is allowed to use them (for example a user who does not have any servers in his possession). When the owner decides to create a VM and our cloud doesn't have enough resources, all servers will be deleted from his compute node based on the aggregate id which is located in his user object. Then the function "Launch instance" tries again to create his VM. You're right, the API requests will take some time. The only requests I will send are one-off: - Get user by name/id - Get server list - Get aggregate by id ... and maybe a few times the server delete call. Maybe it is possible to store aggregate information locally and access it with python?!?! Or maybe it is better to store all the host information directly in the user object without having always to call the aggregate API. Alternatively, hypervisors could be used to minimize the number of calls for deletion of servers. This would only be a onetime call at the very beginning of my python script to determine the amount of free and used resources. With hypervisors and the information of the required resources by the owner of an aggregate, I could delete specific servers without having to delete all of them. I like the feature with the aggregates very much, especially because it is possible to add new compute nodes at any time. Kind regards Levon -----Ursprüngliche Nachricht----- Von: Stephen Finucane Gesendet: Montag, 31. 
Mai 2021 18:21 An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org Betreff: Re: AW: Customization of nova-scheduler On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > Hello Stephen, > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. > > Two cases should be solved! > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. What you're describing is commonly referred to as "preemptible" or "spot" instances. This topic has a long, complicated history in nova and has yet to be implemented. Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova. > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art. As noted above, there is lots of information on the internet about this. With this research done, you'll need to decide whether this is something you want to approach within nova itself, via out-of-tree extensions or via a third party project. If you're opting for integration with nova, then you'll need to think long and hard about how you would design such a system and start working on a spec (a design document) outlining your proposed solution. Details on how to write a spec are discussed at [1]. The only extension points nova offers today are scheduler filters and weighers so your options for an out-of-tree extension approach will be limited. A third party project will arguably be the easiest approach but you will be restricted to talking to nova's REST APIs which may limit the design somewhat. This Blazar spec [2] could give you some ideas on this approach (assuming it was never actually implemented, though it may well have been). > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. 
I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? This could potentially work, but I suspect there will be serious performance implications with this, particularly at scale. Scheduler filters are historically used for simple things like "find me a group of hosts that have this metadata attribute I set on my image". Making API calls sounds like something that would take significant time and therefore slow down the schedule process. You'd also have to decide what your heuristic for deciding which VM(s) to delete would be, since there's nothing obvious in nova that you could use. You could use something as simple as filter extra specs or something as complicated as an external service. This should be lots to get you started. Once again, do make sure you're aware of what you're getting yourself into before you start. This could get complicated very quickly :) Cheers, Stephen > I'm very happy to have found you!!! > > Thank you really much for your time! [1] https://specs.openstack.org/openstack/nova-specs/readme.html [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > Best regards > Levon > > -----Ursprüngliche Nachricht----- > Von: Stephen Finucane > Gesendet: Montag, 31. Mai 2021 12:34 > An: Levon Melikbekjan ; > openstack at lists.openstack.org > Betreff: Re: Customization of nova-scheduler > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > Hello Openstack team, > > > > is it possible to customize the nova-scheduler via Python? If yes, how? > > Yes, you can provide your own filters and weighers. This is documented at [1]. > > Hope this helps, > Stephen > > [1] > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > our-own-filter > > > > > Best regards > > Levon > > > > From Fadi.Badine at enghouse.com Tue Jun 1 13:45:01 2021 From: Fadi.Badine at enghouse.com (Fadi Badine) Date: Tue, 1 Jun 2021 13:45:01 +0000 Subject: Tacker Auto Scale Support Message-ID: Hello, I would like to know if VNF auto scaling is supported by Tacker and if so in which release. I tried looking at release notes but couldn't find anything. Thanks! Best regards, Fadi Badine Product Manager Office: +961 (1) 900 818 Mobile: +961 (3) 822 966 W: www.enghousenetworks.com E: fadi.badine at enghouse.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at est.tech Tue Jun 1 17:14:19 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 01 Jun 2021 19:14:19 +0200 Subject: =?UTF-8?B?562U5aSNOg==?= [Nova] Meeting time poll In-Reply-To: References: <56c234f49e7c41bb86a86af824f41187@inspur.com> Message-ID: Hi, So today we decided to have an extra meeting timeslot first Thursday of every month at 8:00 UTC on #openstack-nova on the OFTC IRC server. So the first such meeting will happen on Thursday 2021.06.03. I've update the meeting wiki with the new timings and pushed a patch to add the meeting to the IRC meeting schedule[2]. See you on the meeting! Cheers, gibi [1] https://wiki.openstack.org/wiki/Meetings/Nova [2] https://review.opendev.org/c/opendev/irc-meetings/+/794010 On Wed, May 26, 2021 at 09:50, Balazs Gibizer wrote: > Hi, > > On Tue, May 25, 2021 at 07:10, Sam Su (苏正伟) > wrote: >> Hi, gibi: >> I'm very sorry for respone later. >> A meeting around 8:00 UTC seems very appropriate to us. It is >> afternoon work time in East Asian when 8:00 UTC. >> Now my colleague, have some work on Cyborg across with Nova, >> passthroug device, TPM and so on. If they can join the irc meeting >> talking with the community , it will be much helpful. >> > > @Sam: We discussed your request yesterday[1] and it seems that the > team is not objecting against a monthly office hour in > #openstack-nova around UTC 8 or UTC 9. But we did not agreed which > day we should have it so I set up a poll[2]. > > @Team: As we discussed yesterday I opened a poll to agree on the day > of the week and the exact start time for the Asia friendly office > hours slot. Please vote in the poll[2] before next Tuesday. > > Cheers, > gibi > > [1] > http://eavesdrop.openstack.org/meetings/nova/2021/nova.2021-05-25-16.00.log.html#l-100 > [2] https://doodle.com/poll/svrnmrtn6nnknzqp > >> >> -----邮件原件----- >> 发件人: Balazs Gibizer [mailto:balazs.gibizer at est.tech] >> 发送时间: 2021年5月14日 14:12 >> 收件人: Sam Su (苏正伟) >> 抄送: alifshit at redhat.com; openstack-discuss at lists.openstack.org >> 主题: Re: [Nova] Meeting time poll >> >> >> >> On Fri, May 14, 2021 at 01:23, Sam Su (苏正伟) >> wrote: >>> From: Sam Su (苏正伟) >>> Sent: Friday, May 14, 2021 03:23 >>> To: alifshit at redhat.com >>> Cc: openstack-discuss at lists.openstack.org >>> Subject: Re: [Nova] Meeting time poll >>> >>> Hi, Nova team: >> >> Hi Sam! >> >>> There are many asian developers for Openstack community. I >>> found the current IRC time of Nova is not friendly to them, >>> especially >>> to East Asian. >>> If they >>> can take part in the IRC meeting, the Nova may have more >>> developers. >>> Of >>> cource, Central Europe and NA West Coast is firstly considerable. >>> If >>> the team could schedule the meeting once per month, time suitable >>> for >>> asians, more people would participate in the meeting discussion. >> >> You have a point. In the past Nova had alternating meeting time >> slots one for EU+NA and one for the NA+Asia timezones. Our >> experience was that the NA+Asia meeting time slot was mostly >> lacking participants. So we merged the two slots. But I can imagine >> that the situation has changed since and there might be need for an >> alternating meeting again. >> >> We can try what you suggest and do an Asia friendly meeting once a >> month. The next question is what time you would like to have that >> meeting. Or more specifically which part of the nova team you would >> like to meet more? 
>> >> * Do a meeting around 8:00 UTC to meet Nova devs from the EU >> >> * Do a meeting around 0:00 UTC to meet Nova devs from North America >> >> If we go for the 0:00 UTC time slot then I need somebody to chair >> that meeting as I'm from the EU. >> >> Alternatively to having a formal meeting I can offer to hold a free >> style office hour each Thursday 8:00 UTC in #openstack-nova. I made >> the same offer when we moved the nova meeting to be a non >> alternating one. >> But honestly I don't remember ever having discussion happening >> specifically due to that office hour in #openstack-nova. >> >> Cheers, >> gibi >> >> p.s.: the smime in your mail is not really mailing list friendly. >> Your mail does not appear properly in the archive. >> >> >> >> > > > From Albert.Shih at obspm.fr Tue Jun 1 18:44:35 2021 From: Albert.Shih at obspm.fr (Albert Shih) Date: Tue, 1 Jun 2021 20:44:35 +0200 Subject: [victoria][cinder ?] Dell Unity + Iscsi Message-ID: Hi everyone I've a small openstack configuration with 4 computes nodes, a Dell Unity 480F for the storage. I'm using cinder with iscsi. Everything work when I create a instance. But some instance after few time are not reponsive. When I check on the hypervisor I can see [888240.310461] sd 14:0:0:2: [sdb] tag#120 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [888240.310493] sd 14:0:0:2: [sdb] tag#120 Sense Key : Illegal Request [current] [888240.310502] sd 14:0:0:2: [sdb] tag#120 Add. Sense: Logical unit not supported [888240.310510] sd 14:0:0:2: [sdb] tag#120 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 [888240.310519] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 [888240.311045] sd 14:0:0:2: [sdb] tag#121 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [888240.311050] sd 14:0:0:2: [sdb] tag#121 Sense Key : Illegal Request [current] [888240.311065] sd 14:0:0:2: [sdb] tag#121 Add. Sense: Logical unit not supported [888240.311070] sd 14:0:0:2: [sdb] tag#121 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 [888240.311074] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 [888240.342482] sd 14:0:0:2: [sdb] tag#70 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [888240.342490] sd 14:0:0:2: [sdb] tag#70 Sense Key : Illegal Request [current] [888240.342496] sd 14:0:0:2: [sdb] tag#70 Add. Sense: Logical unit not supported I check on the hypervisor, no error at all on the ethernet interface. I check on the switch, no error at all on the interface on the switch. No sure but it's seem the problem appear more often when the instance are doing nothing during some time. Every firmware, software on the Unity are uptodate. The 4 computes are exactly same, they run the same version of the nova-compute & OS & firmware on the hardware. Any clue ? Or place to search the problem ? 
Regards -- Albert SHIH Observatoire de Paris xmpp: jas at obspm.fr Heure local/Local time: Tue Jun 1 08:27:42 PM CEST 2021 From smooney at redhat.com Tue Jun 1 20:55:43 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 01 Jun 2021 21:55:43 +0100 Subject: AW: Customization of nova-scheduler In-Reply-To: <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> Message-ID: <0fbc1e49a3f87aadc82fa12a53454bc76a3dae4a.camel@redhat.com> On Mon, 2021-05-31 at 17:21 +0100, Stephen Finucane wrote: > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > Hello Stephen, > > > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. > > > > Two cases should be solved! > > > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > one thing to keep in mind is that end users are not allow to know the capstity of the cloud in terms of number of host, the resouces on a host or what host there vm is placeed on. so as a user the conceph of "a low priority uses a VM from Openstack with half performance of the available host" is not something that you can express arctecurally in nova. flavor define the size of vms in absolute term i.e. 4GB of ram not relitve "50% of the host". we have a 3 laryer schuldeing prcoess that start with a query to the placment service for a set of quantitative resouce class and qualitative traits. that produces a set fo allcoation candiate against a serise of host that could fit the instance, we then filter those host useing python filters wich are boolean fucntion that either pass the host or reject it finally after filtering we weight the remaining hosts and selecet one to boot the vm. once you have completed a steph in this processs you can nolonger go to a previous step and you can never readd a host afteer it has been elimiated by placemnt or a filter to be considered again. as a result if you get the end of the avaiable hosts and there are none that can fix your vm we cannot delete a vm and start again without redoing all the work and possible facing with concurrent api requests. this is why this is a hard problem with out an external service that can rebalance exiting workloads and free up capsity. > > These cases should work for unlimited users. 
In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. > > What you're describing is commonly referred to as "preemptible" or "spot" > instances. This topic has a long, complicated history in nova and has yet to be > implemented. Searching for "preemptible instances openstack" should yield you > lots of discussion on the topic along with a few proof-of-concept approaches > using external services or out-of-tree modifications to nova. > > > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? > > As hinted above, this is likely to be a very difficult project given the fraught > history of the idea. I don't want to dissuade you from this work but you should > be aware of what you're getting into from the start. If you're serious about > pursuing this, I suggest you first do some research on prior art. As noted > above, there is lots of information on the internet about this. With this > research done, you'll need to decide whether this is something you want to > approach within nova itself, via out-of-tree extensions or via a third party > project. If you're opting for integration with nova, then you'll need to think > long and hard about how you would design such a system and start working on a > spec (a design document) outlining your proposed solution. Details on how to > write a spec are discussed at [1]. The only extension points nova offers today > are scheduler filters and weighers so your options for an out-of-tree extension > approach will be limited. A third party project will arguably be the easiest > approach but you will be restricted to talking to nova's REST APIs which may > limit the design somewhat. This Blazar spec [2] could give you some ideas on > this approach (assuming it was never actually implemented, though it may well > have been). > > > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? > > This could potentially work, but I suspect there will be serious performance > implications with this, particularly at scale. Scheduler filters are > historically used for simple things like "find me a group of hosts that have > this metadata attribute I set on my image". 
Making API calls sounds like > something that would take significant time and therefore slow down the schedule > process. You'd also have to decide what your heuristic for deciding which VM(s) > to delete would be, since there's nothing obvious in nova that you could use. > You could use something as simple as filter extra specs or something as > complicated as an external service. yes implementing preemption in the scheduler as filet was disccused in the passed and discounted for the performance implication stephen hinted at. in tree we currentlyt do not allow filter to make any api or db queires. that approach also will not work toady since you would have to rexecute the query to the placment service after deleting an instance when you run out of capacity and restart the filtering which a filter cannot do as i noted above. the most recent spec in this area was https://review.opendev.org/c/openstack/nova-specs/+/438640 for the integrated approch and https://review.opendev.org/c/openstack/nova-specs/+/554212/12 which proposed adding a pending state for use with a standalone service https://gitlab.cern.ch/ttsiouts/ReaperServicePrototype ther are a number of presentation on this form cern/stackhapc https://www.stackhpc.com/scientific-sig-at-the-dublin-ptg.html http://openstack-in-production.blogspot.com/2018/02/maximizing-resource-utilization-with.html https://openlab.cern/sites/openlab.web.cern.ch/files/2018-07/Containers_on_Baremetal_and_Preemptible_VMs_at_CERN_and_SKA.pdf https://indico.cern.ch/event/739089/sessions/282073/attachments/1689073/2717151/ASDF_preemptible.pdf the current state is rebuilding from cell0 is not support but the pending state was never added and the reaper service was not upstream. work in this are has now move the blazar project as stphen noted in [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html but is dont think it has made much progress. https://review.opendev.org/q/topic:%22preemptibles%22+(status:open%20OR%20status:merged) nova previously had a pluggable scheduler that would have allowed you to reimplent the scudler entirely from scratch but we removed that capability in the last year or two. at this point the only viable approach that will not take multiple upstream cycles to this is really to use an external service. > > This should be lots to get you started. Once again, do make sure you're aware of > what you're getting yourself into before you start. This could get complicated > very quickly :) yes anything other then adding the pending state to nova will be very complex due to placement interaction. you would really need to implement a fallback query mechanism in the scudler iteself. anything after the call to placement is already too late. you might be able to reuse consumer types to make some allocation preemtiblae and have a prefilter decide if an allocation should be a normal nova consumer or premtable consumer based on a flavor extra spec.https://docs.openstack.org/placement/train/specs/train/approved/2005473-support-consumer-types.html this would still require the pending state and an external reaper service to free the capsity to be clean but its a possible direction. > > Cheers, > Stephen > > > I'm very happy to have found you!!! > > > > Thank you really much for your time! 
> > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > > Best regards > > Levon > > > > -----Ursprüngliche Nachricht----- > > Von: Stephen Finucane > > Gesendet: Montag, 31. Mai 2021 12:34 > > An: Levon Melikbekjan ; openstack at lists.openstack.org > > Betreff: Re: Customization of nova-scheduler > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > Hello Openstack team, > > > > > > is it possible to customize the nova-scheduler via Python? If yes, how? > > > > Yes, you can provide your own filters and weighers. This is documented at [1]. > > > > Hope this helps, > > Stephen > > > > [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > > > > > > > > Best regards > > > Levon > > > > > > > > > > From smooney at redhat.com Tue Jun 1 21:14:45 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 01 Jun 2021 22:14:45 +0100 Subject: [nova] Proper way to regenerate request_specs of existing instances? In-Reply-To: References: Message-ID: On Tue, 2021-06-01 at 18:06 +0200, Patryk Jakuszew wrote: > On Tue, 1 Jun 2021 at 14:35, Sylvain Bauza wrote: > > In general, this question is about AZs : as in general some operators want to modify the AZ value of a specific RequestSpec, this would also mean that the users using the related instance would not understand why now this instance would be on another AZ if the host is within another one. > > To be more specific: we do have AZs already, but we also want to add > AggregateInstanceExtraSpecsFilter in order to prepare for a scenario > with having multiple CPU generations in each AZ. the supported way to do that woudl be to resize the instance. nova currently does not suppout updating the embedded flavor any other way. that said this is yet another usecase for a recreate api that would allow updating the embedded flavor and image metadta. nova expect flavours to be effectively immutable once an instace start to use them. the same is true of image properties so partly be design this has not been easy to support in nova because it was a usgage model we have declard out of scope. the solution that is vaiable today is rebuidl ro resize but a recreate api is really want you need. > > > As you said, if you really want to modify the RequestSpec object, please then write a Python script that would use the objects class by getting the RequestSpec object directly and then persisting it again. > > Alright, I will try that again, but using the Nova objects class as you suggest. this has come up often enough that we __might__ (im stressing might since im not sure we really want to do this) consider adding a nova manage command to do this. e.g. nova-mange instance flavor-regenerate and nova-mange instance image-regenerate those command woudl just recrate the embeded flavor and image metadta without moving the vm or otherwise restarting it. you would then have to hard reboot it or migrate it sepereatlly. im not convicned this is a capablity we should provide to operators in tree however via nova-manage. with my downstream hat on im not sure how supportable it woudl for example since like nova reset-state it woudl be very easy to render vms unbootable in there current localthouh if a tenatn did a hard reboot and cause all kinds of stange issues that are hard to debug an fix. > > Thanks for the answer! 
> > -- > Regards, > Patryk > From Arkady.Kanevsky at dell.com Tue Jun 1 21:24:22 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 1 Jun 2021 21:24:22 +0000 Subject: [interop] something strange Message-ID: Team, Once we merged https://review.opendev.org/c/osf/interop/+/786116 I expect that all old guidelines will move into directory "previous". I just sync my master to latest and still see old guidelines on top level directory. Any idea why? Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Tue Jun 1 21:39:20 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 1 Jun 2021 23:39:20 +0200 Subject: [interop] something strange In-Reply-To: References: Message-ID: Hi Arkady, I had to revert it (see the latest comment or https://review.opendev.org/c/osf/interop/+/792883) as it caused troubles with the refstack server - it wasn't able to retrieve the guidelines. Reason for revert: refstack server gives 404 on the guidelines: https://refstack.openstack.org/#/guidelines .. seems like https://review.opendev.org/c/osf/refstack/+/790940 didn't handle the update of the guidelines location everywhere - I suspect that some changes in refstack-ui are needed as well, ah I'm sorry for inconvenience, On Tue, 1 Jun 2021 at 23:24, Kanevsky, Arkady wrote: > Team, > > Once we merged https://review.opendev.org/c/osf/interop/+/786116 > > I expect that all old guidelines will move into directory “previous”. > > I just sync my master to latest and still see old guidelines on top level > directory. > > Any idea why? > > > > Thanks, > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Jun 1 22:56:28 2021 From: amy at demarco.com (Amy Marrich) Date: Tue, 1 Jun 2021 17:56:28 -0500 Subject: [Meeting] RDO Community Meeting Message-ID: Just a reminder that this week's meeting will be our video meeting[0][1] as it is the first meeting of the month. Our IRC meetings will be on OFTC in the #rdo channel beginning next week. Thanks, Amy (spotz) 0 - https://meet.google.com/uzo-tfkt-top. 1 - https://etherpad.opendev.org/p/RDO-Meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From suzhengwei at inspur.com Wed Jun 2 01:56:54 2021 From: suzhengwei at inspur.com (=?utf-8?B?U2FtIFN1ICjoi4/mraPkvJ8p?=) Date: Wed, 2 Jun 2021 01:56:54 +0000 Subject: =?utf-8?B?562U5aSNOiDnrZTlpI06IFtOb3ZhXSBNZWV0aW5nIHRpbWUgcG9sbA==?= In-Reply-To: References: <56c234f49e7c41bb86a86af824f41187@inspur.com> Message-ID: <3c8c17a23ed84d8a9f6f05eb9a52b0db@inspur.com> Hi, gibi I am glad to introduce the extra meeting to my colleagues and other developers in China. I will be on the meeting. See you. -----邮件原件----- 发件人: Balazs Gibizer [mailto:balazs.gibizer at est.tech] 发送时间: 2021年6月2日 1:14 收件人: Sam Su (苏正伟) 抄送: alifshit at redhat.com; openstack-discuss at lists.openstack.org 主题: Re: 答复: [Nova] Meeting time poll Hi, So today we decided to have an extra meeting timeslot first Thursday of every month at 8:00 UTC on #openstack-nova on the OFTC IRC server. 
So the first such meeting will happen on Thursday 2021.06.03. I've update the meeting wiki with the new timings and pushed a patch to add the meeting to the IRC meeting schedule[2]. See you on the meeting! Cheers, gibi [1] https://wiki.openstack.org/wiki/Meetings/Nova [2] https://review.opendev.org/c/opendev/irc-meetings/+/794010 On Wed, May 26, 2021 at 09:50, Balazs Gibizer wrote: > Hi, > > On Tue, May 25, 2021 at 07:10, Sam Su (苏正伟) > wrote: >> Hi, gibi: >> I'm very sorry for respone later. >> A meeting around 8:00 UTC seems very appropriate to us. It is >> afternoon work time in East Asian when 8:00 UTC. >> Now my colleague, have some work on Cyborg across with Nova, >> passthroug device, TPM and so on. If they can join the irc meeting >> talking with the community , it will be much helpful. >> > > @Sam: We discussed your request yesterday[1] and it seems that the > team is not objecting against a monthly office hour in #openstack-nova > around UTC 8 or UTC 9. But we did not agreed which day we should have > it so I set up a poll[2]. > > @Team: As we discussed yesterday I opened a poll to agree on the day > of the week and the exact start time for the Asia friendly office > hours slot. Please vote in the poll[2] before next Tuesday. > > Cheers, > gibi > > [1] > http://eavesdrop.openstack.org/meetings/nova/2021/nova.2021-05-25-16.0 > 0.log.html#l-100 [2] https://doodle.com/poll/svrnmrtn6nnknzqp > >> >> -----邮件原件----- >> 发件人: Balazs Gibizer [mailto:balazs.gibizer at est.tech] >> 发送时间: 2021年5月14日 14:12 >> 收件人: Sam Su (苏正伟) >> 抄送: alifshit at redhat.com; openstack-discuss at lists.openstack.org >> 主题: Re: [Nova] Meeting time poll >> >> >> >> On Fri, May 14, 2021 at 01:23, Sam Su (苏正伟) >> wrote: >>> From: Sam Su (苏正伟) >>> Sent: Friday, May 14, 2021 03:23 >>> To: alifshit at redhat.com >>> Cc: openstack-discuss at lists.openstack.org >>> Subject: Re: [Nova] Meeting time poll >>> >>> Hi, Nova team: >> >> Hi Sam! >> >>> There are many asian developers for Openstack community. I >>> found the current IRC time of Nova is not friendly to them, >>> especially to East Asian. >>> If they >>> can take part in the IRC meeting, the Nova may have more >>> developers. >>> Of >>> cource, Central Europe and NA West Coast is firstly considerable. >>> If >>> the team could schedule the meeting once per month, time suitable >>> for asians, more people would participate in the meeting >>> discussion. >> >> You have a point. In the past Nova had alternating meeting time slots >> one for EU+NA and one for the NA+Asia timezones. Our experience was >> that the NA+Asia meeting time slot was mostly lacking participants. >> So we merged the two slots. But I can imagine that the situation has >> changed since and there might be need for an alternating meeting >> again. >> >> We can try what you suggest and do an Asia friendly meeting once a >> month. The next question is what time you would like to have that >> meeting. Or more specifically which part of the nova team you would >> like to meet more? >> >> * Do a meeting around 8:00 UTC to meet Nova devs from the EU >> >> * Do a meeting around 0:00 UTC to meet Nova devs from North America >> >> If we go for the 0:00 UTC time slot then I need somebody to chair >> that meeting as I'm from the EU. >> >> Alternatively to having a formal meeting I can offer to hold a free >> style office hour each Thursday 8:00 UTC in #openstack-nova. I made >> the same offer when we moved the nova meeting to be a non >> alternating one. 
>> But honestly I don't remember ever having discussion happening >> specifically due to that office hour in #openstack-nova. >> >> Cheers, >> gibi >> >> p.s.: the smime in your mail is not really mailing list friendly. >> Your mail does not appear properly in the archive. >> >> >> >> > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3606 bytes Desc: not available URL: From zakhar at gmail.com Wed Jun 2 03:24:55 2021 From: zakhar at gmail.com (Zakhar Kirpichenko) Date: Wed, 2 Jun 2021 06:24:55 +0300 Subject: [telemetry] Wallaby ceilometer.compute.discovery fails to get domain metadata Message-ID: Hi! I'm facing a weird situation where Ceilometer compute agent fails to get libvirt domain metadata on an Ubuntu 20.04 LTS with the latest updates, kernel 5.4.0-65-generic and Openstack Wallaby Nova compute services installed using the official Wallaby repo for Ubuntu 20.04. All components have been deployed manually. Ceilometer agent is configured with instance_discovery_method = libvirt_metadata. The agent is unable to fetch the domain metadata, and the following error messages appear in /var/log/ceilometer/ceilometer-agent-compute.log on agent start and periodic polling attempts: 2021-06-01 16:01:18.297 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid baf06f57-ac5b-4661-928c-7adaeaea0311 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present 2021-06-01 16:01:18.298 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid 208c0d7a-41a3-4fa6-bf72-2f9594ac6b8d metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present 2021-06-01 16:01:18.300 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid d979a527-c1ba-4b29-8e30-322d4d2efcf7 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present 2021-06-01 16:01:18.301 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid a41f21b6-766d-4979-bbe1-84f421b0c3f2 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present 2021-06-01 16:01:18.302 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid fd5ffe32-c6d6-4898-9ba2-2af1ffebd502 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present 2021-06-01 16:01:18.302 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid aff042c9-c311-4944-bc42-09ccd5a90eb7 metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present 2021-06-01 16:01:18.303 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid 9510bc46-e4e2-490c-9cbe-c9eb5e349b8d metadata, libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present 2021-06-01 16:01:18.304 1835684 ERROR ceilometer.compute.discovery [-] Fail to get domain uuid 4d2c2c9b-4eff-460a-a00b-19fdbe33f5d4 metadata, 
libvirtError: metadata not found: Requested metadata element is not present: libvirt.libvirtError: metadata not found: Requested metadata element is not present 2021-06-01 16:01:18.305 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster cpu_l3_cache, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.305 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.305 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.305 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.306 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.307 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 2021-06-01 16:01:18.307 1835684 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3/dist-packages/ceilometer/polling/manager.py:177 All domains exist and their metadata is readily available using virsh or a simple Python script. Nova compute service is fully functional, Ceilometer agent is partially functional as it is able to export compute.node.cpu.* metrics but nothing related to libvirt domains. I already filed a bug report https://bugs.launchpad.net/ceilometer/+bug/1930446, but would appreciate feedback and/or advice. 
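For reference, the "simple Python script" mentioned above is roughly the minimal sketch below (using python3-libvirt and what I believe is the same nova metadata namespace the discovery code asks for; the connection URI and namespace URI are my assumptions, not taken from the agent's configuration):

# Minimal check: ask each domain for the nova metadata element, which is
# the lookup that appears to fail inside ceilometer-agent-compute.
import libvirt

NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.0"  # assumed namespace

conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    try:
        xml = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS, 0)
        print(dom.UUIDString(), "nova metadata present:", xml[:60], "...")
    except libvirt.libvirtError as exc:
        print(dom.UUIDString(), "nova metadata missing:", exc)
conn.close()

Run as root on the compute node, a check along these lines prints the nova <instance> element for every domain here, which is why the agent-side failures are so confusing.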
Best regards, Zakhar --- More deployment information: Installed Ceilometer-related packages: ceilometer-agent-compute 2:16.0.0-0ubuntu1~cloud0 ceilometer-common 2:16.0.0-0ubuntu1~cloud0 python3-ceilometer 2:16.0.0-0ubuntu1~cloud0 Installed Nova-related packages: nova-common 3:23.0.0-0ubuntu1~cloud0 nova-compute 3:23.0.0-0ubuntu1~cloud0 nova-compute-kvm 3:23.0.0-0ubuntu1~cloud0 nova-compute-libvirt 3:23.0.0-0ubuntu1~cloud0 python3-nova 3:23.0.0-0ubuntu1~cloud0 python3-novaclient 2:17.4.0-0ubuntu1~cloud0 Installed Libvirt-related packages: libvirt-clients 6.0.0-0ubuntu8.9 libvirt-daemon 6.0.0-0ubuntu8.9 libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.9 libvirt-daemon-driver-storage-rbd 6.0.0-0ubuntu8.9 libvirt-daemon-system 6.0.0-0ubuntu8.9 libvirt-daemon-system-systemd 6.0.0-0ubuntu8.9 libvirt0:amd64 6.0.0-0ubuntu8.9 python3-libvirt 6.1.0-1 Installed Qemu-related packages: libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.9 qemu-block-extra:amd64 1:4.2-3ubuntu6.16 qemu-kvm 1:4.2-3ubuntu6.16 qemu-system-common 1:4.2-3ubuntu6.16 qemu-system-data 1:4.2-3ubuntu6.16 qemu-system-gui:amd64 1:4.2-3ubuntu6.16 qemu-system-x86 1:4.2-3ubuntu6.16 qemu-utils 1:4.2-3ubuntu6.16 Apparmor is enabled and running the default configuration, no messages related to apparmor and libvirt, qemu, nova-compute, ceilometer-agent, etc are visible in the logs. I am also attaching the relevant Ceilometer agent and Nova configuration files: ceilometer.conf: [DEFAULT] transport_url = rabbit://WORKING-TRANSPORT-URL verbose = true debug = true auth_strategy = keystone log_dir = /var/log/ceilometer [compute] instance_discovery_method = libvirt_metadata [keystone_authtoken] www_authenticate_uri = http://CONTROLLER.VIP:5000 auth_url = http://CONTROLLER.VIP:5000 memcached_servers = LIST-OF-WORKING-MEMCACHED-SERVERS auth_type = password project_domain_name = default user_domain_name = default project_name = service username = ceilometer password = WORKING_PASSWORD [service_credentials] auth_type = password auth_url = http://CONTROLLER.VIP:5000/v3 memcached_servers = LIST-OF-WORKING-MEMCACHED-SERVERS project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = WORKING_PASSWORD interface = internalURL region_name = RegionOne [oslo_messaging_notifications] driver = messagingv2 transport_url = rabbit://WORKING-TRANSPORT-URL polling.yaml: --- sources: - name: some_pollsters interval: 300 meters: - cpu - cpu_l3_cache - memory.usage - network.incoming.bytes - network.incoming.packets - network.outgoing.bytes - network.outgoing.packets - disk.device.read.bytes - disk.device.read.requests - disk.device.write.bytes - disk.device.write.requests - hardware.cpu.util - hardware.cpu.user - hardware.cpu.nice - hardware.cpu.system - hardware.cpu.idle - hardware.cpu.wait - hardware.cpu.kernel - hardware.cpu.interrupt - hardware.memory.used - hardware.memory.total - hardware.memory.buffer - hardware.memory.cached - hardware.memory.swap.avail - hardware.memory.swap.total - hardware.system_stats.io.outgoing.blocks - hardware.system_stats.io.incoming.blocks - hardware.network.ip.incoming.datagrams - hardware.network.ip.outgoing.datagrams nova.conf: [DEFAULT] log_dir = /var/log/nova lock_path = /var/lock/nova state_path = /var/lib/nova instance_usage_audit_period = hour compute_monitors = cpu.virt_driver,numa_mem_bw.virt_driver reserved_host_memory_mb = 2048 instance_usage_audit = True resume_guests_state_on_host_boot = true my_ip = COMPUTE.HOST.IP.ADDR report_interval = 30 transport_url = rabbit://WORKING-TRANSPORT-URL 
[api] [api_database] [barbican] [cache] expiration_time = 600 backend = oslo_cache.memcache_pool backend_argument = memcached_expire_time:660 enabled = true memcache_servers = LIST-OF-WORKING-MEMCACHED-SERVERS [cinder] catalog_info = volumev3::internalURL [compute] [conductor] [console] [consoleauth] [cors] [cyborg] [database] connection = mysql+pymysql://WORKING-CONNECTION-STRING connection_recycle_time = 280 max_pool_size = 5 max_retries = -1 [devices] [ephemeral_storage_encryption] [filter_scheduler] [glance] api_servers = http://CONTROLLER.VIP:9292 [guestfs] [healthcheck] [hyperv] [image_cache] [ironic] [key_manager] [keystone] [keystone_authtoken] www_authenticate_uri = http://CONTROLLER.VIP:5000 auth_url = http://CONTROLLER.VIP:5000 region_name = RegionOne memcached_servers = LIST-OF-WORKING-MEMCACHED-SERVERS auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = WORKING-PASSWORD [libvirt] live_migration_scheme = ssh live_migration_permit_post_copy = true disk_cachemodes="network=writeback,block=writeback" images_type = rbd images_rbd_pool = vms images_rbd_ceph_conf = /etc/ceph/ceph.conf rbd_user = cinder rbd_secret_uuid = SECRET-UUID [metrics] [mks] [neutron] auth_url = http://CONTROLLER.VIP:5000 region_name = RegionOne memcached_servers = LIST-OF-WORKING-MEMCACHED-SERVERS auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = WORKING-PASSWORD [notifications] notify_on_state_change = vm_and_task_state [oslo_concurrency] lock_path = /var/lib/nova/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] driver = messagingv2 [oslo_messaging_rabbit] amqp_auto_delete = false rabbit_ha_queues = true [oslo_middleware] [oslo_policy] [pci] [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://CONTROLLER.VIP:5000/v3 username = placement password = WORKING-PASSWORD [powervm] [privsep] [profiler] [quota] [rdp] [remote_debug] [scheduler] [serial_console] [service_user] [spice] [upgrade_levels] [vault] [vendordata_dynamic_auth] [vmware] [vnc] enabled = true server_listen = 0.0.0.0 server_proxyclient_address = $my_ip novncproxy_base_url = https://WORKING-URL:6080/vnc_auto.html [workarounds] [wsgi] [zvm] [cells] enable = False [os_region_name] openstack = -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaurmanpreet2620 at gmail.com Wed Jun 2 03:44:14 2021 From: kaurmanpreet2620 at gmail.com (manpreet kaur) Date: Wed, 2 Jun 2021 09:14:14 +0530 Subject: Tacker Auto Scale Support In-Reply-To: References: Message-ID: HI Fadi Badine, In the OpenStack Newton release, VNF auto-scaling and manual-scaling features were introduced in the tacker. Please check release notes for the newton release, https://docs.openstack.org/releasenotes/tacker/newton.html Feel free to revert in case of concerns. Thanks & Regards, Manpreet Kaur On Tue, Jun 1, 2021 at 10:11 PM Fadi Badine wrote: > Hello, > > > > I would like to know if VNF auto scaling is supported by Tacker and if so > in which release. > > I tried looking at release notes but couldn’t find anything. > > > > Thanks! 
> > > > Best regards, > > > > *Fadi Badine* > > *Product Manager* > > Office: +961 (1) 900 818 > > Mobile: +961 (3) 822 966 > > W: www.enghousenetworks.com > > E: fadi.badine at enghouse.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gthiemonge at redhat.com Wed Jun 2 08:18:38 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Wed, 2 Jun 2021 10:18:38 +0200 Subject: [Octavia] Weekly meeting moving to OFTC Message-ID: Hi team, The next Octavia team meeting (today at 16:00 UTC) will be on the OFTC network on #openstack-lbaas. Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Jun 2 10:14:34 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 2 Jun 2021 11:14:34 +0100 Subject: [kolla] IRC channel -> OFTC Message-ID: Hi Koalas, As you may already know, Openstack IRC has moved IRC channels from Freenode [1] to OFTC. So from today onwards, our weekly team meetings and general project discussion will be on OFTC on the same channel (#openstack-kolla). Kindly register yourself on OFTC n/w[ 2] if you have not done so yet. Thanks, Mark [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html [2] https://www.oftc.net/Services/#register-your-account From marios at redhat.com Wed Jun 2 11:17:32 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 2 Jun 2021 14:17:32 +0300 Subject: [TripleO] Proposing ysandeep for tripleo-ci core Message-ID: Hello all Having discussed this with some members of the tripleo ci team (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: ysandeep) for core on the tripleo-ci repos (tripleo-ci, tripleo-quickstart and tripleo-quickstart-extras). Sandeep joined the team about 1.5 years ago and has from the start demonstrated his eagerness to learn and an excellent work ethic, having made many useful code submissions [1] and code reviews [2] to the CI repos and beyond. Thanks Sandeep and keep up the good work! Please reply to this mail with a +1 or -1 for objections in the usual manner. If there are no objections we can declare it official in a few days regards, marios [1] https://review.opendev.org/q/owner:sandeepyadav93 [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 From sshnaidm at redhat.com Wed Jun 2 11:20:34 2021 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Wed, 2 Jun 2021 14:20:34 +0300 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: +1! On Wed, Jun 2, 2021 at 2:19 PM Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. 
If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Wed Jun 2 11:28:29 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Wed, 2 Jun 2021 16:58:29 +0530 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: On Wed, Jun 2, 2021 at 4:55 PM Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > +1 -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssbarnea at redhat.com Wed Jun 2 12:14:23 2021 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Wed, 2 Jun 2021 12:14:23 +0000 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: +1 -- /zbr On 2 Jun 2021 at 12:17:32, Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Wed Jun 2 12:16:50 2021 From: mkopec at redhat.com (Martin Kopec) Date: Wed, 2 Jun 2021 14:16:50 +0200 Subject: [neutron][interop][refstack] New tests and capabilities to track in interop In-Reply-To: <6595086.PSTg7GmUaj@p1> References: <6595086.PSTg7GmUaj@p1> Message-ID: Hi Slawek, thanks for getting back to us and sharing new potential tests and capabilities from neutron-tempest-plugin. Let's first discuss tests which are in tempest directly please. 
We have done an analysis cross-checking the tests we have in our guidelines against the ones (API and non-admin ones) present in tempest at the tempest checkout we currently use; here are the results: https://etherpad.opendev.org/p/refstack-test-analysis There are 110 tempest.api.network tests which we don't have in any guideline yet. Could you please have a look at the list of the tests? Would it make sense to include them in a guideline? Would they extend any network capabilities we have in the OpenStack Powered Platform program, or would we need to create new one(s)? https://opendev.org/osf/interop/src/branch/master/next.json Thank you, On Mon, 24 May 2021 at 16:33, Slawek Kaplonski wrote: > Hi, > > On Monday, 26 April 2021 17:48:08 CEST, Martin Kopec wrote: > > Hi everyone, > > > > > > I would like to further discuss the topics we covered with the neutron team > > > during > > > the PTG [1]. > > > > > > * adding address_group API capability > > > It's tested by tests in neutron-tempest-plugin. First question is if tests > > > which are > > > not directly in tempest can be a part of a non-add-on marketing program? > > > It's possible to move them to tempest though, by the time we do so, could > > > they be > > > marked as advisory? > > > > > > * Shall we include QoS tempest tests since we don't know what share of > > > vendors > > > enable QoS? Could it be an add-on? > > > These tests are also in neutron-tempest-plugin, I assume we're talking about > > > neutron_tempest_plugin.api.test_qos tests. > > > If we want to include these tests, which program should they belong to? Do > > > we wanna > > > create a new one? > > > > > > [1] https://etherpad.opendev.org/p/neutron-xena-ptg > > > > > > Thanks, > > > -- > > > Martin Kopec > > > Senior Software Quality Engineer > > > Red Hat EMEA > > First of all, sorry that it took so long for me but I finally looked into > Neutron related tests and capabilities and I think we can possibly add few > things there: > > - For "networks-security-groups-CRUD" we can add "address_groups" API. It > is now supported by ML2 plugin [1]. In the neutron-tempest-plugin we just > have some scenario test [2] but we would probably need also API tests for > that, correct? > > - For networks-l3-CRUD we can optionally add port_forwarding API. This can > be added by service plugin [3] so it may not be enabled in all deployments. > But maybe there is some "optional feature" category in the RefStack, and if > so, this could be included there. Tests for that are in > neutron-tempest-plugin [4] and [5]. > > - There are also 2 other service plugins, which I think could be included > as "optional feature" in the RefStack, but IMO don't fit exactly in any of > the existing groups. Those are QoS [6] and Trunks [7]. Tests for both are > in the neutron-tempest-plugin as well: Qos: [8] and [9], Trunk [10], [11] > and [12]. > > Please let me know what do You think about it and if that would be ok and > if You want me to propose some patches with that or maybe You will propose > them.
> > [1] https://review.opendev.org/c/openstack/neutron-lib/+/741784 > > [2] https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/777833 > > [3] > https://github.com/openstack/neutron/blob/master/neutron/services/portforwarding/pf_plugin.py > > [4] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_port_forwardings.py > > [5] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_port_forwarding_negative.py > > [6] > https://github.com/openstack/neutron/blob/master/neutron/services/qos/qos_plugin.py > > [7] > https://github.com/openstack/neutron/blob/master/neutron/services/trunk/plugin.py > > [8] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_qos.py > > [9] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_qos_negative.py > > [10] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_trunk.py > > [11] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_trunk_details.py > > [12] > https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_trunk_negative.py > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Wed Jun 2 12:25:14 2021 From: james.slagle at gmail.com (James Slagle) Date: Wed, 2 Jun 2021 08:25:14 -0400 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: On Wed, Jun 2, 2021 at 7:26 AM Marios Andreou wrote: > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > +1! -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Wed Jun 2 12:25:19 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 2 Jun 2021 14:25:19 +0200 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: <422d8bfc-e258-7eee-8d11-82ce15242485@redhat.com> What, Sandeep wasn't core already? +42 :) On 6/2/21 1:17 PM, Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From vikash.kumarprasad at siemens.com Wed Jun 2 12:34:15 2021 From: vikash.kumarprasad at siemens.com (Kumar Prasad, Vikash) Date: Wed, 2 Jun 2021 12:34:15 +0000 Subject: PNDriver on openstack VM is not able to communicate to ET200SP device connected to my physical router Message-ID: Dear Community, I have installed openstack on Centos 7 on Virutalbox VM. Now I am running an application PNDriver on openstack VM(VNF), which is supposed to communicate with a hardware ET200SP, which is connected to my physical home router. Now my PNDriver is not able to communicate to ET200SP hardware device. PNDriver minimum requirements to run on an interface is using ethtool it should list the speed, duplex, and port properties, but by default speed , duplex, and port values it is Showing "unknown" on openstack VM(VNF). I tried setting these values using ethtool and somehow I was able to set duplex, speed values but port value when I am trying to set it is throwing error. My question is how we can set the port value of openstack VM(Vnf) using ethtool? Second question is that suppose if we create a VM on virtualbox, then virtualbox provides a provision for bridged type on network setting, can I not configure openstack vm (vnf) like a virtualbox VM so that my vnf can also get broadcast messages broadcasted by the connected hardware devices in my home router? Thanks Vikash kumar prasad -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Wed Jun 2 12:40:14 2021 From: ykarel at redhat.com (Yatin Karel) Date: Wed, 2 Jun 2021 18:10:14 +0530 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: On Wed, Jun 2, 2021 at 4:53 PM Marios Andreou wrote: > > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > +1 > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > From aschultz at redhat.com Wed Jun 2 12:54:58 2021 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 2 Jun 2021 06:54:58 -0600 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: +1 On Wed, Jun 2, 2021 at 5:27 AM Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. 
Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Jun 2 13:10:10 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 2 Jun 2021 07:10:10 -0600 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: +1 WOOOT! On Wed, Jun 2, 2021 at 6:57 AM Alex Schultz wrote: > +1 > > On Wed, Jun 2, 2021 at 5:27 AM Marios Andreou wrote: > >> Hello all >> >> Having discussed this with some members of the tripleo ci team >> (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: >> ysandeep) for core on the tripleo-ci repos (tripleo-ci, >> tripleo-quickstart and tripleo-quickstart-extras). >> >> Sandeep joined the team about 1.5 years ago and has from the start >> demonstrated his eagerness to learn and an excellent work ethic, >> having made many useful code submissions [1] and code reviews [2] to >> the CI repos and beyond. Thanks Sandeep and keep up the good work! >> >> Please reply to this mail with a +1 or -1 for objections in the usual >> manner. If there are no objections we can declare it official in a few >> days >> >> regards, marios >> >> [1] https://review.opendev.org/q/owner:sandeepyadav93 >> [2] >> https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshewale at redhat.com Wed Jun 2 14:06:17 2021 From: bshewale at redhat.com (Bhagyashri Shewale) Date: Wed, 2 Jun 2021 19:36:17 +0530 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: +1 :) Thanks and Regards Bhagyashri Shewale On Wed, Jun 2, 2021 at 4:48 PM Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From moreira.belmiro.email.lists at gmail.com Wed Jun 2 14:12:57 2021 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Wed, 2 Jun 2021 16:12:57 +0200 Subject: AW: Customization of nova-scheduler In-Reply-To: <0fbc1e49a3f87aadc82fa12a53454bc76a3dae4a.camel@redhat.com> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <0fbc1e49a3f87aadc82fa12a53454bc76a3dae4a.camel@redhat.com> Message-ID: Hi Sean, maybe this is the time to bring up again the discussion regarding preemptible instances support in Nova. Preemptible/Spot instances are available in all of the major public clouds to allow a better resource utilization. OpenStack private clouds suffer exactly from the same issue. There was a lot of work done in this area during the last 3 years. Most of the work is summarized by the blogs/presentations/cern-gitlab that you mentioned. CERN has been running this code in production since 1 year ago. It allows us to use the spare capacity in the compute nodes dedicated for specific services to run batch workloads. I heard that "ARDC Nectar Research Cloud" is also running it. I believe the work that was done is an excellent PoC. Also, to me this looks like it should be a Nova feature. Having an external project to support this functionality it's a huge overhead. cheers, Belmiro On Tue, Jun 1, 2021 at 11:03 PM Sean Mooney wrote: > On Mon, 2021-05-31 at 17:21 +0100, Stephen Finucane wrote: > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > > Hello Stephen, > > > > > > I am a student from Germany who is currently working on his bachelor > thesis. My job is to build a cloud solution for my university with > Openstack. The functionality should include the prioritization of users. So > that you can imagine exactly how the whole thing should work, I would like > to give you an example. > > > > > > Two cases should be solved! > > > > > > Case 1: A user A with a low priority uses a VM from Openstack with > half performance of the available host. Then user B comes in with a high > priority and needs the full performance of the host for his VM. When > creating the VM of user B, the VM of user A should be deleted because there > is not enough compute power for user B. The VM of user B is successfully > created. > > > > > > Case 2: A user A with a low priority uses a VM with half the > performance of the available host, then user B comes in with a high > priority and needs half of the performance of the host for his VM. When > creating the VM of user B, user A should not be deleted, since enough > computing power is available for both users. > > > > one thing to keep in mind is that end users are not allow to know the > capstity of the cloud in terms of number of host, the resouces on a host or > what > host there vm is placeed on. so as a user the conceph of "a low priority > uses a VM from Openstack with half performance of the available host" is not > something that you can express arctecurally in nova. > flavor define the size of vms in absolute term i.e. 4GB of ram not relitve > "50% of the host". > we have a 3 laryer schuldeing prcoess that start with a query to the > placment service for a set of quantitative resouce class and qualitative > traits. 
> that produces a set fo allcoation candiate against a serise of host that > could fit the instance, we then filter those host useing python filters > wich are boolean fucntion that either pass the host or reject it finally > after filtering we weight the remaining hosts and selecet one to boot the > vm. > > once you have completed a steph in this processs you can nolonger go to a > previous step and you can never readd a host afteer it has been elimiated by > placemnt or a filter to be considered again. as a result if you get the > end of the avaiable hosts and there are none that can fix your vm we cannot > delete a vm and start again without redoing all the work and possible > facing with concurrent api requests. > this is why this is a hard problem with out an external service that can > rebalance exiting workloads and free up capsity. > > > > > > These cases should work for unlimited users. In order to optimize the > whole thing, I would like to write a function that precisely calculates all > performance components to determine whether enough resources are available > for the VM of the high priority user. > > > > What you're describing is commonly referred to as "preemptible" or "spot" > > instances. This topic has a long, complicated history in nova and has > yet to be > > implemented. Searching for "preemptible instances openstack" should > yield you > > lots of discussion on the topic along with a few proof-of-concept > approaches > > using external services or out-of-tree modifications to nova. > > > > > I’m new to Openstack, but I’ve already implemented cloud projects with > Microsoft Azure and have solid programming skills. Can you give me a hint > where and how I can start? > > > > As hinted above, this is likely to be a very difficult project given the > fraught > > history of the idea. I don't want to dissuade you from this work but you > should > > be aware of what you're getting into from the start. If you're serious > about > > pursuing this, I suggest you first do some research on prior art. As > noted > > above, there is lots of information on the internet about this. With this > > research done, you'll need to decide whether this is something you want > to > > approach within nova itself, via out-of-tree extensions or via a third > party > > project. If you're opting for integration with nova, then you'll need to > think > > long and hard about how you would design such a system and start working > on a > > spec (a design document) outlining your proposed solution. Details on > how to > > write a spec are discussed at [1]. The only extension points nova offers > today > > are scheduler filters and weighers so your options for an out-of-tree > extension > > approach will be limited. A third party project will arguably be the > easiest > > approach but you will be restricted to talking to nova's REST APIs which > may > > limit the design somewhat. This Blazar spec [2] could give you some > ideas on > > this approach (assuming it was never actually implemented, though it may > well > > have been). > > > > > My university gave me three compute hosts and one control host to > implement this solution for the bachelor thesis. I’m currently setting up > Openstack and all the services on the control host all by myself to > understand all the functionality (sorry for not using Packstack) 😉. All my > hosts have CentOS 7 and the minimum deployment which I configure is Train. > > > > > > My idea is to work with nova schedulers, because they seem to be > interesting for my case. 
I've found a whole infrastructure description of > the provisioning of an instance in Openstack > https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > > > > > > The nova scheduler > https://docs.openstack.org/operations-guide/ops-customize-compute.html is > the first component, where it is possible to implement functions via Python > and the Compute API > https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail > to check for active VMs and probably delete them if needed before a > successful request for an instantiation can be made. > > > > > > What do you guys think about it? Does it seem like a good starting > point for you or is it the wrong approach? > > > > This could potentially work, but I suspect there will be serious > performance > > implications with this, particularly at scale. Scheduler filters are > > historically used for simple things like "find me a group of hosts that > have > > this metadata attribute I set on my image". Making API calls sounds like > > something that would take significant time and therefore slow down the > schedule > > process. You'd also have to decide what your heuristic for deciding > which VM(s) > > to delete would be, since there's nothing obvious in nova that you could > use. > > You could use something as simple as filter extra specs or something as > > complicated as an external service. > yes implementing preemption in the scheduler as filet was disccused in > the passed and discounted for the performance implication stephen hinted at. > in tree we currentlyt do not allow filter to make any api or db queires. > that approach also will not work toady since you would have to rexecute the > query to the placment service after deleting an instance when you run out > of capacity and restart the filtering which a filter cannot do as i noted > above. > > the most recent spec in this area was > https://review.opendev.org/c/openstack/nova-specs/+/438640 for the > integrated approch and > https://review.opendev.org/c/openstack/nova-specs/+/554212/12 which > proposed adding a pending state for use with a standalone service > > https://gitlab.cern.ch/ttsiouts/ReaperServicePrototype > > ther are a number of presentation on this form cern/stackhapc > https://www.stackhpc.com/scientific-sig-at-the-dublin-ptg.html > > http://openstack-in-production.blogspot.com/2018/02/maximizing-resource-utilization-with.html > > https://openlab.cern/sites/openlab.web.cern.ch/files/2018-07/Containers_on_Baremetal_and_Preemptible_VMs_at_CERN_and_SKA.pdf > > https://indico.cern.ch/event/739089/sessions/282073/attachments/1689073/2717151/ASDF_preemptible.pdf > > > the current state is rebuilding from cell0 is not support but the pending > state was never added and the reaper service was not upstream. > > work in this are has now move the blazar project as stphen noted in [2] > > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > but is dont think it has made much progress. > https://review.opendev.org/q/topic:%22preemptibles%22+(status:open%20OR%20status:merged) > > nova previously had a pluggable scheduler that would have allowed you to > reimplent the scudler entirely from scratch but we removed that > capability in the last year or two. at this point the only viable approach > that will not take multiple upstream cycles to this is really to use an > external service. > > > > > This should be lots to get you started. 
Once again, do make sure you're > aware of > > what you're getting yourself into before you start. This could get > complicated > > very quickly :) > > yes anything other then adding the pending state to nova will be very > complex due to placement interaction. > you would really need to implement a fallback query mechanism in the > scudler iteself. > anything after the call to placement is already too late. you might be > able to reuse consumer types to make some allocation > preemtiblae and have a prefilter decide if an allocation should be a > normal nova consumer or premtable consumer based on > a flavor extra spec. > https://docs.openstack.org/placement/train/specs/train/approved/2005473-support-consumer-types.html > this would still require the pending state and an external reaper service > to free the capsity to be clean but its a possible direction. > > > > > > Cheers, > > Stephen > > > > > I'm very happy to have found you!!! > > > > > > Thank you really much for your time! > > > > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > > [2] > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > > > > Best regards > > > Levon > > > > > > -----Ursprüngliche Nachricht----- > > > Von: Stephen Finucane > > > Gesendet: Montag, 31. Mai 2021 12:34 > > > An: Levon Melikbekjan ; > openstack at lists.openstack.org > > > Betreff: Re: Customization of nova-scheduler > > > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > > Hello Openstack team, > > > > > > > > is it possible to customize the nova-scheduler via Python? If yes, > how? > > > > > > Yes, you can provide your own filters and weighers. This is documented > at [1]. > > > > > > Hope this helps, > > > Stephen > > > > > > [1] > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > > > > > > > > > > > Best regards > > > > Levon > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Wed Jun 2 14:40:36 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Wed, 2 Jun 2021 14:40:36 +0000 Subject: [interop] something strange In-Reply-To: References: Message-ID: OK. Let’s discuss it on the next week Interop call. I will postpone updating wiki page yet that points to these guidelines. Thanks, Arkady From: Martin Kopec Sent: Tuesday, June 1, 2021 4:39 PM To: Kanevsky, Arkady Cc: openstack-discuss; Goutham Pacha Ravi; Ghanshyam Mann; Vida Haririan Subject: Re: [interop] something strange [EXTERNAL EMAIL] Hi Arkady, I had to revert it (see the latest comment or https://review.opendev.org/c/osf/interop/+/792883 [review.opendev.org]) as it caused troubles with the refstack server - it wasn't able to retrieve the guidelines. Reason for revert: refstack server gives 404 on the guidelines: https://refstack.openstack.org/#/guidelines [refstack.openstack.org] .. seems like https://review.opendev.org/c/osf/refstack/+/790940 [review.opendev.org] didn't handle the update of the guidelines location everywhere - I suspect that some changes in refstack-ui are needed as well, ah I'm sorry for inconvenience, On Tue, 1 Jun 2021 at 23:24, Kanevsky, Arkady > wrote: Team, Once we merged https://review.opendev.org/c/osf/interop/+/786116 [review.opendev.org] I expect that all old guidelines will move into directory “previous”. I just sync my master to latest and still see old guidelines on top level directory. Any idea why? 
Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 2 15:04:16 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 02 Jun 2021 16:04:16 +0100 Subject: AW: Customization of nova-scheduler In-Reply-To: References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <0fbc1e49a3f87aadc82fa12a53454bc76a3dae4a.camel@redhat.com> Message-ID: On Wed, 2021-06-02 at 16:12 +0200, Belmiro Moreira wrote: > Hi Sean, > > maybe this is the time to bring up again the discussion regarding > preemptible instances support in Nova. Maybe; realistically I'm not sure we have the capacity to do the detailed design required this cycle, but we could discuss it with an aim to having something ready for next cycle. I still think this is a valuable capability, which is partly why I brought this topic up with gibi this morning: http://eavesdrop.openstack.org/irclogs/%23openstack-nova/latest.log.html#t2021-06-02T10:26:24 His reply is here: http://eavesdrop.openstack.org/irclogs/%23openstack-nova/latest.log.html#t2021-06-02T12:00:03 I was exploring the question of whether the soon-to-be-introduced consumer types impact the design in any way. If unified limits were aware of consumer types and we had a placement:consumer_type=preemptible extra spec, for example, and we enhanced nova to use that, we could address some of the awkwardness in the current design where you have to have two projects to do quota properly. Effectively I think unified limits + consumer types should probably be a prerequisite. We might want to revive the pending state as well, although we now have rebuild from cell0 I believe, so that may not be required. If there is interest in this, perhaps we should explore a subteam/popup team to pursue this again? > > Preemptible/Spot instances are available in all of the major public clouds > to allow a better resource utilization. OpenStack private clouds suffer > exactly from the same issue. > > There was a lot of work done in this area during the last 3 years. > > Most of the work is summarized by the blogs/presentations/cern-gitlab that > you mentioned. > > CERN has been running this code in production since 1 year ago. It allows > us to use the spare capacity in the compute nodes dedicated for specific > services to run batch workloads. Yep, I see utility in it for providing extra cloud capacity for CI as well. > > I heard that "ARDC Nectar Research Cloud" is also running it. > > I believe the work that was done is an excellent PoC. Well, since CERN and Nectar are potentially running it already, is that not an endorsement of the external agent approach? :) > > Also, to me this looks like it should be a Nova feature. Having an external > project to support this functionality it's a huge overhead. So we have been debating adding a new agent to nova for a while that would be responsible for running some of the periodic healing type tasks. We were calling it nova-audit as a placeholder, but it would basically do things like archiving deleted rows, healing allocations, etc.
The other logical approach would be to incorporate it into the nova conductor, but I'm still not sold that it should be in the nova tree. I'm not against that either, but perhaps a better approach would be to create a separate repo that is a deliverable of nova, based on the PoC code, and incubate it there. I'm not really convinced that an external process is a huge overhead, but having to maintain the project, release it, etc. probably is. With that said, I have always been a fan of the idea of having a common agent on a node that ran multiple services, e.g. a way to deploy nova-api, nova-conductor and nova-scheduler as a single binary to reduce the number of services you need to manage, but I think that is a separate topic. > > cheers, > > Belmiro > > > On Tue, Jun 1, 2021 at 11:03 PM Sean Mooney wrote: > > > On Mon, 2021-05-31 at 17:21 +0100, Stephen Finucane wrote: > > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > > > Hello Stephen, > > > > > > > > I am a student from Germany who is currently working on his bachelor > > thesis. My job is to build a cloud solution for my university with > > Openstack. The functionality should include the prioritization of users. So > > that you can imagine exactly how the whole thing should work, I would like > > to give you an example. > > > > > > > > Two cases should be solved! > > > > > > > > Case 1: A user A with a low priority uses a VM from Openstack with > > half performance of the available host. Then user B comes in with a high > > priority and needs the full performance of the host for his VM. When > > creating the VM of user B, the VM of user A should be deleted because there > > is not enough compute power for user B. The VM of user B is successfully > > created. > > > > > > > > Case 2: A user A with a low priority uses a VM with half the > > performance of the available host, then user B comes in with a high > > priority and needs half of the performance of the host for his VM. When > > creating the VM of user B, user A should not be deleted, since enough > > computing power is available for both users. > > > > > > one thing to keep in mind is that end users are not allow to know the > > capstity of the cloud in terms of number of host, the resouces on a host or > > what > > host there vm is placeed on. so as a user the conceph of "a low priority > > uses a VM from Openstack with half performance of the available host" is not > > something that you can express arctecurally in nova. > > flavor define the size of vms in absolute term i.e. 4GB of ram not relitve > > "50% of the host". > > we have a 3 laryer schuldeing prcoess that start with a query to the > > placment service for a set of quantitative resouce class and qualitative > > traits. > > that produces a set fo allcoation candiate against a serise of host that > > could fit the instance, we then filter those host useing python filters > > wich are boolean fucntion that either pass the host or reject it finally > > after filtering we weight the remaining hosts and selecet one to boot the > > vm. > > > > once you have completed a steph in this processs you can nolonger go to a > > previous step and you can never readd a host afteer it has been elimiated by > > placemnt or a filter to be considered again. as a result if you get the > > end of the avaiable hosts and there are none that can fix your vm we cannot > > delete a vm and start again without redoing all the work and possible > > facing with concurrent api requests. > > this is why this is a hard problem with out an external service that can > > rebalance exiting workloads and free up capsity.
> > this is why this is a hard problem with out an external service that can > > rebalance exiting workloads and free up capsity. > > > > > > > > > > These cases should work for unlimited users. In order to optimize the > > whole thing, I would like to write a function that precisely calculates all > > performance components to determine whether enough resources are available > > for the VM of the high priority user. > > > > > > What you're describing is commonly referred to as "preemptible" or "spot" > > > instances. This topic has a long, complicated history in nova and has > > yet to be > > > implemented. Searching for "preemptible instances openstack" should > > yield you > > > lots of discussion on the topic along with a few proof-of-concept > > approaches > > > using external services or out-of-tree modifications to nova. > > > > > > > I’m new to Openstack, but I’ve already implemented cloud projects with > > Microsoft Azure and have solid programming skills. Can you give me a hint > > where and how I can start? > > > > > > As hinted above, this is likely to be a very difficult project given the > > fraught > > > history of the idea. I don't want to dissuade you from this work but you > > should > > > be aware of what you're getting into from the start. If you're serious > > about > > > pursuing this, I suggest you first do some research on prior art. As > > noted > > > above, there is lots of information on the internet about this. With this > > > research done, you'll need to decide whether this is something you want > > to > > > approach within nova itself, via out-of-tree extensions or via a third > > party > > > project. If you're opting for integration with nova, then you'll need to > > think > > > long and hard about how you would design such a system and start working > > on a > > > spec (a design document) outlining your proposed solution. Details on > > how to > > > write a spec are discussed at [1]. The only extension points nova offers > > today > > > are scheduler filters and weighers so your options for an out-of-tree > > extension > > > approach will be limited. A third party project will arguably be the > > easiest > > > approach but you will be restricted to talking to nova's REST APIs which > > may > > > limit the design somewhat. This Blazar spec [2] could give you some > > ideas on > > > this approach (assuming it was never actually implemented, though it may > > well > > > have been). > > > > > > > My university gave me three compute hosts and one control host to > > implement this solution for the bachelor thesis. I’m currently setting up > > Openstack and all the services on the control host all by myself to > > understand all the functionality (sorry for not using Packstack) 😉. All my > > hosts have CentOS 7 and the minimum deployment which I configure is Train. > > > > > > > > My idea is to work with nova schedulers, because they seem to be > > interesting for my case. I've found a whole infrastructure description of > > the provisioning of an instance in Openstack > > https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. 
> > > > > > > > > > The nova scheduler > > https://docs.openstack.org/operations-guide/ops-customize-compute.html is > > the first component, where it is possible to implement functions via Python > > and the Compute API > > https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail > > to check for active VMs and probably delete them if needed before a > > successful request for an instantiation can be made. > > > > > > > > What do you guys think about it? Does it seem like a good starting > > point for you or is it the wrong approach? > > > > > > This could potentially work, but I suspect there will be serious > > performance > > > implications with this, particularly at scale. Scheduler filters are > > > historically used for simple things like "find me a group of hosts that > > have > > > this metadata attribute I set on my image". Making API calls sounds like > > > something that would take significant time and therefore slow down the > > schedule > > > process. You'd also have to decide what your heuristic for deciding > > which VM(s) > > > to delete would be, since there's nothing obvious in nova that you could > > use. > > > You could use something as simple as filter extra specs or something as > > > complicated as an external service. > > yes implementing preemption in the scheduler as filet was disccused in > > the passed and discounted for the performance implication stephen hinted at. > > in tree we currentlyt do not allow filter to make any api or db queires. > > that approach also will not work toady since you would have to rexecute the > > query to the placment service after deleting an instance when you run out > > of capacity and restart the filtering which a filter cannot do as i noted > > above. > > > > the most recent spec in this area was > > https://review.opendev.org/c/openstack/nova-specs/+/438640 for the > > integrated approch and > > https://review.opendev.org/c/openstack/nova-specs/+/554212/12 which > > proposed adding a pending state for use with a standalone service > > > > https://gitlab.cern.ch/ttsiouts/ReaperServicePrototype > > > > ther are a number of presentation on this form cern/stackhapc > > https://www.stackhpc.com/scientific-sig-at-the-dublin-ptg.html > > > > http://openstack-in-production.blogspot.com/2018/02/maximizing-resource-utilization-with.html > > > > https://openlab.cern/sites/openlab.web.cern.ch/files/2018-07/Containers_on_Baremetal_and_Preemptible_VMs_at_CERN_and_SKA.pdf > > > > https://indico.cern.ch/event/739089/sessions/282073/attachments/1689073/2717151/ASDF_preemptible.pdf > > > > > > the current state is rebuilding from cell0 is not support but the pending > > state was never added and the reaper service was not upstream. > > > > work in this are has now move the blazar project as stphen noted in [2] > > > > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > but is dont think it has made much progress. > > https://review.opendev.org/q/topic:%22preemptibles%22+(status:open%20OR%20status:merged) > > > > nova previously had a pluggable scheduler that would have allowed you to > > reimplent the scudler entirely from scratch but we removed that > > capability in the last year or two. at this point the only viable approach > > that will not take multiple upstream cycles to this is really to use an > > external service. > > > > > > > > This should be lots to get you started. 
Once again, do make sure you're > > aware of > > > what you're getting yourself into before you start. This could get > > complicated > > > very quickly :) > > > > yes anything other then adding the pending state to nova will be very > > complex due to placement interaction. > > you would really need to implement a fallback query mechanism in the > > scudler iteself. > > anything after the call to placement is already too late. you might be > > able to reuse consumer types to make some allocation > > preemtiblae and have a prefilter decide if an allocation should be a > > normal nova consumer or premtable consumer based on > > a flavor extra spec. > > https://docs.openstack.org/placement/train/specs/train/approved/2005473-support-consumer-types.html > > this would still require the pending state and an external reaper service > > to free the capsity to be clean but its a possible direction. > > > > > > > > > > Cheers, > > > Stephen > > > > > > > I'm very happy to have found you!!! > > > > > > > > Thank you really much for your time! > > > > > > > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > > > [2] > > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > > > > > > Best regards > > > > Levon > > > > > > > > -----Ursprüngliche Nachricht----- > > > > Von: Stephen Finucane > > > > Gesendet: Montag, 31. Mai 2021 12:34 > > > > An: Levon Melikbekjan ; > > openstack at lists.openstack.org > > > > Betreff: Re: Customization of nova-scheduler > > > > > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > > > Hello Openstack team, > > > > > > > > > > is it possible to customize the nova-scheduler via Python? If yes, > > how? > > > > > > > > Yes, you can provide your own filters and weighers. This is documented > > at [1]. > > > > > > > > Hope this helps, > > > > Stephen > > > > > > > > [1] > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > > > > > > > > > > > > > > Best regards > > > > > Levon > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From patryk.jakuszew at gmail.com Wed Jun 2 15:08:37 2021 From: patryk.jakuszew at gmail.com (Patryk Jakuszew) Date: Wed, 2 Jun 2021 17:08:37 +0200 Subject: [nova] Proper way to regenerate request_specs of existing instances? In-Reply-To: References: Message-ID: On Tue, 1 Jun 2021 at 23:14, Sean Mooney wrote: > this has come up often enough that we __might__ (im stressing might since im not sure we really want to do this) > consider adding a nova manage command to do this. > > e.g. nova-mange instance flavor-regenerate and nova-mange instance image-regenerate > > those command woudl just recrate the embeded flavor and image metadta without moving the vm or otherwise restarting it. > you would then have to hard reboot it or migrate it sepereatlly. > > im not convicned this is a capablity we should provide to operators in tree however via nova-manage. > > with my downstream hat on im not sure how supportable it woudl for example since like nova reset-state it woudl be > very easy to render vms unbootable in there current localthouh if a tenatn did a hard reboot and cause all kinds of stange issues > that are hard to debug an fix. I have the same thoughts - initially I wanted to figure out whether such feature could be added to nova-manage toolset, but I'm not sure it would be a welcome contribution due to the risks it creates. 
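For what it's worth, the kind of thing I had been sketching locally looks roughly like this - untested, the helper name is mine, and it assumes a controller node with /etc/nova/nova.conf (including API database access), so please treat it as a rough illustration rather than a working tool:

    from nova import config, context, objects

    config.parse_args([], default_config_files=['/etc/nova/nova.conf'])
    objects.register_all()

    ctxt = context.get_admin_context()

    def refresh_embedded_flavor(instance_uuid):
        instance = objects.Instance.get_by_uuid(ctxt, instance_uuid,
                                                expected_attrs=['flavor'])
        # Look up the *current* flavor definition and overwrite the stale
        # copy embedded in the instance's request spec.
        current = objects.Flavor.get_by_flavor_id(ctxt,
                                                  instance.flavor.flavorid)
        spec = objects.RequestSpec.get_by_instance_uuid(ctxt, instance_uuid)
        spec.flavor = current
        spec.save()

Even with something like that, the concerns you describe still apply: nothing is re-validated against the new extra specs until the instance is hard rebooted or migrated, which is exactly where the hard-to-debug failures would show up.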
*Maybe* it would help to add some warnings around it and add an obligatory '--yes-i-really-really-mean-it' switch, but still - it may cause undesired long-term consequences if used improperly. On the other hand, other projects do have options that one can consider to be similiar in nature ('cinder-manage volume update_host' comes to mind), and I think nova-manage is considered to be a low-level utility that shouldn't be used in day-to-day operations anyway... From jmlineb at sandia.gov Wed Jun 2 15:31:23 2021 From: jmlineb at sandia.gov (Linebarger, John) Date: Wed, 2 Jun 2021 15:31:23 +0000 Subject: Is the server Action Log immutable? Message-ID: Hello! Is the server Action Log absolutely immutable? Meaning, if you make a mistake in handling a server (VM) and it shows up in the Action Log, is there any way to remove that entry? I understand that the Action Log is kept in a database but am searching in vain for an API call that will allow such entries to be modified or deleted as opposed to merely displayed. What workarounds might exist? Thanks! Enjoy! John M. Linebarger, PhD, MBA Principal Member of Technical Staff Sandia National Laboratories (Office) 505-845-8282 -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 2 17:10:03 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 02 Jun 2021 18:10:03 +0100 Subject: Is the server Action Log immutable? In-Reply-To: References: Message-ID: <7cbf6a8cba49ca8b7653ccc7d4e6511f42c18ba3.camel@redhat.com> On Wed, 2021-06-02 at 15:31 +0000, Linebarger, John wrote: > Hello! Is the server Action Log absolutely immutable? Meaning, if you make a mistake in handling a server (VM) and it shows up in the Action Log, is there any way to remove that entry? I understand that the Action Log is kept in a database but am searching in vain for an API call that will allow such entries to be modified or deleted as opposed to merely displayed. What workarounds might exist? > it is intended to be immutable yes as a form of audit log. it is provided by the instance action api https://docs.openstack.org/api-ref/compute/#servers-actions-servers-os-instance-actions > Thanks! Enjoy! > > John M. Linebarger, PhD, MBA > Principal Member of Technical Staff > Sandia National Laboratories > (Office) 505-845-8282 From rlandy at redhat.com Wed Jun 2 21:19:57 2021 From: rlandy at redhat.com (Ronelle Landy) Date: Wed, 2 Jun 2021 17:19:57 -0400 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: absolutely +1 On Wed, Jun 2, 2021 at 10:08 AM Bhagyashri Shewale wrote: > +1 :) > > Thanks and Regards > Bhagyashri Shewale > > On Wed, Jun 2, 2021 at 4:48 PM Marios Andreou wrote: > >> Hello all >> >> Having discussed this with some members of the tripleo ci team >> (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: >> ysandeep) for core on the tripleo-ci repos (tripleo-ci, >> tripleo-quickstart and tripleo-quickstart-extras). >> >> Sandeep joined the team about 1.5 years ago and has from the start >> demonstrated his eagerness to learn and an excellent work ethic, >> having made many useful code submissions [1] and code reviews [2] to >> the CI repos and beyond. Thanks Sandeep and keep up the good work! >> >> Please reply to this mail with a +1 or -1 for objections in the usual >> manner. 
If there are no objections we can declare it official in a few >> days >> >> regards, marios >> >> [1] https://review.opendev.org/q/owner:sandeepyadav93 >> [2] >> https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dvd at redhat.com Wed Jun 2 22:24:39 2021 From: dvd at redhat.com (David Vallee Delisle) Date: Wed, 2 Jun 2021 18:24:39 -0400 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: +1 indeed DVD On Wed, Jun 2, 2021 at 5:26 PM Ronelle Landy wrote: > absolutely +1 > > On Wed, Jun 2, 2021 at 10:08 AM Bhagyashri Shewale > wrote: > >> +1 :) >> >> Thanks and Regards >> Bhagyashri Shewale >> >> On Wed, Jun 2, 2021 at 4:48 PM Marios Andreou wrote: >> >>> Hello all >>> >>> Having discussed this with some members of the tripleo ci team >>> (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: >>> ysandeep) for core on the tripleo-ci repos (tripleo-ci, >>> tripleo-quickstart and tripleo-quickstart-extras). >>> >>> Sandeep joined the team about 1.5 years ago and has from the start >>> demonstrated his eagerness to learn and an excellent work ethic, >>> having made many useful code submissions [1] and code reviews [2] to >>> the CI repos and beyond. Thanks Sandeep and keep up the good work! >>> >>> Please reply to this mail with a +1 or -1 for objections in the usual >>> manner. If there are no objections we can declare it official in a few >>> days >>> >>> regards, marios >>> >>> [1] https://review.opendev.org/q/owner:sandeepyadav93 >>> [2] >>> https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From atikoo at bloomberg.net Wed Jun 2 22:39:54 2021 From: atikoo at bloomberg.net (Ajay Tikoo (BLOOMBERG/ 120 PARK)) Date: Wed, 2 Jun 2021 22:39:54 -0000 Subject: =?UTF-8?B?W29wc10gcmFiYml0bXEgcXVldWVzIGZvciBub3ZhIHZlcnNpb25lZCBub3RpZmljYXRpbw==?= =?UTF-8?B?bnMgcXVldWVzIGtlZXAgZmlsbGluZyB1cA==?= Message-ID: <60B808BA00D0068401D80001_0_3025859@msclnypmsgsv04> I am not sure if this is the right channel/format to post this question, so my apologies in advance if this is not the right place. We are using Openstack Rocky. Watcher needs versioned notifications to be enabled. However after enabling versioned notifications, the queues for versioned_notifications (info and error) keep filling up Based on the updates the the Watchers cluster data model, it appears that Watcher is consuming messages, but they still linger in these queues. So with nova versioned notifications disabled, Watcher is unable to update the cluster data model (between rebuild intervals), and with them enabled, it keeps filling up the MQ queues. What is the best way to resolve this? Thank you, Ajay Tikoo -------------- next part -------------- An HTML attachment was scrubbed... URL: From forums at mossakowski.ch Wed Jun 2 22:05:52 2021 From: forums at mossakowski.ch (forums at mossakowski.ch) Date: Wed, 02 Jun 2021 22:05:52 +0000 Subject: [Neutron] sriov network setup for victoria - clarification needed In-Reply-To: References: Message-ID: Muchas gracias Alonso para tu ayuda! 
I've commented out the decorator line, new exception popped out, I've updated my gist: https://gist.github.com/8e6272cbe7748b2c5210fab291360e0b BR, Piotr Mossakowski Sent from ProtonMail mobile \-------- Original Message -------- On 31 May 2021, 18:08, Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > > > > Hello Piotr: > > > > > Maybe you should update the pyroute2 library, but this is a blind shot. > > > > > What I recommend you do is to find the error you have when retrieving the interface VFs. In the same compute node, use this method \[1\] but remove the decorator \[2\]. Then, in a root shell, run python again: > > >>> from neutron.privileged.agent.linux import ip\_lib > >>> ip\_lib.get\_link\_vfs('ens2f0', '') > > > > > That will execute the pyroute2 code without the privsep decorator. You'll see what error is returning the method. > > > > > Regards. > > > > > > \[1\][https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip\_lib.py\#L396-L410][https_github.com_openstack_neutron_blob_5d4f5d42d0a8c7ee157912cb29cae0e4deff984b_neutron_privileged_agent_linux_ip_lib.py_L396-L410] > > \[2\][https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip\_lib.py\#L395][https_github.com_openstack_neutron_blob_5d4f5d42d0a8c7ee157912cb29cae0e4deff984b_neutron_privileged_agent_linux_ip_lib.py_L395] > > > > > > > > On Mon, May 31, 2021 at 5:50 PM <[forums at mossakowski.ch][forums_mossakowski.ch]> wrote: > > > > Hello, > > > > > > I have two victoria environments: > > > > > > 1) a working one, standard setup with separate dedicated interface for sriov (pt0 and pt1) > > > > > > 2) a broken one, where I'm trying to reuse one of already used interfaces (ens2f0 or ens2f1) for sriov. ens2f0 is used for several VLANs (mgmt and storage) and ens2f1 is a neutron external interface which I bridged for VLAN tenant networks. On both I have enabled 63 VFs, it's a standard intetl 10Gb x540 adapter. > > > > > > > > > > > > On broken environment, when I'm trying to boot a VM with sriov port that I created before, I see this error shown on below gist: > > > > > > https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b > > > > > > > > > > > > I'm investigating this for couple days now but I'm out of ideas so I'd like to ask for your support. Is this possible to achieve what I'm trying to do on 2nd environment? To use PF as normal interface and use its VFs for sriov-agent at the same time? > > > > > > > > > > > > Regards, > > > > > > Piotr Mossakowski > > [https_github.com_openstack_neutron_blob_5d4f5d42d0a8c7ee157912cb29cae0e4deff984b_neutron_privileged_agent_linux_ip_lib.py_L396-L410]: https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L396-L410 [https_github.com_openstack_neutron_blob_5d4f5d42d0a8c7ee157912cb29cae0e4deff984b_neutron_privileged_agent_linux_ip_lib.py_L395]: https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L395 [forums_mossakowski.ch]: mailto:forums at mossakowski.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: publickey - EmailAddress(s=forums at mossakowski.ch) - 0xDC035524.asc Type: application/pgp-keys Size: 648 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 294 bytes Desc: OpenPGP digital signature URL: From gmann at ghanshyammann.com Thu Jun 3 02:20:09 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 02 Jun 2021 21:20:09 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 3rd at 1500 UTC In-Reply-To: <179c5124664.d7d11855244381.7893037772801020341@ghanshyammann.com> References: <179c5124664.d7d11855244381.7893037772801020341@ghanshyammann.com> Message-ID: <179cfabbca3.10336c8f3100883.9067913380315024841@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on June 3rd at 1500 UTC in #openstack-tc IRC OFTC channel. -https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Xena cycle tracker status check ** https://etherpad.opendev.org/p/tc-xena-tracker * Migration from 'Freenode' to 'OFTC' (gmann) ** https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc ** TC resolution *** https://review.opendev.org/c/openstack/governance/+/793260 *OpenStack Newsletters ** https://etherpad.opendev.org/p/newsletter-openstack-news * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Mon, 31 May 2021 19:56:19 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > NOTE: FROM THIS WEEK ONWARDS, TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE) > > Technical Committee's next weekly meeting is scheduled for June 3rd at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, June 2nd, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > > -gmann > > From zaitcev at redhat.com Thu Jun 3 06:22:30 2021 From: zaitcev at redhat.com (Pete Zaitcev) Date: Thu, 3 Jun 2021 01:22:30 -0500 Subject: [Swift] Object replication failures on newly upgraded servers In-Reply-To: References: Message-ID: <20210603012230.65f2bc33@suzdal.zaitcev.lan> On Fri, 28 May 2021 16:58:10 +1200 Mark Kirkwood wrote: > Examining the logs (/var/log/swift/object.log and /var/log/syslog) these > are not throwing up any red flags (i.e no failing rsyncs noted). You should be seeing tracebacks and "Error syncing partition", "Error syncing handoff partition", or "Exception in top-level replication loop". -- Pete From skaplons at redhat.com Thu Jun 3 07:10:44 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 03 Jun 2021 09:10:44 +0200 Subject: [neutron] Drivers meetning 04.06.2021 - agenda Message-ID: <12880716.CG0u9eRpRN@p1> Hi, We have one new RFE to discuss on the tomorrows drivers meeting: https://bugs.launchpad.net/neutron/+bug/1930200 - [RFE] Add support for Node-Local virtual IP[1] Please check it, ask any questions You will have regarding this proposal and see You on the meeting tomorrow. -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://bugs.launchpad.net/neutron/+bug/1930200 - [RFE] Add support for Node-Local virtual IP -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ralonsoh at redhat.com Thu Jun 3 07:12:10 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Thu, 3 Jun 2021 09:12:10 +0200 Subject: [Neutron] sriov network setup for victoria - clarification needed In-Reply-To: References: Message-ID: Hi Piotr: I think you are hitting [1]. As you said, each PF has 63 VFs configured. Your error looks very similar to this one reported. Try updating pyroute2 to version 0.6.2. That should contain the fix for this error. Regards. [1]https://github.com/svinota/pyroute2/issues/751 On Thu, Jun 3, 2021 at 12:06 AM wrote: > Muchas gracias Alonso para tu ayuda! > > > I've commented out the decorator line, new exception popped out, I've > updated my gist: > > https://gist.github.com/8e6272cbe7748b2c5210fab291360e0b > > > BR, > > Piotr Mossakowski > > Sent from ProtonMail mobile > > > -------- Original Message -------- > On 31 May 2021, 18:08, Rodolfo Alonso Hernandez < ralonsoh at redhat.com> > wrote: > > > Hello Piotr: > > Maybe you should update the pyroute2 library, but this is a blind shot. > > What I recommend you do is to find the error you have when retrieving the > interface VFs. In the same compute node, use this method [1] but remove the > decorator [2]. Then, in a root shell, run python again: > >>> from neutron.privileged.agent.linux import ip_lib > >>> ip_lib.get_link_vfs('ens2f0', '') > > That will execute the pyroute2 code without the privsep decorator. You'll > see what error is returning the method. > > Regards. > > [1] > https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L396-L410 > [2] > https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L395 > > > On Mon, May 31, 2021 at 5:50 PM wrote: > >> Hello, >> I have two victoria environments: >> 1) a working one, standard setup with separate dedicated interface for >> sriov (pt0 and pt1) >> 2) a broken one, where I'm trying to reuse one of already used interfaces >> (ens2f0 or ens2f1) for sriov. ens2f0 is used for several VLANs (mgmt and >> storage) and ens2f1 is a neutron external interface which I bridged for >> VLAN tenant networks. On both I have enabled 63 VFs, it's a standard intetl >> 10Gb x540 adapter. >> >> On broken environment, when I'm trying to boot a VM with sriov port that >> I created before, I see this error shown on below gist: >> https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b >> >> I'm investigating this for couple days now but I'm out of ideas so I'd >> like to ask for your support. Is this possible to achieve what I'm trying >> to do on 2nd environment? To use PF as normal interface and use its VFs for >> sriov-agent at the same time? >> >> Regards, >> Piotr Mossakowski >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Jun 3 08:05:35 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 3 Jun 2021 10:05:35 +0200 Subject: [largescale-sig] Next meeting: June 2, 15utc In-Reply-To: <33a1e2d5-88fe-826c-47b9-2b01f06163a7@openstack.org> References: <33a1e2d5-88fe-826c-47b9-2b01f06163a7@openstack.org> Message-ID: <941e3204-e7a1-abbc-1632-4d8c0dda91f5@openstack.org> We held our meeting yesterday. 
We agreed to do on June 10 a continuation of the "upgrades in large scale openstack infra" show on OpenInfra.Live. We also plan to do another episode around running openstack control plane on openstack, tentatively scheduled for July 15. Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-06-02-15.00.html Our next IRC meeting will be June 23, at 1500utc on #openstack-operators on OFTC. Regards, -- Thierry Carrez (ttx) From mark at stackhpc.com Thu Jun 3 08:24:53 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 3 Jun 2021 09:24:53 +0100 Subject: [kolla] [kolla-ansible] fluentd doesn't forward OpenStack logs to Elasticsearch In-Reply-To: References: Message-ID: On Sat, 29 May 2021 at 11:24, Bernd Bausch wrote: > > I might have found a bug in Kolla-Ansible (Victoria version) but don't know where to file it. Hi Bernd, you can file kolla-ansible bugs on Launchpad [1]. [1] https://bugs.launchpad.net/kolla-ansible/+filebug > > This is about central logging. In my installation, none of the interesting logs (Nova, Cinder, Neutron...) are sent to Elasticsearch. I confirmed that using tcpdump. > > I found that fluentd's config file /etc/kolla/fluentd/td-agent.conf tags these logs with "kolla.*". But later in the file, one finds filters like this: > > # Included from conf/filter/01-rewrite-0.14.conf.j2: > > @type rewrite_tag_filter > capitalize_regex_backreference yes > ... > > key programname > pattern ^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$ > tag openstack_python > > > If I understand this right, this basically re-tags all nova logs with "openstack_python". > > The same config file has an output rule at the very end. I think the intention is to make this a catch-all rule (or "match anything else"): > > # Included from conf/output/01-es.conf.j2: > > @type copy > > @type elasticsearch > host 192.168.122.209 > port 9200 > scheme http > > etc. > > Unfortunately, the openstack_python tag doesn't match *.**, since it contains no dot. I fixed this with . Now I receive all logs, but I am not sure if this is the right way to fix it. I have seen log aggregation working, although possibly haven't tried it with Victoria. I can't see any obviously relevant changes, so please file a bug. > > The error, if it is one, is in https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/roles/common/templates/conf/output/01-es.conf.j2. > > If you want me to file a bug, please let me know how. > > Bernd. > > From soumplis at admin.grnet.gr Thu Jun 3 08:47:50 2021 From: soumplis at admin.grnet.gr (Alexandros Soumplis) Date: Thu, 3 Jun 2021 11:47:50 +0300 Subject: [kolla] [kolla-ansible] Magnum UI Message-ID: Hi all, Before submitting a bug against launchpad I would like to ask if anyone else can confirm this issue. I deploy Magnum on Victoria release using the ubuntu binary containers and I do not have the UI installed. Changing to the source binaries, the UI is installed and working as expected. Is this a configerror, a bug or a feature maybe :) Thank you, a. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3620 bytes Desc: S/MIME Cryptographic Signature URL: From mark at stackhpc.com Thu Jun 3 08:53:28 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 3 Jun 2021 09:53:28 +0100 Subject: [kolla] [kolla-ansible] fluentd doesn't forward OpenStack logs to Elasticsearch In-Reply-To: References: Message-ID: On Thu, 3 Jun 2021 at 09:24, Mark Goddard wrote: > > On Sat, 29 May 2021 at 11:24, Bernd Bausch wrote: > > > > I might have found a bug in Kolla-Ansible (Victoria version) but don't know where to file it. > > Hi Bernd, you can file kolla-ansible bugs on Launchpad [1]. > > [1] https://bugs.launchpad.net/kolla-ansible/+filebug > > > > > This is about central logging. In my installation, none of the interesting logs (Nova, Cinder, Neutron...) are sent to Elasticsearch. I confirmed that using tcpdump. > > > > I found that fluentd's config file /etc/kolla/fluentd/td-agent.conf tags these logs with "kolla.*". But later in the file, one finds filters like this: > > > > # Included from conf/filter/01-rewrite-0.14.conf.j2: > > > > @type rewrite_tag_filter > > capitalize_regex_backreference yes > > ... > > > > key programname > > pattern ^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$ > > tag openstack_python > > > > > > If I understand this right, this basically re-tags all nova logs with "openstack_python". > > > > The same config file has an output rule at the very end. I think the intention is to make this a catch-all rule (or "match anything else"): > > > > # Included from conf/output/01-es.conf.j2: > > > > @type copy > > > > @type elasticsearch > > host 192.168.122.209 > > port 9200 > > scheme http > > > > etc. > > > > Unfortunately, the openstack_python tag doesn't match *.**, since it contains no dot. I fixed this with . Now I receive all logs, but I am not sure if this is the right way to fix it. > > I have seen log aggregation working, although possibly haven't tried > it with Victoria. I can't see any obviously relevant changes, so > please file a bug. I tried this out on a recent (CentOS) Victoria deployment. I couldn't reproduce the issue. My test case was nova-scheduler. I restarted it and verified that shutdown/startup logs appear in Elastic. Could you verify whether that case also works for you, and if so, provide a broken case. > > > > > The error, if it is one, is in https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/roles/common/templates/conf/output/01-es.conf.j2. > > > > If you want me to file a bug, please let me know how. > > > > Bernd. > > > > From mark at stackhpc.com Thu Jun 3 09:03:51 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 3 Jun 2021 10:03:51 +0100 Subject: [kolla] [kolla-ansible] fluentd doesn't forward OpenStack logs to Elasticsearch In-Reply-To: References: Message-ID: On Thu, 3 Jun 2021 at 09:53, Mark Goddard wrote: > > On Thu, 3 Jun 2021 at 09:24, Mark Goddard wrote: > > > > On Sat, 29 May 2021 at 11:24, Bernd Bausch wrote: > > > > > > I might have found a bug in Kolla-Ansible (Victoria version) but don't know where to file it. > > > > Hi Bernd, you can file kolla-ansible bugs on Launchpad [1]. > > > > [1] https://bugs.launchpad.net/kolla-ansible/+filebug > > > > > > > > This is about central logging. In my installation, none of the interesting logs (Nova, Cinder, Neutron...) are sent to Elasticsearch. I confirmed that using tcpdump. 
> > > > > > I found that fluentd's config file /etc/kolla/fluentd/td-agent.conf tags these logs with "kolla.*". But later in the file, one finds filters like this: > > > > > > # Included from conf/filter/01-rewrite-0.14.conf.j2: > > > > > > @type rewrite_tag_filter > > > capitalize_regex_backreference yes > > > ... > > > > > > key programname > > > pattern ^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$ > > > tag openstack_python > > > > > > > > > If I understand this right, this basically re-tags all nova logs with "openstack_python". > > > > > > The same config file has an output rule at the very end. I think the intention is to make this a catch-all rule (or "match anything else"): > > > > > > # Included from conf/output/01-es.conf.j2: > > > > > > @type copy > > > > > > @type elasticsearch > > > host 192.168.122.209 > > > port 9200 > > > scheme http > > > > > > etc. > > > > > > Unfortunately, the openstack_python tag doesn't match *.**, since it contains no dot. I fixed this with . Now I receive all logs, but I am not sure if this is the right way to fix it. > > > > I have seen log aggregation working, although possibly haven't tried > > it with Victoria. I can't see any obviously relevant changes, so > > please file a bug. > > I tried this out on a recent (CentOS) Victoria deployment. I couldn't > reproduce the issue. My test case was nova-scheduler. I restarted it > and verified that shutdown/startup logs appear in Elastic. Could you > verify whether that case also works for you, and if so, provide a > broken case. Could you provide your version of fluentd/td-agent? docker exec -it fluentd td-agent --version I have 1.11.2, although we have just confirmed a broken case with 1.12.1. John Garbutt is planning to develop a patch based on your suggested fix. > > > > > > > > > The error, if it is one, is in https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/roles/common/templates/conf/output/01-es.conf.j2. > > > > > > If you want me to file a bug, please let me know how. > > > > > > Bernd. > > > > > > From malikobaidadil at gmail.com Thu Jun 3 12:01:35 2021 From: malikobaidadil at gmail.com (Malik Obaid) Date: Thu, 3 Jun 2021 17:01:35 +0500 Subject: [wallaby][nova] Change Time Zone Message-ID: Hi, I am using Openstack Wallaby release on Ubuntu 20.04. When I try to list openstack compute services the time zone in 'Updated At' column is in UTC. root at controller-khi01 ~(keystone)# openstack compute service list +----+----------------+------------------+----------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +----+----------------+------------------+----------+---------+-------+----------------------------+ | 4 | nova-conductor | controller-khi01 | internal | enabled | up | 2021-06-03T11:59:59.000000 | | 5 | nova-scheduler | controller-khi01 | internal | enabled | up | 2021-06-03T12:00:08.000000 | | 8 | nova-compute | kvm03-a1-khi01 | nova | enabled | up | 2021-06-03T12:00:02.000000 | | 9 | nova-compute | kvm01-a1-khi01 | nova | enabled | up | 2021-06-03T12:00:02.000000 | +----+----------------+------------------+----------+---------+-------+----------------------------+ I want to change it to some other time zone. Is there a way to do it? I would really appreciate any input in this regard. Thank you. Regards, Malik Obaid -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Thu Jun 3 12:57:46 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 3 Jun 2021 15:57:46 +0300 Subject: [TripleO] tripleo repos going Extended Maintenance stable/train OK? (not yet IMO) In-Reply-To: References: Message-ID: Hello all, as discussed at [1] the train branch across all tripleo repos will be moving to extended maintenance at the end of this week. Before we make the train-em tag I have pushed one last release on train with [2] and rebased [1] onto that. As discussed in [1] train is still an active branch for tripleo and we can and will continue to merge fixes there. It just means that we will no longer be making tagged releases for train branches. If you have any questions or concerns about any of this please reach out, regards, marios [1] https://review.opendev.org/c/openstack/releases/+/790778/2#message-e8ee1f6febb4780ccbb703bf378bcfc08776a49a [2] https://review.opendev.org/c/openstack/releases/+/794583 On Thu, May 13, 2021 at 3:47 PM Marios Andreou wrote: > > Hello TripleO o/ > > per [1] and the proposal at [2] the stable/train branch for all > tripleo repos [3] is going to transition to extended maintenance [4]. > > Once [2] merges, we can still merge things to stable/train but it > means we can no longer make official openstack tagged releases for > stable/train. > > TripleO is a trailing project so if we want to hold on this for a > while longer I think that is OK and that would also be my personal > preference. > > From a quick check just now e.g. tripleo-heat-templates @ [5] and at > current time there are 87 commits since last September which isn't a > tiny amount. So I don't think TripleO is ready to declare stable/train > as extended maintenance, but perhaps I am wrong, what do you think? > > Please comment here or directly at [2] if you prefer > > regards, marios > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022287.html > [2] https://review.opendev.org/c/openstack/releases/+/790778/2#message-e981f749aeca64ea971f4e697dd16ba5100ca4a4 > [3] https://releases.openstack.org/teams/tripleo.html#train > [4] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > [5] https://github.com/openstack/tripleo-heat-templates/compare/11.5.0...stable/train From smooney at redhat.com Thu Jun 3 13:02:03 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 03 Jun 2021 14:02:03 +0100 Subject: [wallaby][nova] Change Time Zone In-Reply-To: References: Message-ID: <9c5db04b47bfe0fe51d343121273847731dd0179.camel@redhat.com> On Thu, 2021-06-03 at 17:01 +0500, Malik Obaid wrote: > Hi, > > I am using Openstack Wallaby release on Ubuntu 20.04. > > When I try to list openstack compute services the time zone in 'Updated At' > column is in UTC. i suspect that that is based on the default timzone of the server where the conductor is running or perhaps the individual services. 
nova does not have any configuration for this and the updated_at time filed is generally provded by the NovaPersistentObject mixin https://github.com/openstack/nova/blob/master/nova/objects/base.py#L134-L149 for the compute nodes table its set by https://github.com/openstack/nova/blob/da57eebc9e1ab7e48d4c4ef6ec1eeba80d867d81/nova/db/sqlalchemy/api.py#L737-L738 and we are converting explitly to utc ltaher here https://github.com/openstack/nova/blob/da57eebc9e1ab7e48d4c4ef6ec1eeba80d867d81/nova/db/sqlalchemy/api.py#L302-L318 while i have not explictly found where the compute service record update_at is set i would guess it a deliberate design descisn to only store data information in utc format and leave it to the clinets to convert to local timezones if desired. i think that is proably the correct approch to take. although i guess you could confvert it at the api potentially though im not sure that would genrelaly be a good impromenbt to make > root at controller-khi01 ~(keystone)# openstack compute service list > +----+----------------+------------------+----------+---------+-------+----------------------------+ > > ID | Binary | Host | Zone | Status | State | > Updated At | > +----+----------------+------------------+----------+---------+-------+----------------------------+ > > 4 | nova-conductor | controller-khi01 | internal | enabled | up | > 2021-06-03T11:59:59.000000 | > > 5 | nova-scheduler | controller-khi01 | internal | enabled | up | > 2021-06-03T12:00:08.000000 | > > 8 | nova-compute | kvm03-a1-khi01 | nova | enabled | up | > 2021-06-03T12:00:02.000000 | > > 9 | nova-compute | kvm01-a1-khi01 | nova | enabled | up | > 2021-06-03T12:00:02.000000 | > +----+----------------+------------------+----------+---------+-------+----------------------------+ > > I want to change it to some other time zone. Is there a way to do it? not that i am aware no. > I would really appreciate any input in this regard. > > Thank you. > > Regards, > Malik Obaid From katonalala at gmail.com Thu Jun 3 13:05:43 2021 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 3 Jun 2021 15:05:43 +0200 Subject: [Neutron] sriov network setup for victoria - clarification needed In-Reply-To: References: Message-ID: Hi, 0.6.3 has another increase for the DEFAULT_RCVBUF: https://github.com/svinota/pyroute2/issues/813 Regards Lajos Katona (lajoskatona) Rodolfo Alonso Hernandez ezt írta (időpont: 2021. jún. 3., Cs, 9:16): > Hi Piotr: > > I think you are hitting [1]. As you said, each PF has 63 VFs configured. > Your error looks very similar to this one reported. > > Try updating pyroute2 to version 0.6.2. That should contain the fix for > this error. > > Regards. > > [1]https://github.com/svinota/pyroute2/issues/751 > > On Thu, Jun 3, 2021 at 12:06 AM wrote: > >> Muchas gracias Alonso para tu ayuda! >> >> >> I've commented out the decorator line, new exception popped out, I've >> updated my gist: >> >> https://gist.github.com/8e6272cbe7748b2c5210fab291360e0b >> >> >> BR, >> >> Piotr Mossakowski >> >> Sent from ProtonMail mobile >> >> >> -------- Original Message -------- >> On 31 May 2021, 18:08, Rodolfo Alonso Hernandez < ralonsoh at redhat.com> >> wrote: >> >> >> Hello Piotr: >> >> Maybe you should update the pyroute2 library, but this is a blind shot. >> >> What I recommend you do is to find the error you have when retrieving the >> interface VFs. In the same compute node, use this method [1] but remove the >> decorator [2]. 
Then, in a root shell, run python again: >> >>> from neutron.privileged.agent.linux import ip_lib >> >>> ip_lib.get_link_vfs('ens2f0', '') >> >> That will execute the pyroute2 code without the privsep decorator. You'll >> see what error is returning the method. >> >> Regards. >> >> [1] >> https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L396-L410 >> [2] >> https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L395 >> >> >> On Mon, May 31, 2021 at 5:50 PM wrote: >> >>> Hello, >>> I have two victoria environments: >>> 1) a working one, standard setup with separate dedicated interface for >>> sriov (pt0 and pt1) >>> 2) a broken one, where I'm trying to reuse one of already used >>> interfaces (ens2f0 or ens2f1) for sriov. ens2f0 is used for several VLANs >>> (mgmt and storage) and ens2f1 is a neutron external interface which I >>> bridged for VLAN tenant networks. On both I have enabled 63 VFs, it's a >>> standard intetl 10Gb x540 adapter. >>> >>> On broken environment, when I'm trying to boot a VM with sriov port that >>> I created before, I see this error shown on below gist: >>> https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b >>> >>> I'm investigating this for couple days now but I'm out of ideas so I'd >>> like to ask for your support. Is this possible to achieve what I'm trying >>> to do on 2nd environment? To use PF as normal interface and use its VFs for >>> sriov-agent at the same time? >>> >>> Regards, >>> Piotr Mossakowski >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Jun 3 14:51:09 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 3 Jun 2021 17:51:09 +0300 Subject: [TripleO] tripleo repos going Extended Maintenance stable/train OK? (not yet IMO) In-Reply-To: <2F244667-94A4-4048-A6B1-96D5DC692B39@redhat.com> References: <2F244667-94A4-4048-A6B1-96D5DC692B39@redhat.com> Message-ID: On Thursday, June 3, 2021, Jesse Pretorius wrote: > > > > On 3 Jun 2021, at 13:57, Marios Andreou wrote: > > > > Hello all, > > > > as discussed at [1] the train branch across all tripleo repos will be > > moving to extended maintenance at the end of this week. Before we make > > the train-em tag I have pushed one last release on train with [2] and > > rebased [1] onto that. > > > > As discussed in [1] train is still an active branch for tripleo and we > > can and will continue to merge fixes there. It just means that we will > > no longer be making tagged releases for train branches. > > > > If you have any questions or concerns about any of this please reach out, > > I think this would be problematic for us. We’re still actively submitting > changes to stable/train for tripleo and will likely be for some time. > > yes agree but this does not stop us from continuing to merge whatever we need across train tripleo repos. It only affects tagged releases. > I don’t know what the effect is to us downstream for not being able to tag > upstream. I think the RDO folks (who do the packaging) would need to > respond to that for us to make a suitable final call. As far as I know there is no direct correlation between upstream git repo tags and downstream packaging. I believe the import point used is a particular commit hash for a given repo. I'll reach out to rhos delivery folks and point at this email though to confirm. 
If there is a problem I am not sure how we can resolve it as it sounds like this is a mandatory move for us per the discussion at https://review.opendev.org/c/openstack/releases/+/790778/2#message-e8ee1f6febb4780ccbb703bf378bcfc08776a49a But let's see what packaging folks think thanks for the suggestion regards, marios -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu Jun 3 15:40:48 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 3 Jun 2021 16:40:48 +0100 Subject: [kolla] Reorganization of kolla-ansible documentation In-Reply-To: References: Message-ID: On Mon, 24 May 2021 at 09:30, Mark Goddard wrote: > > On Fri, 14 May 2021 at 21:27, Klemen Pogacnik wrote: > > > > Hello! > > Hi Klemen, > > Thank you for your evaluation of the documentation. I think a lot of > it aligns with the discussions we had in the Kolla Kalls [1] some time > ago. I'll add notes inline. > > It's worth looking at other similar projects for inspiration, e.g. OSA > [2] and TripleO [3]. > > [1] https://etherpad.opendev.org/p/kollakall > [2] https://docs.openstack.org/openstack-ansible/latest/ > [3] https://docs.openstack.org/tripleo-docs/latest/ > > Mark > > > > > I promised to prepare my view as a user of kolla-ansible on its documentation. In my opinion the division between admin guides and user guides is artificial, as the user of kolla-ansible is actually the cloud administrator. > > Absolutely agreed. > > > > > Maybe it would be good to think about reorganizing the structure of documentation. Many good chapters are already written, they only have to be positioned in the right place to be found more easily. > > Agreed also. We now have redirect support [4] in place to keep old > links working, assuming only whole pages are moved. > > [4] doc/source/_extra/.htaccess > > > > > So here is my proposal of kolla-ansible doc's structure: > > > > 1. Introduction > > 1.1. mission > > 1.2. benefits > > 1.3. support matrix > > How about a 'getting started' page, similar to [5]? > > [5] https://docs.openstack.org/kayobe/latest/getting-started.html > > > 2. Architecture > > 2.1. basic architecture > > 2.2. HA architecture > > 2.3. network architecture > > 2.4. storage architecture > > 3. Workflows > > 3.1. preparing the surroundings (networking, docker registry, ...) > > 3.2. preparing servers (packages installation) > > Installation of kolla-ansible should go here. > > > 3.3. configuration (of kolla-ansible and description of basic logic for configuration of Openstack modules) > > 3.4. 1st day procedures (bootstrap, deploy, destroy) > > 3.5. 2nd day procedures (reconfigure, upgrade, add, remove nodes ...) > > 3.6. multiple regions > > 3.7. multiple cloud > > 3.8. security > > 3.9. troubleshooting (how to check, if cloud works, what to do, if it doesn't) > > > 4. Use Cases > > 4.1. all-in-one > > 4.2. basic vm multinode > > 4.3. some production use cases > > What do these pages contain? Something like the current quickstart? > > > 5. Reference guide > > Mostly the same structure as already is. Except it would be desirable that description of each module has: > > - purpose of the module > > - configuration of the module > > - how to use it with links to module docs > > - basic troubleshooting > > 6. Contributor guide > > > > > > The documentation also needs figures, pictures, diagrams to be more understandable. So at least in the first chapters some of them shall be added. 
> > This is a common request from users. We have lots of reference > documentation, but need more high level architectural information and > diagrams. Unfortunately this type of documentation is quite hard to > create, but we would welcome improvements. > > > > > > > I'm also thinking about convergence of documentation of kayobe, kolla and kolla-ansible projects. It's true that there's no strict connection between kayobe and other two and kolla containers can be used without kolla-ansible playbooks. But the real benefit the user can get is to use all three projects together. But let's leave that for the second phase. > > > > I'm not so sure about converging them into one set of docs. They are > each fairly separate tools. We added a short section [6] to each > covering related projects. Perhaps we should make this a dedicated > page, and provide more information about the Kolla ecosystem? > > [6] https://docs.openstack.org/kolla/latest/#related-projects > > > > > > > So please comment on this proposal. Do you think it's going in the right direction? If yes, I can refine it. Following up on this, we discussed it in this week's IRC meeting [1]. We agreed that a good first step would be a simple refactor to remove the artificial user/admin split. Some more challenging additions and reworking could follow that, starting with the intro/architecture sections. [1] http://eavesdrop.openstack.org/meetings/kolla/2021/kolla.2021-06-02-15.02.log.html#l-136 > > > > From dangerzonen at gmail.com Thu Jun 3 01:55:05 2021 From: dangerzonen at gmail.com (dangerzone ar) Date: Thu, 3 Jun 2021 09:55:05 +0800 Subject: [Tacker] Tacker Not able to create VIM Message-ID: Hi all, I just deployed Tacker and tried to add my 1st VIM but I’m getting errors as per attached file. Pls advise how to resolve this problem. Thanks 1. *Error: *Failed to register VIM: {"error": {"message": "( http://192.168.0.121:5000/v3/tokens): The resource could not be found.", "code": 404, "title": "Not Found"}} 1. *Error as below**à** WARNING keystonemiddleware.auth_token [-] Authorization failed for token: InvalidToken* *{"vim": {"vim_project": {"name": "admin", "project_domain_name": "Default"}, "description": "d", "is_default": false, "auth_cred": {"username": "admin", "user_domain_name": "Default", "password": "c81e0c7a842f40c6"}, "auth_url": "**http://192.168.0.121:5000/v3 **", "type": "openstack", "name": "d"}} process_request /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43* *2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-] Authorization failed for token: InvalidToken* *2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - - [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720* Below is my tacker.conf [DEFAULT] auth_strategy = keystone policy_file = /etc/tacker/policy.json debug = True use_syslog = False bind_host = 192.168.0.121 bind_port = 9890 service_plugins = nfvo,vnfm state_path = /var/lib/tacker [nfvo] vim_drivers = openstack [keystone_authtoken] region_name = RegionOne auth_type = password project_domain_name = Default user_domain_name = Default username = tacker password = password auth_url = http://192.168.0.121:35357 auth_uri = http://192.168.0.121:5000 [agent] root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf [database] connection = mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: er1.jpg Type: image/jpeg Size: 61973 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: er2.jpg Type: image/jpeg Size: 214252 bytes Desc: not available URL: From jpretori at redhat.com Thu Jun 3 13:02:57 2021 From: jpretori at redhat.com (Jesse Pretorius) Date: Thu, 3 Jun 2021 14:02:57 +0100 Subject: [rhos-dev] [TripleO] tripleo repos going Extended Maintenance stable/train OK? (not yet IMO) In-Reply-To: References: Message-ID: <2F244667-94A4-4048-A6B1-96D5DC692B39@redhat.com> > On 3 Jun 2021, at 13:57, Marios Andreou wrote: > > Hello all, > > as discussed at [1] the train branch across all tripleo repos will be > moving to extended maintenance at the end of this week. Before we make > the train-em tag I have pushed one last release on train with [2] and > rebased [1] onto that. > > As discussed in [1] train is still an active branch for tripleo and we > can and will continue to merge fixes there. It just means that we will > no longer be making tagged releases for train branches. > > If you have any questions or concerns about any of this please reach out, I think this would be problematic for us. We’re still actively submitting changes to stable/train for tripleo and will likely be for some time. I don’t know what the effect is to us downstream for not being able to tag upstream. I think the RDO folks (who do the packaging) would need to respond to that for us to make a suitable final call. From hjensas at redhat.com Thu Jun 3 18:07:07 2021 From: hjensas at redhat.com (Harald Jensas) Date: Thu, 3 Jun 2021 20:07:07 +0200 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: <731c69a4-bbe8-fd5b-22b4-c8c52686a021@redhat.com> On 6/2/21 1:17 PM, Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. If there are no objections we can declare it official in a few > days > +1 From victoria at vmartinezdelacruz.com Thu Jun 3 18:19:43 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Thu, 3 Jun 2021 20:19:43 +0200 Subject: [Manila ] Upcoming Bug Squash starting June 7th through June 11th 2021 In-Reply-To: References: Message-ID: Hi all, Just dropping you a line to remind you about this event and also to let you know that we doubled down and we will have two calls for this. * Monday June 7th, 2021 at 15:00 UTC * in the aforementioned Jitsi bridge [1] to go over the list of bugs we have for this bug squash [2] And join us again on * Thursday June 10th, 2021 at 15:00 UTC * (instead of the weekly meeting) to do a live review session with some of the core reviewers. The goal of this second session is to show live how a bug review is done: what we look at when doing a bug review, coding best practices, commit messages, release notes, and more. 
We will use same Jitsi bridge we use for the session on Monday [1] We will remind you about this again on our IRC channel (#openstack-manila in OFTC) when we are closer to the event :) Everybody is invited to join us. Cheers, V [1] https://meetpad.opendev.org/ManilaX-ReleaseBugSquash [2] https://ethercalc.openstack.org/i3vwocrkk776 On Wed, May 26, 2021 at 9:14 PM Vida Haririan wrote: > Hi everyone, > > > As discussed, a new Bug Squash event is around the corner! > > > The event will be held from 7th to 11th June, 2021, providing an extended > contribution window. There will be a synchronous call held simultaneously on > IRC, Thursday June 10th, 2021 at 15:00 UTC and we will use this Jitsi > bridge [1]. > > > A list of selected bugs will be shared here [2]. Please feel free to add > any additional bugs you would like to address during the event. > > Thanks for your participation in advance. > > > Vida > > > [1] https://meetpad.opendev.org/ManilaX-ReleaseBugSquash > > [2] https://ethercalc.openstack.org/i3vwocrkk776 > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Jun 3 20:52:20 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 3 Jun 2021 22:52:20 +0200 Subject: [tc] Moving IRC meetings to project channels Message-ID: Hello, In the guidance to PTLs for the freenode to OFTC migration, there was this guideline: > The TC is asking that projects take advantage of this time of change to consider moving project meetings from the #openstack-meeting* channels to their project channel. I was surprised since it was the first time I heard about this suggested change. The project team guide [1] actually still states the following: > The OpenStack infrastructure team maintains a limited number of channels dedicated to meetings. While teams can hold meetings on their own team IRC channels, they are encouraged to use those common meeting channels to give their meeting some external exposure. The limited number of meeting channels encourages teams to spread their meetings around and reduce conflicts. Is there any background regarding this proposed change? Not that I am against it in any way: I have participated in meetings in both kinds of channels and haven't really seen any difference. Thanks, Pierre Riteau (priteau) [1] https://docs.openstack.org/project-team-guide/open-community.html#public-meetings-on-irc From gmann at ghanshyammann.com Thu Jun 3 22:29:36 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 03 Jun 2021 17:29:36 -0500 Subject: [tc] Moving IRC meetings to project channels In-Reply-To: References: Message-ID: <179d3ff053d.c7797314160734.5796180177007608281@ghanshyammann.com> ---- On Thu, 03 Jun 2021 15:52:20 -0500 Pierre Riteau wrote ---- > Hello, > > In the guidance to PTLs for the freenode to OFTC migration, there was > this guideline: > > > The TC is asking that projects take advantage of this time of change to consider moving project meetings from the #openstack-meeting* channels to their project channel. > > I was surprised since it was the first time I heard about this > suggested change. The project team guide [1] actually still states the > following: > > > The OpenStack infrastructure team maintains a limited number of channels dedicated to meetings. While teams can hold meetings on their own team IRC channels, they are encouraged to use those common meeting channels to give their meeting some external exposure. 
The limited number of meeting channels encourages teams to spread their meetings around and reduce conflicts. > > Is there any background regarding this proposed change? Not that I am > against it in any way: I have participated in meetings in both kinds > of channels and haven't really seen any difference. Idea behind this is to avoid confusion over which channel has which project meeting. There are multiple meeting channel #openstack-meeting-3, #openstack-meeting-4, #openstack-meeting-5, #openstack-meeting-alt, #openstack-meeting and sometime it is difficult to remember which channel has which project meeting until you go and check the project doc/wiki page or so. Having meeting in channel itself avoid such confusion. We have been doing this for QA, TC since many year and it work perfectly. But this is project side choice, TC is suggesting this option. I will make project-team-guide changes to add this suggestion. -gmann > > Thanks, > Pierre Riteau (priteau) > > [1] https://docs.openstack.org/project-team-guide/open-community.html#public-meetings-on-irc > > From gmann at ghanshyammann.com Thu Jun 3 22:39:23 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 03 Jun 2021 17:39:23 -0500 Subject: [all] CRITICAL: Upcoming changes to the OpenStack Community IRC this weekend In-Reply-To: <179c2bf0d45.e29da542226792.4648722316244189913@ghanshyammann.com> References: <179a9b02f78.112177f7423117.4125651508104406943@ghanshyammann.com> <179c2bf0d45.e29da542226792.4648722316244189913@ghanshyammann.com> Message-ID: <179d407fa6f.f6101b54160799.6570320596784902701@ghanshyammann.com> ---- On Mon, 31 May 2021 09:06:11 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Updates: > > As you might have seen in the Fungi email reply on service-discuss ML, all the bot and logging migration is complete now. > > * Now onwards every discussion or meeting now needs to be done on OFTC, not on Freenode. As you can see many projects PTL started sending email on their next meeting on OFTC, please do if you have not done yet. > > * I have started a new etherpad for tracking all the migration tasks (all action items we collected from Wed TC meeting.). Please plan the work needed from the project team side and mark the progress. > > - https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc Hello Everyone, There were two question in openstack-tc this morning which we discussed in today TC meeting and agreed on below points: 1. Backporting OFTC reference changes * Agreed to backport the changes as much as possible. * On keeping doc/source/contributor/contributing.rst on stable branches: ** We do not need to maintain this on stable as such, because master version of it can be referred from doc or top level CONTRIBUTING.rst ** Fungi will add the global redirect link to master/latest version in openstack-manual. Project does not need to do this explicitly. ** Project can remove doc/source/contributor/contributing.rst from stable branch as per their convenience 2. Topic change on Freenode channel * We decided to do this on June 11th and until then continue redirecting people from old channel to OFTC. -gmann > > -gmann > > > ---- On Wed, 26 May 2021 12:19:26 -0500 Ghanshyam Mann wrote ---- > > Greetings contributors & community members! > > > > With recent events, the Technical Committee held an emergency meeting today (Wednesday, May 26th, 2021) > > regarding Freenode IRC and what our decision would be [1]. 
Earlier in the week, the consensus amongst the TC > > was to gather more information from the individual projects, and make a decision from there[2]. With #rdo, > > #ubuntu, and #wikipedia having been hijacked, the consensus amongst the TC and the community members > > who were able to attend the meeting was to move away from Freenode as soon as possible. The TC agreed > > that this move away from Freenode needs to be a community-wide move to the same, new IRC network for > > all projects to avoid splintering of the community. As has been long-planned in the event of a contingency, we > > will be moving to OFTC. > > > > We recognize this is a contentious topic, and ultimately we seek to ensure community continuity before evolution > > to something beyond IRC, as many have expressed interest in doing via Mailing List discussions. At this point, we > > had to make a decision to solve the immediate problem in the simplest and most expedient way possible, so this is > > that announcement. We welcome continued discussion about future alternatives on the other threads. > > > > With this in mind, we suggest the following steps. > > > > Everyone: > > ======= > > 1. Do NOT change any channel topics to represent this change. This is likely to result in the channel being taken > > over by Freenode and will disrupt communications within our community. > > 2. Register your nicknames on OFTC [3][4] > > 3. Be *prepared* to join your channels on OFTC[4]. The OpenStack community channels have already been > > registered on OFTC and await you. > > 4. Continue to use Freenode for OpenStack discussions until the bots have been moved and the official cut-over > > takes place this coming weekend. We anticipate using OFTC starting Monday, May 31st. > > > > Projects/Project Leaders: > > ==================== > > 1. Projects should work to get a few volunteers to staff their project channels on Freenode, for the near future to help > > redirect people to OFTC. This should occur via private messages to avoid a ban. > > 2. Continue to hold project meetings on Freenode until the bots are enabled on OFTC. > > 3. Update project wikis/documentation with the new IRC network information. We ask that you consider referring to > > the central contributor guide[5]. > > 4. The TC is asking that projects take advantage of this time of change to consider moving project meetings from > > the #openstack-meeting* channels to their project channel. > > 5. Please avoid discussing the move to OFTC in Freenode channels as this may also trigger a takeover of the channel. > > > > We are working on getting our bots over to OFTC, and they will be moved over the weekend. Starting Monday May 31, > > the bots will be on OFTC. Communication regarding this migration will take place on OFTC[4] in #openstack-dev, and > > we're working on updating the contributor guide[5] to reflect this migration. > > > > Sincerely, > > > > The OpenStack TC and community leaders who came together to agree on a path forward. 
> > > > [1]: https://etherpad.opendev.org/p/openstack-irc > > [2]: https://etherpad.opendev.org/p/feedback-on-freenode > > [3]: https://www.oftc.net/Services/#register-your-account > > [4]: https://www.oftc.net/ > > [5]: https://docs.openstack.org/contributors/common/irc.html > > > > > > From fungi at yuggoth.org Thu Jun 3 23:09:46 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 3 Jun 2021 23:09:46 +0000 Subject: [tc] Moving IRC meetings to project channels In-Reply-To: <179d3ff053d.c7797314160734.5796180177007608281@ghanshyammann.com> References: <179d3ff053d.c7797314160734.5796180177007608281@ghanshyammann.com> Message-ID: <20210603230946.bwjmx5pnpz6zd5ig@yuggoth.org> On 2021-06-03 17:29:36 -0500 (-0500), Ghanshyam Mann wrote: [...] > The idea behind this is to avoid confusion over which channel has > which project meeting. There are multiple meeting channels > (#openstack-meeting-3, #openstack-meeting-4, #openstack-meeting-5, > #openstack-meeting-alt, #openstack-meeting) and sometimes it is > difficult to remember which channel has which project meeting > until you go and check the project doc/wiki page or so. > > Having the meeting in the project channel itself avoids such confusion. We have > been doing this for QA and the TC for many years and it works perfectly. [...] The idea behind having meetings in common channels is that it reduces the number of channels people need to join if they just want to lurk the team meetings but not necessarily be in the team channels, it avoids people distracting the meeting with unrelated in-channel banter or noise from notification bots about things like change uploads to Gerrit, and it slightly decreases the chances that too many meetings get scheduled into the same timeslots. I also participate in some projects which do it that way and some which have their meetings in-channel. For the most part, meetings for smaller teams without a lot of overlap with other projects and low volumes of normal discussion in their channels seem to be happy with in-channel meetings. Large teams with a bunch of tendrils to and from other projects and lots of crosstalk in their channel tend to prefer the option of a separate meeting channel. Also there are no -4 and -5 meeting channels any longer, since at least a year if not more; we're down to just the other three you listed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From vuk.gojnic at gmail.com Fri Jun 4 06:22:36 2021 From: vuk.gojnic at gmail.com (Vuk Gojnic) Date: Fri, 4 Jun 2021 08:22:36 +0200 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: I found where my issue was. After using different GRUBX64.efi and BOOTX64.efi binaries (this time from https://vault.centos.org/8.3.2011/BaseOS/x86_64/kickstart/EFI/BOOT/ instead of from the Ubuntu Bionic LiveCD) everything worked normally and the large initrd was successfully loaded. It seems that the EFI binaries taken from Ubuntu had that issue. My advice in such cases: check with another bootloader variant/version if the problem persists. Thanks! -Vuk On Mon, May 17, 2021 at 4:14 PM Dmitry Tantsur wrote: > > Hi, > > I'm not sure. We have never hit this problem with DIB-built images before. I know that TripleO uses an even larger image than one we publish on tarballs.o.o. 
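In case it is useful to anyone hitting the same symptom, here is a minimal sketch of the kind of swap described above. It assumes a UEFI PXE/GRUB setup where the bootloaders are served from /tftpboot; the download URL is the one mentioned above, but the exact file names (and their case) in that directory, the backup step, and the target paths are illustrative assumptions rather than the exact procedure used in this thread, so adjust them to your own deployment:

  #!/bin/bash
  # Illustrative only: fetch the CentOS 8.3 shim and GRUB binaries that were
  # used as replacements for the Ubuntu-sourced ones in this thread.
  BASE=https://vault.centos.org/8.3.2011/BaseOS/x86_64/kickstart/EFI/BOOT
  curl -fLO "$BASE/BOOTX64.EFI"
  curl -fLO "$BASE/grubx64.efi"

  # Keep backups of the current binaries so the change is easy to roll back,
  # then drop the new ones where the UEFI boot path expects them (the path is
  # an assumption; use your own tftp root or ESP image layout instead).
  cp -v /tftpboot/bootx64.efi /tftpboot/bootx64.efi.bak
  cp -v /tftpboot/grubx64.efi /tftpboot/grubx64.efi.bak
  cp -v BOOTX64.EFI /tftpboot/bootx64.efi
  cp -v grubx64.efi /tftpboot/grubx64.efi

After swapping the files it is worth re-testing the same node that failed before, since a large initrd that loads with one GRUB build but not another is exactly the kind of difference this isolates.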
From skaplons at redhat.com Fri Jun 4 06:41:48 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 04 Jun 2021 08:41:48 +0200 Subject: [neutron] Meetings channel changes Message-ID: <4434280.3qEIF5uYtV@p1> Hi, As we discussed at our last team meeting, I have just proposed a change to move our meetings from the openstack-meeting-* channels to the openstack-neutron channel [1]. Let's have today's drivers meeting still on the #openstack-meeting channel @OFTC, but starting next week all our meetings will take place on the #openstack-neutron channel. [1] https://review.opendev.org/c/opendev/irc-meetings/+/794711 -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/c/opendev/irc-meetings/+/794711 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From jpodivin at redhat.com Fri Jun 4 06:46:10 2021 From: jpodivin at redhat.com (Jiri Podivin) Date: Fri, 4 Jun 2021 08:46:10 +0200 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: <731c69a4-bbe8-fd5b-22b4-c8c52686a021@redhat.com> References: <731c69a4-bbe8-fd5b-22b4-c8c52686a021@redhat.com> Message-ID: +1 On Thu, Jun 3, 2021 at 8:14 PM Harald Jensas wrote: > On 6/2/21 1:17 PM, Marios Andreou wrote: > > Hello all > > > > Having discussed this with some members of the tripleo ci team > > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > > tripleo-quickstart and tripleo-quickstart-extras). > > > > Sandeep joined the team about 1.5 years ago and has from the start > > demonstrated his eagerness to learn and an excellent work ethic, > > having made many useful code submissions [1] and code reviews [2] to > > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > > > Please reply to this mail with a +1 or -1 for objections in the usual > > manner. If there are no objections we can declare it official in a few > > days > > > > +1 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gchamoul at redhat.com Fri Jun 4 08:18:37 2021 From: gchamoul at redhat.com (=?utf-8?B?R2HDq2w=?= Chamoulaud) Date: Fri, 4 Jun 2021 10:18:37 +0200 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: Message-ID: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac> Of course, a big +1! On 02/Jun/2021 14:17, Marios Andreou wrote: > Hello all > > Having discussed this with some members of the tripleo ci team > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > tripleo-quickstart and tripleo-quickstart-extras). > > Sandeep joined the team about 1.5 years ago and has from the start > demonstrated his eagerness to learn and an excellent work ethic, > having made many useful code submissions [1] and code reviews [2] to > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > Please reply to this mail with a +1 or -1 for objections in the usual > manner. 
If there are no objections we can declare it official in a few > days > > regards, marios > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > [2] https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > Best Regards, Gaël -- Gaël Chamoulaud - (He/Him/His) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From malikobaidadil at gmail.com Fri Jun 4 09:20:31 2021 From: malikobaidadil at gmail.com (Malik Obaid) Date: Fri, 4 Jun 2021 14:20:31 +0500 Subject: [wallaby][neutron][ovn] MTU in Neutron for Production Message-ID: Hi, I am using the OpenStack Wallaby release on Ubuntu 20.04. I am configuring OpenStack Neutron for production. While setting the MTU I am a bit confused between the 2 use cases. *Case 1* External (public) network mtu 1500 self service (tenant) network geneve mtu 9000 VLAN external network mtu 1500 *Case 2* External (public) network mtu 9000 self service (tenant) network geneve mtu 9000 VLAN external network mtu 9000 I just want to know which case would be better for production. I would really appreciate any input in this regard. Thank you. Regards, Malik Obaid -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.canonico at uniupo.it Fri Jun 4 10:50:15 2021 From: massimo.canonico at uniupo.it (Massimo Canonico) Date: Fri, 4 Jun 2021 12:50:15 +0200 Subject: libcloud Message-ID: <09f1daf4-83c3-0d5c-2a38-ad2379b35d37@uniupo.it> Hi, I'm new and I'm not sure if here is the right place to post the questions related to OpenStack and Libcloud. I've used for years the OpenStack provided by the Chameleon project and recently they changed the authentication procedure (they use a federated login). Since then, I'm having problems using my script with libcloud. This script was working with the legacy login: provider = get_driver(Provider.OPENSTACK) conn = provider(auth_username,auth_password,ex_force_auth_url=auth_url,         ex_force_auth_version='3.x_password',     ex_tenant_name=project_name,     ex_force_service_region=region_name,api_version='2.1') Now it is not working. If I take a look at the openrc file I can note this: export OS_AUTH_TYPE="v3oidcpassword" Maybe this is the problem. Any idea about how I can fix my script? Thanks, Massimo From pierre at stackhpc.com Fri Jun 4 11:02:17 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 4 Jun 2021 13:02:17 +0200 Subject: [blazar] IRC meeting moving to #openstack-blazar Message-ID: Hello, Following the latest recommendation from the TC, the bi-weekly IRC meeting of the Blazar project is moving to the #openstack-blazar channel on OFTC. This will be effective from the next meeting on June 17. Pierre Riteau (priteau) From mnaser at vexxhost.com Fri Jun 4 11:31:53 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 4 Jun 2021 07:31:53 -0400 Subject: libcloud In-Reply-To: <09f1daf4-83c3-0d5c-2a38-ad2379b35d37@uniupo.it> References: <09f1daf4-83c3-0d5c-2a38-ad2379b35d37@uniupo.it> Message-ID: I think you're having problems because libcloud might not support OIDC natively. On Fri, Jun 4, 2021 at 6:54 AM Massimo Canonico wrote: > Hi, > > I'm new and I'm not sure if here is the right place to post the > questions related to OpenStack and Libcloud. > > I've used for years the OpenStack provided by the Chameleon project and recently > they changed the authentication procedure (they use a federated login). 
> Since then, I'm having problems using my script with libcloud. > > This script was working with the legacy login: > > provider = get_driver(Provider.OPENSTACK) > conn = provider(auth_username,auth_password,ex_force_auth_url=auth_url, > > ex_force_auth_version='3.x_password', > > ex_tenant_name=project_name, > > ex_force_service_region=region_name,api_version='2.1') > > > Now it is not working. If I take a look at the openrc file I can note this: > > export OS_AUTH_TYPE="v3oidcpassword" > > Maybe this is the problem. > > Any idea about how I can fix my script? > > Thanks, > > Massimo > > > > -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Fri Jun 4 12:52:09 2021 From: bkslash at poczta.onet.pl (at) Date: Fri, 4 Jun 2021 14:52:09 +0200 Subject: [glance] How to limit access to particular store Message-ID: <49F175A2-A993-424B-97BF-F4EFB8129321@poczta.onet.pl> Hi, I have Glance with a multi-store config and I want one store (not the default) to be read-only for everyone except the cloud admin. How can I do it? Is there any way to limit the visibility of store names (which are visible, e.g. in the properties section of "openstack image show IMAGE_NAME" output)? Best regards Adam Tomas From rosmaita.fossdev at gmail.com Fri Jun 4 13:34:23 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 4 Jun 2021 09:34:23 -0400 Subject: [cinder] xena R-18 mid-cycle summary available Message-ID: In case you missed the Xena cinder R-18 virtual mid-cycle session earlier this week, I've posted a summary: https://wiki.openstack.org/wiki/CinderWallabyMidCycleSummary It includes a link to the recording if you want more context for any topic that interests you. We're planning to have another mid-cycle session the week of R-9, namely, on Wednesday 4 August 2021, 1400-1600 UTC.  
As always, you can > add topics to the planning etherpad: >   https://etherpad.opendev.org/p/cinder-xena-mid-cycles > > > cheers, > brian From rosmaita.fossdev at gmail.com Fri Jun 4 13:52:06 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 4 Jun 2021 09:52:06 -0400 Subject: [security-sig][cinder] propose vulnerability:managed tag for os-brick Message-ID: <746fb327-dcd8-d479-e0da-8facf400e780@gmail.com> I've posted a patch to add the 'vulnerability:managed' tag to the os-brick library: https://review.opendev.org/c/openstack/governance/+/794680 I just want to give a heads-up to the OpenStack Vulnerability Management Team, since this will impact the VMT, though hopefully not very much. The Cinder team was under the impression that the VMT was already managing private security bugs for os-brick. The issue may not have come up before because usually there's a driver + connector involved and the bug gets filed under cinder (which is already tagged vulnerability:managed). In any case, the cinder team discussed this at our recent midcycle meeting and decided that we appreciate the extra eyes and long-term perspective the VMT brings to the table, and we'd like to formalize a relationship between the VMT and the os-brick library. cheers, brian From bkslash at poczta.onet.pl Fri Jun 4 13:53:42 2021 From: bkslash at poczta.onet.pl (at) Date: Fri, 4 Jun 2021 15:53:42 +0200 Subject: [kolla-ansible] kolla-ansible destroy Message-ID: <476495C0-A42E-4B74-AF46-13FF814C974B@poczta.onet.pl> Hi, is kolla-ansible destroy "--tags" aware? What is the best way to remove all unwanted containers, configuration files, logs, etc. when you want to remove some service or move it to another node? Regards Adam Tomas From rosmaita.fossdev at gmail.com Fri Jun 4 14:02:32 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 4 Jun 2021 10:02:32 -0400 Subject: [cinder] reminder: xena spec freeze 25 June Message-ID: This is a reminder that all Cinder Specs for features to be implemented in Xena must be approved by Friday 25 June 2021 (23:59 UTC). We discussed several specs at the R-18 virtual midcycle meeting. Please take a look at the summary and take any appropriate action for your spec proposal: https://wiki.openstack.org/wiki/CinderXenaMidCycleSummary#Xena_Specs_Review Anyone with a spec proposal that wasn't discussed, and who needs more feedback than is currently on the Gerrit review, should reach out to the Cinder team for help by putting a topic on the weekly meeting agenda, asking in the OFTC #openstack-cinder channel, or via the openstack-discuss mailing list. cheers, brian From fungi at yuggoth.org Fri Jun 4 14:14:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 4 Jun 2021 14:14:35 +0000 Subject: [security-sig][cinder] propose vulnerability:managed tag for os-brick In-Reply-To: <746fb327-dcd8-d479-e0da-8facf400e780@gmail.com> References: <746fb327-dcd8-d479-e0da-8facf400e780@gmail.com> Message-ID: <20210604141435.da5x2lrmubrfbpqv@yuggoth.org> On 2021-06-04 09:52:06 -0400 (-0400), Brian Rosmaita wrote: [...] > I just want to give a heads-up to the OpenStack Vulnerability Management > Team, since this will impact the VMT, though hopefully not very much. [...] Thanks! We loosened up the requirements well over a year ago with https://review.opendev.org/678426 in hopes more projects would check whether their deliverables met the requirements and formally enlist our assistance, but so far there's been little uptake there. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Yong.Huang at Dell.com Fri Jun 4 00:45:55 2021 From: Yong.Huang at Dell.com (Huang, Yong) Date: Fri, 4 Jun 2021 00:45:55 +0000 Subject: [victoria][cinder ?] Dell Unity + Iscsi In-Reply-To: References: Message-ID: Hi Albert, Did you configure multipath? Could you attach the output of `multipath -ll` and the content of `/etc/multipath.conf`? Thanks Yong Huang -----Original Message----- From: Albert Shih Sent: Wednesday, June 2, 2021 2:45 AM To: openstack-discuss at lists.openstack.org Subject: [victoria][cinder ?] Dell Unity + Iscsi [EXTERNAL EMAIL] Hi everyone, I have a small OpenStack configuration with 4 compute nodes and a Dell Unity 480F for the storage. I'm using cinder with iSCSI. Everything works when I create an instance. But some instances become unresponsive after a while. When I check on the hypervisor I can see [888240.310461] sd 14:0:0:2: [sdb] tag#120 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [888240.310493] sd 14:0:0:2: [sdb] tag#120 Sense Key : Illegal Request [current] [888240.310502] sd 14:0:0:2: [sdb] tag#120 Add. Sense: Logical unit not supported [888240.310510] sd 14:0:0:2: [sdb] tag#120 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 [888240.310519] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 [888240.311045] sd 14:0:0:2: [sdb] tag#121 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [888240.311050] sd 14:0:0:2: [sdb] tag#121 Sense Key : Illegal Request [current] [888240.311065] sd 14:0:0:2: [sdb] tag#121 Add. Sense: Logical unit not supported [888240.311070] sd 14:0:0:2: [sdb] tag#121 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 [888240.311074] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 [888240.342482] sd 14:0:0:2: [sdb] tag#70 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [888240.342490] sd 14:0:0:2: [sdb] tag#70 Sense Key : Illegal Request [current] [888240.342496] sd 14:0:0:2: [sdb] tag#70 Add. Sense: Logical unit not supported I checked on the hypervisor: no errors at all on the ethernet interface. I checked on the switch: no errors at all on the switch interface. Not sure, but it seems the problem appears more often when the instance has been doing nothing for some time. All firmware and software on the Unity is up to date. The 4 computes are exactly the same; they run the same versions of nova-compute, OS and hardware firmware. Any clue? Or a place to look for the problem? Regards -- Albert SHIH Observatoire de Paris xmpp: jas at obspm.fr Heure local/Local time: Tue Jun 1 08:27:42 PM CEST 2021 From wangtaihao at inspur.com Fri Jun 4 03:44:07 2021 From: wangtaihao at inspur.com (=?gb2312?B?VGFob2UgV2FuZyAozfXMq7rGKQ==?=) Date: Fri, 4 Jun 2021 03:44:07 +0000 Subject: [vitrage]The vitrage api "vitrage alarm list" get wrong response Message-ID: <9b8b00abf9dc450bab65cd14f34ab950@inspur.com> Hello. I have successfully installed vitrage, configured the nova.host and Prometheus datasources, and also configured the mapping file from ALARM to RESOURCE, and I can see the alarm being received in the vitrage API request log. However, when I use the "vitrage alarm list" command through the CLI, the returned list is empty. Why? Look forward to your reply. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3774 bytes Desc: not available URL: From yasufum.o at gmail.com Fri Jun 4 13:39:34 2021 From: yasufum.o at gmail.com (yasufum) Date: Fri, 4 Jun 2021 22:39:34 +0900 Subject: [Tacker] Tacker Not able to create VIM In-Reply-To: References: Message-ID: Hi, It might be a failure not of tacker but of authentication, because I've run VIM registration as you tried and no failure happened, although my environment is a bit different from yours. Could you run it from the CLI again, referring to [1], if you cannot register from horizon? [1] https://docs.openstack.org/tacker/latest/install/getting_started.html Thanks, Yasufumi On 2021/06/03 10:55, dangerzone ar wrote: > Hi all, > > I just deployed Tacker and tried to add my 1st VIM but I'm getting > errors as per the attached file. Please advise how to resolve this problem. Thanks > > 1. Error: Failed to register VIM: {"error": {"message": "(http://192.168.0.121:5000/v3/tokens > ): The resource could not be > found.", "code": 404, "title": "Not Found"}} > > 2. Error as below -> WARNING keystonemiddleware.auth_token [-] > Authorization failed for token: InvalidToken > > > > {"vim": {"vim_project": {"name": "admin", "project_domain_name": > "Default"}, "description": "d", "is_default": false, "auth_cred": > {"username": "admin", "user_domain_name": "Default", "password": > "c81e0c7a842f40c6"}, "auth_url": "http://192.168.0.121:5000/v3 > ", "type": "openstack", "name": "d"}} > process_request > /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43 > > 2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-] > Authorization failed for token: InvalidToken > > 2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - - > [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720 > > > > Below is my tacker.conf > > [DEFAULT] > auth_strategy = keystone > policy_file = /etc/tacker/policy.json > debug = True > use_syslog = False > bind_host = 192.168.0.121 > bind_port = 9890 > service_plugins = nfvo,vnfm > state_path = /var/lib/tacker > > > [nfvo] > vim_drivers = openstack > > [keystone_authtoken] > region_name = RegionOne > auth_type = password > project_domain_name = Default > user_domain_name = Default > username = tacker > password = password > auth_url = http://192.168.0.121:35357 > auth_uri = http://192.168.0.121:5000 > > [agent] > root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf > > > [database] > connection = > mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8 > > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Screenshot from 2021-06-04 22-17-42.png Type: image/png Size: 47649 bytes Desc: not available URL: -------------- next part -------------- [DEFAULT] auth_strategy = keystone debug = True logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s %(instance)s logging_debug_format_suffix = from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [%(request_id)s %(project_name)s %(user_name)s%(color)s] %(instance)s%(color)s%(message)s use_syslog = False state_path = /opt/stack/data/tacker transport_url = rabbit://stackrabbit:devstack at 192.168.33.11:5672/ # # From oslo.log # # If set to true, the logging level will be set to DEBUG instead of the default # INFO level. (boolean value) # Note: This option can be changed without restarting. #debug = false # The name of a logging configuration file. This file is appended to any # existing logging configuration files. For details about logging configuration # files, see the Python logging module documentation. Note that when logging # configuration files are used then all logging configuration is set in the # configuration file and other logging configuration options are ignored (for # example, log-date-format). (string value) # Note: This option can be changed without restarting. # Deprecated group/name - [DEFAULT]/log_config #log_config_append = # Defines the format string for %%(asctime)s in log records. Default: # %(default)s . This option is ignored if log_config_append is set. (string # value) #log_date_format = %Y-%m-%d %H:%M:%S # (Optional) Name of log file to send logging output to. If no default is set, # logging will go to stderr as defined by use_stderr. This option is ignored if # log_config_append is set. (string value) # Deprecated group/name - [DEFAULT]/logfile #log_file = # (Optional) The base directory used for relative log_file paths. This option # is ignored if log_config_append is set. (string value) # Deprecated group/name - [DEFAULT]/logdir #log_dir = # Uses logging handler designed to watch file system. When log file is moved or # removed this handler will open a new log file with specified path # instantaneously. It makes sense only if log_file option is specified and # Linux platform is used. This option is ignored if log_config_append is set. # (boolean value) #watch_log_file = false # Use syslog for logging. Existing syslog format is DEPRECATED and will be # changed later to honor RFC5424. This option is ignored if log_config_append # is set. (boolean value) #use_syslog = false # Enable journald for logging. If running in a systemd environment you may wish # to enable journal support. Doing so will use the journal native protocol # which includes structured metadata in addition to log messages.This option is # ignored if log_config_append is set. (boolean value) #use_journal = false # Syslog facility to receive log lines. This option is ignored if # log_config_append is set. (string value) #syslog_log_facility = LOG_USER # Use JSON formatting for logging. This option is ignored if log_config_append # is set. (boolean value) #use_json = false # Log output to standard error. This option is ignored if log_config_append is # set. (boolean value) #use_stderr = false # Log output to Windows Event Log. 
(boolean value) #use_eventlog = false # The amount of time before the log files are rotated. This option is ignored # unless log_rotation_type is set to "interval". (integer value) #log_rotate_interval = 1 # Rotation interval type. The time of the last file change (or the time when # the service was started) is used when scheduling the next rotation. (string # value) # Possible values: # Seconds - # Minutes - # Hours - # Days - # Weekday - # Midnight - #log_rotate_interval_type = days # Maximum number of rotated log files. (integer value) #max_logfile_count = 30 # Log file maximum size in MB. This option is ignored if "log_rotation_type" is # not set to "size". (integer value) #max_logfile_size_mb = 200 # Log rotation type. (string value) # Possible values: # interval - Rotate logs at predefined time intervals. # size - Rotate logs once they reach a predefined size. # none - Do not rotate log files. #log_rotation_type = none # Format string to use for log messages with context. Used by # oslo_log.formatters.ContextFormatter (string value) #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s # Format string to use for log messages when context is undefined. Used by # oslo_log.formatters.ContextFormatter (string value) #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s # Additional data to append to log message when logging level for the message # is DEBUG. Used by oslo_log.formatters.ContextFormatter (string value) #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d # Prefix each line of exception output with this format. Used by # oslo_log.formatters.ContextFormatter (string value) #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s # Defines the format string for %(user_identity)s that is used in # logging_context_format_string. Used by oslo_log.formatters.ContextFormatter # (string value) #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s # List of package logging levels in logger=LEVEL pairs. This option is ignored # if log_config_append is set. (list value) #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,oslo_policy=INFO,dogpile.core.dogpile=INFO # Enables or disables publication of error events. (boolean value) #publish_errors = false # The format for an instance that is passed with the log message. (string # value) #instance_format = "[instance: %(uuid)s] " # The format for an instance UUID that is passed with the log message. (string # value) #instance_uuid_format = "[instance: %(uuid)s] " # Interval, number of seconds, of log rate limiting. (integer value) #rate_limit_interval = 0 # Maximum number of logged messages per rate_limit_interval. (integer value) #rate_limit_burst = 0 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG # or empty string. Logs with level greater or equal to rate_limit_except_level # are not filtered. An empty string means that all levels are filtered. 
(string # value) #rate_limit_except_level = CRITICAL # Enables or disables fatal status of deprecations. (boolean value) #fatal_deprecations = false # # From oslo.messaging # # Size of RPC connection pool. (integer value) # Minimum value: 1 #rpc_conn_pool_size = 30 # The pool size limit for connections expiration policy (integer value) #conn_pool_min_size = 2 # The time-to-live in sec of idle connections in the pool (integer value) #conn_pool_ttl = 1200 # Size of executor thread pool when executor is threading or eventlet. (integer # value) # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size #executor_thread_pool_size = 64 # Seconds to wait for a response from a call. (integer value) #rpc_response_timeout = 60 # The network address and optional user credentials for connecting to the # messaging backend, in URL format. The expected format is: # # driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query # # Example: rabbit://rabbitmq:password at 127.0.0.1:5672// # # For full details on the fields in the URL see the documentation of # oslo_messaging.TransportURL at # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html # (string value) #transport_url = rabbit:// # The default exchange under which topics are scoped. May be overridden by an # exchange name specified in the transport_url option. (string value) #control_exchange = tacker # Add an endpoint to answer to ping calls. Endpoint is named # oslo_rpc_server_ping (boolean value) #rpc_ping_enabled = false # # From oslo.service.service # # Enable eventlet backdoor. Acceptable values are 0, , and # :, where 0 results in listening on a random tcp port number; # results in listening on the specified port number (and not enabling # backdoor if that port is in use); and : results in listening on # the smallest unused port number within the specified range of port numbers. # The chosen port is displayed in the service's log file. (string value) #backdoor_port = # Enable eventlet backdoor, using the provided path as a unix socket that can # receive connections. This option is mutually exclusive with 'backdoor_port' # in that only one should be provided. If both are provided then the existence # of this option overrides the usage of that option. Inside the path {pid} will # be replaced with the PID of the current process. (string value) #backdoor_socket = # Enables or disables logging values of all registered options when starting a # service (at DEBUG level). (boolean value) #log_options = true # Specify a timeout after which a gracefully shutdown server will exit. Zero # value means endless wait. 
(integer value) #graceful_shutdown_timeout = 60 # # From tacker.common.config # # The host IP to bind to (host address value) #bind_host = 0.0.0.0 # The port to bind to (integer value) #bind_port = 9890 # The API paste config file to use (string value) #api_paste_config = api-paste.ini # The path for API extensions (string value) #api_extensions_path = # The service plugins Tacker will use (list value) #service_plugins = nfvo,vnfm # The type of authentication to use (string value) #auth_strategy = keystone # Allow the usage of the bulk API (boolean value) #allow_bulk = true # Allow the usage of the pagination (boolean value) #allow_pagination = false # Allow the usage of the sorting (boolean value) #allow_sorting = false # The maximum number of items returned in a single response, value was # 'infinite' or negative integer means no limit (string value) #pagination_max_limit = -1 # The hostname Tacker is running on (host address value) #host = controller # Where to store Tacker state files. This directory must be writable by the # agent. (string value) #state_path = /var/lib/tacker # # From tacker.conf # # Seconds between running periodic tasks to cleanup residues of deleted vnf # packages (integer value) #vnf_package_delete_interval = 1800 # # From tacker.service # # Seconds between running components report states (integer value) #report_interval = 10 # Seconds between running periodic tasks (integer value) #periodic_interval = 40 # Number of separate worker processes for service (integer value) #api_workers = 0 # Range of seconds to randomly delay when starting the periodic task scheduler # to reduce stampeding. (Disable by setting to 0) (integer value) #periodic_fuzzy_delay = 5 # # From tacker.wsgi # # Number of backlog requests to configure the socket with (integer value) #backlog = 4096 # Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not # supported on OS X. (integer value) #tcp_keepidle = 600 # Number of seconds to keep retrying to listen (integer value) #retry_until_window = 30 # Max header line to accommodate large tokens (integer value) #max_header_line = 16384 # Enable SSL on the API server (boolean value) #use_ssl = false # CA certificate file to use to verify connecting clients (string value) #ssl_ca_file = # Certificate file to use when starting the server securely (string value) #ssl_cert_file = # Private key file to use when starting the server securely (string value) #ssl_key_file = [alarm_auth] url = http://192.168.33.11:5000/v3 project_name = admin password = devstack username = admin # # From tacker.alarm_receiver # # User name for alarm monitoring (string value) #username = admin # Password for alarm monitoring (string value) #password = devstack # Project name for alarm monitoring (string value) #project_name = admin # User domain name for alarm monitoring (string value) #user_domain_name = default # Project domain name for alarm monitoring (string value) #project_domain_name = default [ceilometer] # # From tacker.vnfm.monitor_drivers.ceilometer.ceilometer # # Address which drivers use to trigger (host address value) #host = controller # port number which drivers use to trigger (port value) # Minimum value: 0 # Maximum value: 65535 #port = 9890 [coordination] # # From tacker.conf # # The backend URL to use for distributed coordination. (string value) #backend_url = file://$state_path [cors] # # From oslo.middleware # # Indicate whether this resource may be shared with the domain received in the # requests "origin" header. 
Format: "://[:]", no trailing # slash. Example: https://horizon.example.com (list value) #allowed_origin = # Indicate that the actual request can include user credentials (boolean value) #allow_credentials = true # Indicate which headers are safe to expose to the API. Defaults to HTTP Simple # Headers. (list value) #expose_headers = # Maximum cache age of CORS preflight requests. (integer value) #max_age = 3600 # Indicate which methods can be used during the actual request. (list value) #allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH # Indicate which header field names may be used during the actual request. # (list value) #allow_headers = [database] connection = mysql+pymysql://root:devstack at 127.0.0.1/tacker?charset=utf8 # # From oslo.db # # If True, SQLite uses synchronous mode. (boolean value) #sqlite_synchronous = true # The back end to use for the database. (string value) # Deprecated group/name - [DEFAULT]/db_backend #backend = sqlalchemy # The SQLAlchemy connection string to use to connect to the database. (string # value) # Deprecated group/name - [DEFAULT]/sql_connection # Deprecated group/name - [DATABASE]/sql_connection # Deprecated group/name - [sql]/connection #connection = # The SQLAlchemy connection string to use to connect to the slave database. # (string value) #slave_connection = # The SQL mode to be used for MySQL sessions. This option, including the # default, overrides any server-set SQL mode. To use whatever SQL mode is set # by the server configuration, set this to no value. Example: mysql_sql_mode= # (string value) #mysql_sql_mode = TRADITIONAL # If True, transparently enables support for handling MySQL Cluster (NDB). # (boolean value) #mysql_enable_ndb = false # Connections which have been present in the connection pool longer than this # number of seconds will be replaced with a new one the next time they are # checked out from the pool. (integer value) # Deprecated group/name - [DATABASE]/idle_timeout # Deprecated group/name - [database]/idle_timeout # Deprecated group/name - [DEFAULT]/sql_idle_timeout # Deprecated group/name - [DATABASE]/sql_idle_timeout # Deprecated group/name - [sql]/idle_timeout #connection_recycle_time = 3600 # Maximum number of SQL connections to keep open in a pool. Setting a value of # 0 indicates no limit. (integer value) #max_pool_size = 5 # Maximum number of database connection retries during startup. Set to -1 to # specify an infinite retry count. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_retries # Deprecated group/name - [DATABASE]/sql_max_retries #max_retries = 10 # Interval between retries of opening a SQL connection. (integer value) # Deprecated group/name - [DEFAULT]/sql_retry_interval # Deprecated group/name - [DATABASE]/reconnect_interval #retry_interval = 10 # If set, use this value for max_overflow with SQLAlchemy. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_overflow # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow #max_overflow = 50 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer # value) # Minimum value: 0 # Maximum value: 100 # Deprecated group/name - [DEFAULT]/sql_connection_debug #connection_debug = 0 # Add Python stack traces to SQL as comment strings. (boolean value) # Deprecated group/name - [DEFAULT]/sql_connection_trace #connection_trace = false # If set, use this value for pool_timeout with SQLAlchemy. 
(integer value) # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout #pool_timeout = # Enable the experimental use of database reconnect on connection lost. # (boolean value) #use_db_reconnect = false # Seconds between retries of a database transaction. (integer value) #db_retry_interval = 1 # If True, increases the interval between retries of a database operation up to # db_max_retry_interval. (boolean value) #db_inc_retry_interval = true # If db_inc_retry_interval is set, the maximum seconds between retries of a # database operation. (integer value) #db_max_retry_interval = 10 # Maximum retries in case of connection error or deadlock error before error is # raised. Set to -1 to specify an infinite retry count. (integer value) #db_max_retries = 20 # Optional URL parameters to append onto the connection URL at connect time; # specify as param1=value1¶m2=value2&... (string value) #connection_parameters = [glance_store] default_backend = fast filesystem_store_datadir = /opt/stack/data/tacker/csar_files # # From glance.store # # DEPRECATED: # List of enabled Glance stores. # # Register the storage backends to use for storing disk images # as a comma separated list. The default stores enabled for # storing disk images with Glance are ``file`` and ``http``. # # Possible values: # * A comma separated list that could include: # * file # * http # * swift # * rbd # * cinder # * vmware # * s3 # # Related Options: # * default_store # # (list value) # This option is deprecated for removal since Rocky. # Its value may be silently ignored in the future. # Reason: # This option is deprecated against new config option # ``enabled_backends`` which helps to configure multiple backend stores # of different schemes. # # This option is scheduled for removal in the U development # cycle. #stores = file,http # DEPRECATED: # The default scheme to use for storing images. # # Provide a string value representing the default scheme to use for # storing images. If not set, Glance uses ``file`` as the default # scheme to store images with the ``file`` store. # # NOTE: The value given for this configuration option must be a valid # scheme for a store registered with the ``stores`` configuration # option. # # Possible values: # * file # * filesystem # * http # * https # * swift # * swift+http # * swift+https # * swift+config # * rbd # * cinder # * vsphere # * s3 # # Related Options: # * stores # # (string value) # Possible values: # file - # filesystem - # http - # https - # swift - # swift+http - # swift+https - # swift+config - # rbd - # cinder - # vsphere - # s3 - # This option is deprecated for removal since Rocky. # Its value may be silently ignored in the future. # Reason: # This option is deprecated against new config option # ``default_backend`` which acts similar to ``default_store`` config # option. # # This option is scheduled for removal in the U development # cycle. #default_store = file # # Information to match when looking for cinder in the service catalog. # # When the ``cinder_endpoint_template`` is not set and any of # ``cinder_store_auth_address``, ``cinder_store_user_name``, # ``cinder_store_project_name``, ``cinder_store_password`` is not set, # cinder store uses this information to lookup cinder endpoint from the service # catalog in the current context. ``cinder_os_region_name``, if set, is taken # into consideration to fetch the appropriate endpoint. # # The service catalog can be listed by the ``openstack catalog list`` command. 
# # Possible values: # * A string of of the following form: # ``::`` # At least ``service_type`` and ``interface`` should be specified. # ``service_name`` can be omitted. # # Related options: # * cinder_os_region_name # * cinder_endpoint_template # * cinder_store_auth_address # * cinder_store_user_name # * cinder_store_project_name # * cinder_store_password # # (string value) #cinder_catalog_info = volumev3::publicURL # # Override service catalog lookup with template for cinder endpoint. # # When this option is set, this value is used to generate cinder endpoint, # instead of looking up from the service catalog. # This value is ignored if ``cinder_store_auth_address``, # ``cinder_store_user_name``, ``cinder_store_project_name``, and # ``cinder_store_password`` are specified. # # If this configuration option is set, ``cinder_catalog_info`` will be ignored. # # Possible values: # * URL template string for cinder endpoint, where ``%%(tenant)s`` is # replaced with the current tenant (project) name. # For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s`` # # Related options: # * cinder_store_auth_address # * cinder_store_user_name # * cinder_store_project_name # * cinder_store_password # * cinder_catalog_info # # (string value) #cinder_endpoint_template = # # Region name to lookup cinder service from the service catalog. # # This is used only when ``cinder_catalog_info`` is used for determining the # endpoint. If set, the lookup for cinder endpoint by this node is filtered to # the specified region. It is useful when multiple regions are listed in the # catalog. If this is not set, the endpoint is looked up from every region. # # Possible values: # * A string that is a valid region name. # # Related options: # * cinder_catalog_info # # (string value) # Deprecated group/name - [glance_store]/os_region_name #cinder_os_region_name = # # Location of a CA certificates file used for cinder client requests. # # The specified CA certificates file, if set, is used to verify cinder # connections via HTTPS endpoint. If the endpoint is HTTP, this value is # ignored. # ``cinder_api_insecure`` must be set to ``True`` to enable the verification. # # Possible values: # * Path to a ca certificates file # # Related options: # * cinder_api_insecure # # (string value) #cinder_ca_certificates_file = # # Number of cinderclient retries on failed http calls. # # When a call failed by any errors, cinderclient will retry the call up to the # specified times after sleeping a few seconds. # # Possible values: # * A positive integer # # Related options: # * None # # (integer value) # Minimum value: 0 #cinder_http_retries = 3 # # Time period, in seconds, to wait for a cinder volume transition to # complete. # # When the cinder volume is created, deleted, or attached to the glance node to # read/write the volume data, the volume's state is changed. For example, the # newly created volume status changes from ``creating`` to ``available`` after # the creation process is completed. This specifies the maximum time to wait # for # the status change. If a timeout occurs while waiting, or the status is # changed # to an unexpected value (e.g. `error``), the image creation fails. # # Possible values: # * A positive integer # # Related options: # * None # # (integer value) # Minimum value: 0 #cinder_state_transition_timeout = 300 # # Allow to perform insecure SSL requests to cinder. 
# # If this option is set to True, HTTPS endpoint connection is verified using # the # CA certificates file specified by ``cinder_ca_certificates_file`` option. # # Possible values: # * True # * False # # Related options: # * cinder_ca_certificates_file # # (boolean value) #cinder_api_insecure = false # # The address where the cinder authentication service is listening. # # When all of ``cinder_store_auth_address``, ``cinder_store_user_name``, # ``cinder_store_project_name``, and ``cinder_store_password`` options are # specified, the specified values are always used for the authentication. # This is useful to hide the image volumes from users by storing them in a # project/tenant specific to the image service. It also enables users to share # the image volume among other projects under the control of glance's ACL. # # If either of these options are not set, the cinder endpoint is looked up # from the service catalog, and current context's user and project are used. # # Possible values: # * A valid authentication service address, for example: # ``http://openstack.example.org/identity/v2.0`` # # Related options: # * cinder_store_user_name # * cinder_store_password # * cinder_store_project_name # # (string value) #cinder_store_auth_address = # # User name to authenticate against cinder. # # This must be used with all the following related options. If any of these are # not specified, the user of the current context is used. # # Possible values: # * A valid user name # # Related options: # * cinder_store_auth_address # * cinder_store_password # * cinder_store_project_name # # (string value) #cinder_store_user_name = # # Password for the user authenticating against cinder. # # This must be used with all the following related options. If any of these are # not specified, the user of the current context is used. # # Possible values: # * A valid password for the user specified by ``cinder_store_user_name`` # # Related options: # * cinder_store_auth_address # * cinder_store_user_name # * cinder_store_project_name # # (string value) #cinder_store_password = # # Project name where the image volume is stored in cinder. # # If this configuration option is not set, the project in current context is # used. # # This must be used with all the following related options. If any of these are # not specified, the project of the current context is used. # # Possible values: # * A valid project name # # Related options: # * ``cinder_store_auth_address`` # * ``cinder_store_user_name`` # * ``cinder_store_password`` # # (string value) #cinder_store_project_name = # # Path to the rootwrap configuration file to use for running commands as root. # # The cinder store requires root privileges to operate the image volumes (for # connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). # The configuration file should allow the required commands by cinder store and # os-brick library. # # Possible values: # * Path to the rootwrap config file # # Related options: # * None # # (string value) #rootwrap_config = /etc/glance/rootwrap.conf # # Volume type that will be used for volume creation in cinder. # # Some cinder backends can have several volume types to optimize storage usage. # Adding this option allows an operator to choose a specific volume type # in cinder that can be optimized for images. # # If this is not set, then the default volume type specified in the cinder # configuration will be used for volume creation. 
# # Possible values: # * A valid volume type from cinder # # Related options: # * None # # NOTE: You cannot use an encrypted volume_type associated with an NFS backend. # An encrypted volume stored on an NFS backend will raise an exception whenever # glance_store tries to write or access image data stored in that volume. # Consult your Cinder administrator to determine an appropriate volume_type. # # (string value) #cinder_volume_type = # # If this is set to True, attachment of volumes for image transfer will # be aborted when multipathd is not running. Otherwise, it will fallback # to single path. # # Possible values: # * True or False # # Related options: # * cinder_use_multipath # # (boolean value) #cinder_enforce_multipath = false # # Flag to identify mutipath is supported or not in the deployment. # # Set it to False if multipath is not supported. # # Possible values: # * True or False # # Related options: # * cinder_enforce_multipath # # (boolean value) #cinder_use_multipath = false # # Directory where the NFS volume is mounted on the glance node. # # Possible values: # # * A string representing absolute path of mount point. # (string value) #cinder_mount_point_base = /var/lib/glance/mnt # # Directory to which the filesystem backend store writes images. # # Upon start up, Glance creates the directory if it doesn't already # exist and verifies write access to the user under which # ``glance-api`` runs. If the write access isn't available, a # ``BadStoreConfiguration`` exception is raised and the filesystem # store may not be available for adding new images. # # NOTE: This directory is used only when filesystem store is used as a # storage backend. Either ``filesystem_store_datadir`` or # ``filesystem_store_datadirs`` option must be specified in # ``glance-api.conf``. If both options are specified, a # ``BadStoreConfiguration`` will be raised and the filesystem store # may not be available for adding new images. # # Possible values: # * A valid path to a directory # # Related options: # * ``filesystem_store_datadirs`` # * ``filesystem_store_file_perm`` # # (string value) #filesystem_store_datadir = /var/lib/glance/images # # List of directories and their priorities to which the filesystem # backend store writes images. # # The filesystem store can be configured to store images in multiple # directories as opposed to using a single directory specified by the # ``filesystem_store_datadir`` configuration option. When using # multiple directories, each directory can be given an optional # priority to specify the preference order in which they should # be used. Priority is an integer that is concatenated to the # directory path with a colon where a higher value indicates higher # priority. When two directories have the same priority, the directory # with most free space is used. When no priority is specified, it # defaults to zero. # # More information on configuring filesystem store with multiple store # directories can be found at # https://docs.openstack.org/glance/latest/configuration/configuring.html # # NOTE: This directory is used only when filesystem store is used as a # storage backend. Either ``filesystem_store_datadir`` or # ``filesystem_store_datadirs`` option must be specified in # ``glance-api.conf``. If both options are specified, a # ``BadStoreConfiguration`` will be raised and the filesystem store # may not be available for adding new images. 
# # Possible values: # * List of strings of the following form: # * ``:`` # # Related options: # * ``filesystem_store_datadir`` # * ``filesystem_store_file_perm`` # # (multi valued) #filesystem_store_datadirs = # # Filesystem store metadata file. # # The path to a file which contains the metadata to be returned with any # location # associated with the filesystem store. Once this option is set, it is used for # new images created afterward only - previously existing images are not # affected. # # The file must contain a valid JSON object. The object should contain the keys # ``id`` and ``mountpoint``. The value for both keys should be a string. # # Possible values: # * A valid path to the store metadata file # # Related options: # * None # # (string value) #filesystem_store_metadata_file = # # File access permissions for the image files. # # Set the intended file access permissions for image data. This provides # a way to enable other services, e.g. Nova, to consume images directly # from the filesystem store. The users running the services that are # intended to be given access to could be made a member of the group # that owns the files created. Assigning a value less then or equal to # zero for this configuration option signifies that no changes be made # to the default permissions. This value will be decoded as an octal # digit. # # For more information, please refer the documentation at # https://docs.openstack.org/glance/latest/configuration/configuring.html # # Possible values: # * A valid file access permission # * Zero # * Any negative integer # # Related options: # * None # # (integer value) #filesystem_store_file_perm = 0 # # Chunk size, in bytes. # # The chunk size used when reading or writing image files. Raising this value # may improve the throughput but it may also slightly increase the memory usage # when handling a large number of requests. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #filesystem_store_chunk_size = 65536 # # Enable or not thin provisioning in this backend. # # This configuration option enable the feature of not really write null byte # sequences on the filesystem, the holes who can appear will automatically # be interpreted by the filesystem as null bytes, and do not really consume # your storage. # Enabling this feature will also speed up image upload and save network trafic # in addition to save space in the backend, as null bytes sequences are not # sent over the network. # # Possible Values: # * True # * False # # Related options: # * None # # (boolean value) #filesystem_thin_provisioning = false # # Path to the CA bundle file. # # This configuration option enables the operator to use a custom # Certificate Authority file to verify the remote server certificate. If # this option is set, the ``https_insecure`` option will be ignored and # the CA file specified will be used to authenticate the server # certificate and establish a secure connection to the server. # # Possible values: # * A valid path to a CA file # # Related options: # * https_insecure # # (string value) #https_ca_certificates_file = # # Set verification of the remote server certificate. # # This configuration option takes in a boolean value to determine # whether or not to verify the remote server certificate. If set to # True, the remote server certificate is not verified. If the option is # set to False, then the default CA truststore is used for verification. 
# # This option is ignored if ``https_ca_certificates_file`` is set. # The remote server certificate will then be verified using the file # specified using the ``https_ca_certificates_file`` option. # # Possible values: # * True # * False # # Related options: # * https_ca_certificates_file # # (boolean value) #https_insecure = true # # The http/https proxy information to be used to connect to the remote # server. # # This configuration option specifies the http/https proxy information # that should be used to connect to the remote server. The proxy # information should be a key value pair of the scheme and proxy, for # example, http:10.0.0.1:3128. You can also specify proxies for multiple # schemes by separating the key value pairs with a comma, for example, # http:10.0.0.1:3128, https:10.0.0.1:1080. # # Possible values: # * A comma separated list of scheme:proxy pairs as described above # # Related options: # * None # # (dict value) #http_proxy_information = # # Size, in megabytes, to chunk RADOS images into. # # Provide an integer value representing the size in megabytes to chunk # Glance images into. The default chunk size is 8 megabytes. For optimal # performance, the value should be a power of two. # # When Ceph's RBD object storage system is used as the storage backend # for storing Glance images, the images are chunked into objects of the # size set using this option. These chunked objects are then stored # across the distributed block data store to use for Glance. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #rbd_store_chunk_size = 8 # # RADOS pool in which images are stored. # # When RBD is used as the storage backend for storing Glance images, the # images are stored by means of logical grouping of the objects (chunks # of images) into a ``pool``. Each pool is defined with the number of # placement groups it can contain. The default pool that is used is # 'images'. # # More information on the RBD storage backend can be found here: # http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ # # Possible Values: # * A valid pool name # # Related options: # * None # # (string value) #rbd_store_pool = images # # RADOS user to authenticate as. # # This configuration option takes in the RADOS user to authenticate as. # This is only needed when RADOS authentication is enabled and is # applicable only if the user is using Cephx authentication. If the # value for this option is not set by the user or is set to None, a # default value will be chosen, which will be based on the client. # section in rbd_store_ceph_conf. # # Possible Values: # * A valid RADOS user # # Related options: # * rbd_store_ceph_conf # # (string value) #rbd_store_user = # # Ceph configuration file path. # # This configuration option specifies the path to the Ceph configuration # file to be used. If the value for this option is not set by the user # or is set to the empty string, librados will read the standard ceph.conf # file by searching the default Ceph configuration file locations in # sequential order. See the Ceph documentation for details. # # NOTE: If using Cephx authentication, this file should include a reference # to the right keyring in a client. section # # NOTE 2: If you leave this option empty (the default), the actual Ceph # configuration file used may change depending on what version of librados # is being used. 
If it is important for you to know exactly which # configuration # file is in effect, you may specify that file here using this option. # # Possible Values: # * A valid path to a configuration file # # Related options: # * rbd_store_user # # (string value) #rbd_store_ceph_conf = # # Timeout value for connecting to Ceph cluster. # # This configuration option takes in the timeout value in seconds used # when connecting to the Ceph cluster i.e. it sets the time to wait for # glance-api before closing the connection. This prevents glance-api # hangups during the connection to RBD. If the value for this option # is set to less than or equal to 0, no timeout is set and the default # librados value is used. # # Possible Values: # * Any integer value # # Related options: # * None # # (integer value) #rados_connect_timeout = 0 # # Enable or not thin provisioning in this backend. # # This configuration option enable the feature of not really write null byte # sequences on the RBD backend, the holes who can appear will automatically # be interpreted by Ceph as null bytes, and do not really consume your storage. # Enabling this feature will also speed up image upload and save network trafic # in addition to save space in the backend, as null bytes sequences are not # sent over the network. # # Possible Values: # * True # * False # # Related options: # * None # # (boolean value) #rbd_thin_provisioning = false # # The host where the S3 server is listening. # # This configuration option sets the host of the S3 or S3 compatible storage # Server. This option is required when using the S3 storage backend. # The host can contain a DNS name (e.g. s3.amazonaws.com, my-object- # storage.com) # or an IP address (127.0.0.1). # # Possible values: # * A valid DNS name # * A valid IPv4 address # # Related Options: # * s3_store_access_key # * s3_store_secret_key # # (string value) #s3_store_host = # # The S3 query token access key. # # This configuration option takes the access key for authenticating with the # Amazon S3 or S3 compatible storage server. This option is required when using # the S3 storage backend. # # Possible values: # * Any string value that is the access key for a user with appropriate # privileges # # Related Options: # * s3_store_host # * s3_store_secret_key # # (string value) #s3_store_access_key = # # The S3 query token secret key. # # This configuration option takes the secret key for authenticating with the # Amazon S3 or S3 compatible storage server. This option is required when using # the S3 storage backend. # # Possible values: # * Any string value that is a secret key corresponding to the access key # specified using the ``s3_store_host`` option # # Related Options: # * s3_store_host # * s3_store_access_key # # (string value) #s3_store_secret_key = # # The S3 bucket to be used to store the Glance data. # # This configuration option specifies where the glance images will be stored # in the S3. If ``s3_store_create_bucket_on_put`` is set to true, it will be # created automatically even if the bucket does not exist. # # Possible values: # * Any string value # # Related Options: # * s3_store_create_bucket_on_put # * s3_store_bucket_url_format # # (string value) #s3_store_bucket = # # Determine whether S3 should create a new bucket. # # This configuration option takes boolean value to indicate whether Glance # should # create a new bucket to S3 if it does not exist. 
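#
# Example of a minimal S3 backend configuration (illustrative only; the
# endpoint, credentials and bucket name below are placeholders):
#
# s3_store_host = s3.example.com
# s3_store_access_key = MY_ACCESS_KEY
# s3_store_secret_key = MY_SECRET_KEY
# s3_store_bucket = glance-images
# s3_store_create_bucket_on_put = true
#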
# # Possible values: # * Any Boolean value # # Related Options: # * None # # (boolean value) #s3_store_create_bucket_on_put = false # # The S3 calling format used to determine the object. # # This configuration option takes access model that is used to specify the # address of an object in an S3 bucket. # # NOTE: # In ``path``-style, the endpoint for the object looks like # 'https://s3.amazonaws.com/bucket/example.img'. # And in ``virtual``-style, the endpoint for the object looks like # 'https://bucket.s3.amazonaws.com/example.img'. # If you do not follow the DNS naming convention in the bucket name, you can # get objects in the path style, but not in the virtual style. # # Possible values: # * Any string value of ``auto``, ``virtual``, or ``path`` # # Related Options: # * s3_store_bucket # # (string value) #s3_store_bucket_url_format = auto # # What size, in MB, should S3 start chunking image files and do a multipart # upload in S3. # # This configuration option takes a threshold in MB to determine whether to # upload the image to S3 as is or to split it (Multipart Upload). # # Note: You can only split up to 10,000 images. # # Possible values: # * Any positive integer value # # Related Options: # * s3_store_large_object_chunk_size # * s3_store_thread_pools # # (integer value) #s3_store_large_object_size = 100 # # What multipart upload part size, in MB, should S3 use when uploading parts. # # This configuration option takes the image split size in MB for Multipart # Upload. # # Note: You can only split up to 10,000 images. # # Possible values: # * Any positive integer value (must be greater than or equal to 5M) # # Related Options: # * s3_store_large_object_size # * s3_store_thread_pools # # (integer value) #s3_store_large_object_chunk_size = 10 # # The number of thread pools to perform a multipart upload in S3. # # This configuration option takes the number of thread pools when performing a # Multipart Upload. # # Possible values: # * Any positive integer value # # Related Options: # * s3_store_large_object_size # * s3_store_large_object_chunk_size # # (integer value) #s3_store_thread_pools = 10 # # Set verification of the server certificate. # # This boolean determines whether or not to verify the server # certificate. If this option is set to True, swiftclient won't check # for a valid SSL certificate when authenticating. If the option is set # to False, then the default CA truststore is used for verification. # # Possible values: # * True # * False # # Related options: # * swift_store_cacert # # (boolean value) #swift_store_auth_insecure = false # # Path to the CA bundle file. # # This configuration option enables the operator to specify the path to # a custom Certificate Authority file for SSL verification when # connecting to Swift. # # Possible values: # * A valid path to a CA file # # Related options: # * swift_store_auth_insecure # # (string value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #swift_store_cacert = /etc/ssl/certs/ca-certificates.crt # # The region of Swift endpoint to use by Glance. # # Provide a string value representing a Swift region where Glance # can connect to for image storage. By default, there is no region # set. 
# # When Glance uses Swift as the storage backend to store images # for a specific tenant that has multiple endpoints, setting of a # Swift region with ``swift_store_region`` allows Glance to connect # to Swift in the specified region as opposed to a single region # connectivity. # # This option can be configured for both single-tenant and # multi-tenant storage. # # NOTE: Setting the region with ``swift_store_region`` is # tenant-specific and is necessary ``only if`` the tenant has # multiple endpoints across different regions. # # Possible values: # * A string value representing a valid Swift region. # # Related Options: # * None # # (string value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #swift_store_region = RegionTwo # # The URL endpoint to use for Swift backend storage. # # Provide a string value representing the URL endpoint to use for # storing Glance images in Swift store. By default, an endpoint # is not set and the storage URL returned by ``auth`` is used. # Setting an endpoint with ``swift_store_endpoint`` overrides the # storage URL and is used for Glance image storage. # # NOTE: The URL should include the path up to, but excluding the # container. The location of an object is obtained by appending # the container and object to the configured URL. # # Possible values: # * String value representing a valid URL path up to a Swift container # # Related Options: # * None # # (string value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name # # Endpoint Type of Swift service. # # This string value indicates the endpoint type to use to fetch the # Swift endpoint. The endpoint type determines the actions the user will # be allowed to perform, for instance, reading and writing to the Store. # This setting is only used if swift_store_auth_version is greater than # 1. # # Possible values: # * publicURL # * adminURL # * internalURL # # Related options: # * swift_store_endpoint # # (string value) # Possible values: # publicURL - # adminURL - # internalURL - #swift_store_endpoint_type = publicURL # # Type of Swift service to use. # # Provide a string value representing the service type to use for # storing images while using Swift backend storage. The default # service type is set to ``object-store``. # # NOTE: If ``swift_store_auth_version`` is set to 2, the value for # this configuration option needs to be ``object-store``. If using # a higher version of Keystone or a different auth scheme, this # option may be modified. # # Possible values: # * A string representing a valid service type for Swift storage. # # Related Options: # * None # # (string value) #swift_store_service_type = object-store # # Name of single container to store images/name prefix for multiple containers # # When a single container is being used to store images, this configuration # option indicates the container within the Glance account to be used for # storing all images. When multiple containers are used to store images, this # will be the name prefix for all containers. Usage of single/multiple # containers can be controlled using the configuration option # ``swift_store_multiple_containers_seed``. 
# # When using multiple containers, the containers will be named after the value # set for this configuration option with the first N chars of the image UUID # as the suffix delimited by an underscore (where N is specified by # ``swift_store_multiple_containers_seed``). # # Example: if the seed is set to 3 and swift_store_container = ``glance``, then # an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed # in # the container ``glance_fda``. All dashes in the UUID are included when # creating the container name but do not count toward the character limit, so # when N=10 the container name would be ``glance_fdae39a1-ba.`` # # Possible values: # * If using single container, this configuration option can be any string # that is a valid swift container name in Glance's Swift account # * If using multiple containers, this configuration option can be any # string as long as it satisfies the container naming rules enforced by # Swift. The value of ``swift_store_multiple_containers_seed`` should be # taken into account as well. # # Related options: # * ``swift_store_multiple_containers_seed`` # * ``swift_store_multi_tenant`` # * ``swift_store_create_container_on_put`` # # (string value) #swift_store_container = glance # # The size threshold, in MB, after which Glance will start segmenting image # data. # # Swift has an upper limit on the size of a single uploaded object. By default, # this is 5GB. To upload objects bigger than this limit, objects are segmented # into multiple smaller objects that are tied together with a manifest file. # For more detail, refer to # https://docs.openstack.org/swift/latest/overview_large_objects.html # # This configuration option specifies the size threshold over which the Swift # driver will start segmenting image data into multiple smaller files. # Currently, the Swift driver only supports creating Dynamic Large Objects. # # NOTE: This should be set by taking into account the large object limit # enforced by the Swift cluster in consideration. # # Possible values: # * A positive integer that is less than or equal to the large object limit # enforced by the Swift cluster in consideration. # # Related options: # * ``swift_store_large_object_chunk_size`` # # (integer value) # Minimum value: 1 #swift_store_large_object_size = 5120 # # The maximum size, in MB, of the segments when image data is segmented. # # When image data is segmented to upload images that are larger than the limit # enforced by the Swift cluster, image data is broken into segments that are no # bigger than the size specified by this configuration option. # Refer to ``swift_store_large_object_size`` for more detail. # # For example: if ``swift_store_large_object_size`` is 5GB and # ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will # be # segmented into 7 segments where the first six segments will be 1GB in size # and # the seventh segment will be 0.2GB. # # Possible values: # * A positive integer that is less than or equal to the large object limit # enforced by Swift cluster in consideration. # # Related options: # * ``swift_store_large_object_size`` # # (integer value) # Minimum value: 1 #swift_store_large_object_chunk_size = 200 # # Create container, if it doesn't already exist, when uploading image. # # At the time of uploading an image, if the corresponding container doesn't # exist, it will be created provided this configuration option is set to True. # By default, it won't be created. 
This behavior is applicable for both single # and multiple containers mode. # # Possible values: # * True # * False # # Related options: # * None # # (boolean value) #swift_store_create_container_on_put = false # # Store images in tenant's Swift account. # # This enables multi-tenant storage mode which causes Glance images to be # stored # in tenant specific Swift accounts. If this is disabled, Glance stores all # images in its own account. More details multi-tenant store can be found at # https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage # # NOTE: If using multi-tenant swift store, please make sure # that you do not set a swift configuration file with the # 'swift_store_config_file' option. # # Possible values: # * True # * False # # Related options: # * swift_store_config_file # # (boolean value) #swift_store_multi_tenant = false # # Seed indicating the number of containers to use for storing images. # # When using a single-tenant store, images can be stored in one or more than # one # containers. When set to 0, all images will be stored in one single container. # When set to an integer value between 1 and 32, multiple containers will be # used to store images. This configuration option will determine how many # containers are created. The total number of containers that will be used is # equal to 16^N, so if this config option is set to 2, then 16^2=256 containers # will be used to store images. # # Please refer to ``swift_store_container`` for more detail on the naming # convention. More detail about using multiple containers can be found at # https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store- # multiple-containers.html # # NOTE: This is used only when swift_store_multi_tenant is disabled. # # Possible values: # * A non-negative integer less than or equal to 32 # # Related options: # * ``swift_store_container`` # * ``swift_store_multi_tenant`` # * ``swift_store_create_container_on_put`` # # (integer value) # Minimum value: 0 # Maximum value: 32 #swift_store_multiple_containers_seed = 0 # # List of tenants that will be granted admin access. # # This is a list of tenants that will be granted read/write access on # all Swift containers created by Glance in multi-tenant mode. The # default value is an empty list. # # Possible values: # * A comma separated list of strings representing UUIDs of Keystone # projects/tenants # # Related options: # * None # # (list value) #swift_store_admin_tenants = # # SSL layer compression for HTTPS Swift requests. # # Provide a boolean value to determine whether or not to compress # HTTPS Swift requests for images at the SSL layer. By default, # compression is enabled. # # When using Swift as the backend store for Glance image storage, # SSL layer compression of HTTPS Swift requests can be set using # this option. If set to False, SSL layer compression of HTTPS # Swift requests is disabled. Disabling this option may improve # performance for images which are already in a compressed format, # for example, qcow2. # # Possible values: # * True # * False # # Related Options: # * None # # (boolean value) #swift_store_ssl_compression = true # # The number of times a Swift download will be retried before the # request fails. # # Provide an integer value representing the number of times an image # download must be retried before erroring out. The default value is # zero (no retry on a failed image download). 
When set to a positive # integer value, ``swift_store_retry_get_count`` ensures that the # download is attempted this many more times upon a download failure # before sending an error message. # # Possible values: # * Zero # * Positive integer value # # Related Options: # * None # # (integer value) # Minimum value: 0 #swift_store_retry_get_count = 0 # # Time in seconds defining the size of the window in which a new # token may be requested before the current token is due to expire. # # Typically, the Swift storage driver fetches a new token upon the # expiration of the current token to ensure continued access to # Swift. However, some Swift transactions (like uploading image # segments) may not recover well if the token expires on the fly. # # Hence, by fetching a new token before the current token expiration, # we make sure that the token does not expire or is close to expiry # before a transaction is attempted. By default, the Swift storage # driver requests for a new token 60 seconds or less before the # current token expiration. # # Possible values: # * Zero # * Positive integer value # # Related Options: # * None # # (integer value) # Minimum value: 0 #swift_store_expire_soon_interval = 60 # # Use trusts for multi-tenant Swift store. # # This option instructs the Swift store to create a trust for each # add/get request when the multi-tenant store is in use. Using trusts # allows the Swift store to avoid problems that can be caused by an # authentication token expiring during the upload or download of data. # # By default, ``swift_store_use_trusts`` is set to ``True``(use of # trusts is enabled). If set to ``False``, a user token is used for # the Swift connection instead, eliminating the overhead of trust # creation. # # NOTE: This option is considered only when # ``swift_store_multi_tenant`` is set to ``True`` # # Possible values: # * True # * False # # Related options: # * swift_store_multi_tenant # # (boolean value) #swift_store_use_trusts = true # # Buffer image segments before upload to Swift. # # Provide a boolean value to indicate whether or not Glance should # buffer image data to disk while uploading to swift. This enables # Glance to resume uploads on error. # # NOTES: # When enabling this option, one should take great care as this # increases disk usage on the API node. Be aware that depending # upon how the file system is configured, the disk space used # for buffering may decrease the actual disk space available for # the glance image cache. Disk utilization will cap according to # the following equation: # (``swift_store_large_object_chunk_size`` * ``workers`` * 1000) # # Possible values: # * True # * False # # Related options: # * swift_upload_buffer_dir # # (boolean value) #swift_buffer_on_upload = false # # Reference to default Swift account/backing store parameters. # # Provide a string value representing a reference to the default set # of parameters required for using swift account/backing store for # image storage. The default reference value for this configuration # option is 'ref1'. This configuration option dereferences the # parameters and facilitates image storage in Swift storage backend # every time a new image is added. # # Possible values: # * A valid string value # # Related options: # * None # # (string value) #default_swift_reference = ref1 # DEPRECATED: Version of the authentication service to use. Valid versions are # 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string # value) # This option is deprecated for removal. 
# Its value may be silently ignored in the future. # Reason: # The option 'auth_version' in the Swift back-end configuration file is # used instead. #swift_store_auth_version = 2 # DEPRECATED: The address where the Swift authentication service is listening. # (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'auth_address' in the Swift back-end configuration file is # used instead. #swift_store_auth_address = # DEPRECATED: The user to authenticate against the Swift authentication # service. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'user' in the Swift back-end configuration file is set instead. #swift_store_user = # DEPRECATED: Auth key for the user authenticating against the Swift # authentication service. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'key' in the Swift back-end configuration file is used # to set the authentication key instead. #swift_store_key = # # Absolute path to the file containing the swift account(s) # configurations. # # Include a string value representing the path to a configuration # file that has references for each of the configured Swift # account(s)/backing stores. By default, no file path is specified # and customized Swift referencing is disabled. Configuring this # option is highly recommended while using Swift storage backend for # image storage as it avoids storage of credentials in the database. # # NOTE: Please do not configure this option if you have set # ``swift_store_multi_tenant`` to ``True``. # # Possible values: # * String value representing an absolute path on the glance-api # node # # Related options: # * swift_store_multi_tenant # # (string value) #swift_store_config_file = # # Directory to buffer image segments before upload to Swift. # # Provide a string value representing the absolute path to the # directory on the glance node where image segments will be # buffered briefly before they are uploaded to swift. # # NOTES: # * This is required only when the configuration option # ``swift_buffer_on_upload`` is set to True. # * This directory should be provisioned keeping in mind the # ``swift_store_large_object_chunk_size`` and the maximum # number of images that could be uploaded simultaneously by # a given glance node. # # Possible values: # * String value representing an absolute directory path # # Related options: # * swift_buffer_on_upload # * swift_store_large_object_chunk_size # # (string value) #swift_upload_buffer_dir = # # Address of the ESX/ESXi or vCenter Server target system. # # This configuration option sets the address of the ESX/ESXi or vCenter # Server target system. This option is required when using the VMware # storage backend. The address can contain an IP address (127.0.0.1) or # a DNS name (www.my-domain.com). # # Possible Values: # * A valid IPv4 or IPv6 address # * A valid DNS name # # Related options: # * vmware_server_username # * vmware_server_password # # (host address value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #vmware_server_host = 127.0.0.1 # # Server username. # # This configuration option takes the username for authenticating with # the VMware ESX/ESXi or vCenter Server. This option is required when # using the VMware storage backend. 
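#
# Example of the basic VMware connection settings (illustrative only; the
# server address and credentials below are placeholders):
#
# vmware_server_host = vcenter.example.com
# vmware_server_username = glance-svc
# vmware_server_password = CHANGE_ME
#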
# # Possible Values: # * Any string that is the username for a user with appropriate # privileges # # Related options: # * vmware_server_host # * vmware_server_password # # (string value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #vmware_server_username = root # # Server password. # # This configuration option takes the password for authenticating with # the VMware ESX/ESXi or vCenter Server. This option is required when # using the VMware storage backend. # # Possible Values: # * Any string that is a password corresponding to the username # specified using the "vmware_server_username" option # # Related options: # * vmware_server_host # * vmware_server_username # # (string value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #vmware_server_password = vmware # # The number of VMware API retries. # # This configuration option specifies the number of times the VMware # ESX/VC server API must be retried upon connection related issues or # server API call overload. It is not possible to specify 'retry # forever'. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #vmware_api_retry_count = 10 # # Interval in seconds used for polling remote tasks invoked on VMware # ESX/VC server. # # This configuration option takes in the sleep time in seconds for polling an # on-going async task as part of the VMWare ESX/VC server API call. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #vmware_task_poll_interval = 5 # # The directory where the glance images will be stored in the datastore. # # This configuration option specifies the path to the directory where the # glance images will be stored in the VMware datastore. If this option # is not set, the default directory where the glance images are stored # is openstack_glance. # # Possible Values: # * Any string that is a valid path to a directory # # Related options: # * None # # (string value) #vmware_store_image_dir = /openstack_glance # # Set verification of the ESX/vCenter server certificate. # # This configuration option takes a boolean value to determine # whether or not to verify the ESX/vCenter server certificate. If this # option is set to True, the ESX/vCenter server certificate is not # verified. If this option is set to False, then the default CA # truststore is used for verification. # # This option is ignored if the "vmware_ca_file" option is set. In that # case, the ESX/vCenter server certificate will then be verified using # the file specified using the "vmware_ca_file" option . # # Possible Values: # * True # * False # # Related options: # * vmware_ca_file # # (boolean value) # Deprecated group/name - [glance_store]/vmware_api_insecure #vmware_insecure = false # # Absolute path to the CA bundle file. # # This configuration option enables the operator to use a custom # Cerificate Authority File to verify the ESX/vCenter certificate. # # If this option is set, the "vmware_insecure" option will be ignored # and the CA file specified will be used to authenticate the ESX/vCenter # server certificate and establish a secure connection to the server. 
# # Possible Values: # * Any string that is a valid absolute path to a CA file # # Related options: # * vmware_insecure # # (string value) # # This option has a sample default set, which means that # its actual default value may vary from the one documented # below. #vmware_ca_file = /etc/ssl/certs/ca-certificates.crt # # The datastores where the image can be stored. # # This configuration option specifies the datastores where the image can # be stored in the VMWare store backend. This option may be specified # multiple times for specifying multiple datastores. The datastore name # should be specified after its datacenter path, separated by ":". An # optional weight may be given after the datastore name, separated again # by ":" to specify the priority. Thus, the required format becomes # ::. # # When adding an image, the datastore with highest weight will be # selected, unless there is not enough free space available in cases # where the image size is already known. If no weight is given, it is # assumed to be zero and the directory will be considered for selection # last. If multiple datastores have the same weight, then the one with # the most free space available is selected. # # Possible Values: # * Any string of the format: # :: # # Related options: # * None # # (multi valued) #vmware_datastores = [healthcheck] # # From oslo.middleware # # DEPRECATED: The path to respond to healtcheck requests on. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. #path = /healthcheck # Show more detailed information as part of the response. Security note: # Enabling this option may expose sensitive details about the service being # monitored. Be sure to verify that it will not violate your security policies. # (boolean value) #detailed = false # Additional backends that can perform health checks and report that # information back as part of a request. (list value) #backends = # Check the presence of a file to determine if an application is running on a # port. Used by DisableByFileHealthcheck plugin. (string value) #disable_by_file_path = # Check the presence of a file based on a port to determine if an application # is running on a port. Expects a "port:path" list of strings. Used by # DisableByFilesPortsHealthcheck plugin. (list value) #disable_by_file_paths = [k8s_vim] # # From tacker.nfvo.drivers.vim.kubernetes_driver # # Use barbican to encrypt vim password if True, save vim credentials in local # file system if False (boolean value) #use_barbican = true [key_manager] # # From tacker.keymgr # # The full class name of the key manager API class (string value) #api_class = tacker.keymgr.barbican_key_manager.BarbicanKeyManager [keystone_authtoken] memcached_servers = localhost:11211 cafile = /opt/stack/data/ca-bundle.pem project_domain_name = Default project_name = service user_domain_name = Default password = devstack username = tacker auth_url = http://192.168.33.11/identity interface = public auth_type = password # # From keystonemiddleware.auth_token # # Complete "public" Identity API endpoint. This endpoint should not be an # "admin" endpoint, as it should be accessible by all end users. # Unauthenticated clients are redirected to this endpoint to authenticate. # Although this endpoint should ideally be unversioned, client support in the # wild varies. 
If you're using a versioned v2 endpoint here, then this should # *not* be the same endpoint the service user utilizes for validating tokens, # because normal end users may not be able to reach that endpoint. (string # value) # Deprecated group/name - [keystone_authtoken]/auth_uri #www_authenticate_uri = # DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not # be an "admin" endpoint, as it should be accessible by all end users. # Unauthenticated clients are redirected to this endpoint to authenticate. # Although this endpoint should ideally be unversioned, client support in the # wild varies. If you're using a versioned v2 endpoint here, then this should # *not* be the same endpoint the service user utilizes for validating tokens, # because normal end users may not be able to reach that endpoint. This option # is deprecated in favor of www_authenticate_uri and will be removed in the S # release. (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: The auth_uri option is deprecated in favor of www_authenticate_uri # and will be removed in the S release. #auth_uri = # API version of the Identity API endpoint. (string value) #auth_version = # Interface to use for the Identity API endpoint. Valid values are "public", # "internal" (default) or "admin". (string value) #interface = internal # Do not handle authorization requests within the middleware, but delegate the # authorization decision to downstream WSGI components. (boolean value) #delay_auth_decision = false # Request timeout value for communicating with Identity API server. (integer # value) #http_connect_timeout = # How many times are we trying to reconnect when communicating with Identity # API Server. (integer value) #http_request_max_retries = 3 # Request environment key where the Swift cache object is stored. When # auth_token middleware is deployed with a Swift cache, use this option to have # the middleware share a caching backend with swift. Otherwise, use the # ``memcached_servers`` option instead. (string value) #cache = # Required if identity server requires client certificate (string value) #certfile = # Required if identity server requires client certificate (string value) #keyfile = # A PEM encoded Certificate Authority to use when verifying HTTPs connections. # Defaults to system CAs. (string value) #cafile = # Verify HTTPS connections. (boolean value) #insecure = false # The region in which the identity server can be found. (string value) #region_name = # Optionally specify a list of memcached server(s) to use for caching. If left # undefined, tokens will instead be cached in-process. (list value) # Deprecated group/name - [keystone_authtoken]/memcache_servers #memcached_servers = # In order to prevent excessive effort spent validating tokens, the middleware # caches previously-seen tokens for a configurable duration (in seconds). Set # to -1 to disable caching completely. (integer value) #token_cache_time = 300 # (Optional) If defined, indicate whether token data should be authenticated or # authenticated and encrypted. If MAC, token data is authenticated (with HMAC) # in the cache. If ENCRYPT, token data is encrypted and authenticated in the # cache. If the value is not one of these options or empty, auth_token will # raise an exception on initialization. 
(string value) # Possible values: # None - # MAC - # ENCRYPT - #memcache_security_strategy = None # (Optional, mandatory if memcache_security_strategy is defined) This string is # used for key derivation. (string value) #memcache_secret_key = # (Optional) Number of seconds memcached server is considered dead before it is # tried again. (integer value) #memcache_pool_dead_retry = 300 # (Optional) Maximum total number of open connections to every memcached # server. (integer value) #memcache_pool_maxsize = 10 # (Optional) Socket timeout in seconds for communicating with a memcached # server. (integer value) #memcache_pool_socket_timeout = 3 # (Optional) Number of seconds a connection to memcached is held unused in the # pool before it is closed. (integer value) #memcache_pool_unused_timeout = 60 # (Optional) Number of seconds that an operation will wait to get a memcached # client connection from the pool. (integer value) #memcache_pool_conn_get_timeout = 10 # (Optional) Use the advanced (eventlet safe) memcached client pool. (boolean # value) #memcache_use_advanced_pool = true # (Optional) Indicate whether to set the X-Service-Catalog header. If False, # middleware will not ask for service catalog on token validation and will not # set the X-Service-Catalog header. (boolean value) #include_service_catalog = true # Used to control the use and type of token binding. Can be set to: "disabled" # to not check token binding. "permissive" (default) to validate binding # information if the bind type is of a form known to the server and ignore it # if not. "strict" like "permissive" but if the bind type is unknown the token # will be rejected. "required" any form of token binding is needed to be # allowed. Finally the name of a binding method that must be present in tokens. # (string value) #enforce_token_bind = permissive # A choice of roles that must be present in a service token. Service tokens are # allowed to request that an expired token can be used and so this check should # tightly control that only actual services should be sending this token. Roles # here are applied as an ANY check so any role in this list must be present. # For backwards compatibility reasons this currently only affects the # allow_expired check. (list value) #service_token_roles = service # For backwards compatibility reasons we must let valid service tokens pass # that don't pass the service_token_roles check as valid. Setting this true # will become the default in a future release and should be enabled if # possible. (boolean value) #service_token_roles_required = false # The name or type of the service as it appears in the service catalog. This is # used to validate tokens that have restricted access rules. 
(string value) #service_type = # Authentication type to load (string value) # Deprecated group/name - [keystone_authtoken]/auth_plugin #auth_type = # Config Section from which to load plugin specific options (string value) #auth_section = [kubernetes_vim] # # From tacker.vnfm.infra_drivers.kubernetes.kubernetes_driver # # Number of attempts to retry for stack creation/deletion (integer value) #stack_retries = 100 # Wait time (in seconds) between consecutive stack create/delete retries # (integer value) #stack_retry_wait = 5 [monitor] # # From tacker.vnfm.monitor # # check interval for monitor (integer value) #check_intvl = 10 [monitor_http_ping] # # From tacker.vnfm.monitor_drivers.http_ping.http_ping # # Number of times to retry (integer value) #retry = 5 # Number of seconds to wait for a response (integer value) #timeout = 1 # HTTP port number to send request (integer value) #port = 80 [monitor_ping] # # From tacker.vnfm.monitor_drivers.ping.ping # # Number of ICMP packets to send (integer value) #count = 5 # Number of seconds to wait for a response (floating point value) #timeout = 5 # Number of seconds to wait between packets (floating point value) #interval = 1 # Number of ping retries (integer value) #retry = 1 [nfvo_vim] # # From tacker.nfvo.nfvo_plugin # # VIM driver for launching VNFs (list value) #vim_drivers = openstack,kubernetes # Interval to check for VIM health (integer value) #monitor_interval = 30 [openstack_vim] # # From tacker.vnfm.infra_drivers.openstack.openstack # # Number of attempts to retry for stack creation/deletion (integer value) #stack_retries = 60 # Wait time (in seconds) between consecutive stack create/delete retries # (integer value) #stack_retry_wait = 10 [openwrt] # # From tacker.vnfm.mgmt_drivers.openwrt.openwrt # # User name to login openwrt (string value) #user = root # Password to login openwrt (string value) #password = [oslo_messaging_amqp] # # From oslo.messaging # # Name for the AMQP container. must be globally unique. Defaults to a generated # UUID (string value) #container_name = # Timeout for inactive connections (in seconds) (integer value) #idle_timeout = 0 # Debug: dump AMQP frames to stdout (boolean value) #trace = false # Attempt to connect via SSL. If no other ssl-related parameters are given, it # will use the system's CA-bundle to verify the server's certificate. (boolean # value) #ssl = false # CA certificate PEM file used to verify the server's certificate (string # value) #ssl_ca_file = # Self-identifying certificate PEM file for client authentication (string # value) #ssl_cert_file = # Private key PEM file used to sign ssl_cert_file certificate (optional) # (string value) #ssl_key_file = # Password for decrypting ssl_key_file (if encrypted) (string value) #ssl_key_password = # By default SSL checks that the name in the server's certificate matches the # hostname in the transport_url. In some configurations it may be preferable to # use the virtual hostname instead, for example if the server uses the Server # Name Indication TLS extension (rfc6066) to provide a certificate per virtual # host. Set ssl_verify_vhost to True if the server's SSL certificate uses the # virtual host name instead of the DNS name. 
(boolean value) #ssl_verify_vhost = false # Space separated list of acceptable SASL mechanisms (string value) #sasl_mechanisms = # Path to directory that contains the SASL configuration (string value) #sasl_config_dir = # Name of configuration file (without .conf suffix) (string value) #sasl_config_name = # SASL realm to use if no realm present in username (string value) #sasl_default_realm = # Seconds to pause before attempting to re-connect. (integer value) # Minimum value: 1 #connection_retry_interval = 1 # Increase the connection_retry_interval by this many seconds after each # unsuccessful failover attempt. (integer value) # Minimum value: 0 #connection_retry_backoff = 2 # Maximum limit for connection_retry_interval + connection_retry_backoff # (integer value) # Minimum value: 1 #connection_retry_interval_max = 30 # Time to pause between re-connecting an AMQP 1.0 link that failed due to a # recoverable error. (integer value) # Minimum value: 1 #link_retry_delay = 10 # The maximum number of attempts to re-send a reply message which failed due to # a recoverable error. (integer value) # Minimum value: -1 #default_reply_retry = 0 # The deadline for an rpc reply message delivery. (integer value) # Minimum value: 5 #default_reply_timeout = 30 # The deadline for an rpc cast or call message delivery. Only used when caller # does not provide a timeout expiry. (integer value) # Minimum value: 5 #default_send_timeout = 30 # The deadline for a sent notification message delivery. Only used when caller # does not provide a timeout expiry. (integer value) # Minimum value: 5 #default_notify_timeout = 30 # The duration to schedule a purge of idle sender links. Detach link after # expiry. (integer value) # Minimum value: 1 #default_sender_link_timeout = 600 # Indicates the addressing mode used by the driver. # Permitted values: # 'legacy' - use legacy non-routable addressing # 'routable' - use routable addresses # 'dynamic' - use legacy addresses if the message bus does not support routing # otherwise use routable addressing (string value) #addressing_mode = dynamic # Enable virtual host support for those message buses that do not natively # support virtual hosting (such as qpidd). When set to true the virtual host # name will be added to all message bus addresses, effectively creating a # private 'subnet' per virtual host. Set to False if the message bus supports # virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative # as the name of the virtual host. (boolean value) #pseudo_vhost = true # address prefix used when sending to a specific server (string value) #server_request_prefix = exclusive # address prefix used when broadcasting to all servers (string value) #broadcast_prefix = broadcast # address prefix when sending to any server in group (string value) #group_request_prefix = unicast # Address prefix for all generated RPC addresses (string value) #rpc_address_prefix = openstack.org/om/rpc # Address prefix for all generated Notification addresses (string value) #notify_address_prefix = openstack.org/om/notify # Appended to the address prefix when sending a fanout message. Used by the # message bus to identify fanout messages. (string value) #multicast_address = multicast # Appended to the address prefix when sending to a particular RPC/Notification # server. Used by the message bus to identify messages sent to a single # destination. (string value) #unicast_address = unicast # Appended to the address prefix when sending to a group of consumers. 
Used by # the message bus to identify messages that should be delivered in a round- # robin fashion across consumers. (string value) #anycast_address = anycast # Exchange name used in notification addresses. # Exchange name resolution precedence: # Target.exchange if set # else default_notification_exchange if set # else control_exchange if set # else 'notify' (string value) #default_notification_exchange = # Exchange name used in RPC addresses. # Exchange name resolution precedence: # Target.exchange if set # else default_rpc_exchange if set # else control_exchange if set # else 'rpc' (string value) #default_rpc_exchange = # Window size for incoming RPC Reply messages. (integer value) # Minimum value: 1 #reply_link_credit = 200 # Window size for incoming RPC Request messages (integer value) # Minimum value: 1 #rpc_server_credit = 100 # Window size for incoming Notification messages (integer value) # Minimum value: 1 #notify_server_credit = 100 # Send messages of this type pre-settled. # Pre-settled messages will not receive acknowledgement # from the peer. Note well: pre-settled messages may be # silently discarded if the delivery fails. # Permitted values: # 'rpc-call' - send RPC Calls pre-settled # 'rpc-reply'- send RPC Replies pre-settled # 'rpc-cast' - Send RPC Casts pre-settled # 'notify' - Send Notifications pre-settled # (multi valued) #pre_settled = rpc-cast #pre_settled = rpc-reply [oslo_messaging_kafka] # # From oslo.messaging # # Max fetch bytes of Kafka consumer (integer value) #kafka_max_fetch_bytes = 1048576 # Default timeout(s) for Kafka consumers (floating point value) #kafka_consumer_timeout = 1.0 # DEPRECATED: Pool Size for Kafka Consumers (integer value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Driver no longer uses connection pool. #pool_size = 10 # DEPRECATED: The pool size limit for connections expiration policy (integer # value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Driver no longer uses connection pool. #conn_pool_min_size = 2 # DEPRECATED: The time-to-live in sec of idle connections in the pool (integer # value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Driver no longer uses connection pool. #conn_pool_ttl = 1200 # Group id for Kafka consumer. Consumers in one group will coordinate message # consumption (string value) #consumer_group = oslo_messaging_consumer # Upper bound on the delay for KafkaProducer batching in seconds (floating # point value) #producer_batch_timeout = 0.0 # Size of batch for the producer async send (integer value) #producer_batch_size = 16384 # The compression codec for all data generated by the producer. If not set, # compression will not be used. 
Note that the allowed values of this depend on # the kafka version (string value) # Possible values: # none - # gzip - # snappy - # lz4 - # zstd - #compression_codec = none # Enable asynchronous consumer commits (boolean value) #enable_auto_commit = false # The maximum number of records returned in a poll call (integer value) #max_poll_records = 500 # Protocol used to communicate with brokers (string value) # Possible values: # PLAINTEXT - # SASL_PLAINTEXT - # SSL - # SASL_SSL - #security_protocol = PLAINTEXT # Mechanism when security protocol is SASL (string value) #sasl_mechanism = PLAIN # CA certificate PEM file used to verify the server certificate (string value) #ssl_cafile = # Client certificate PEM file used for authentication. (string value) #ssl_client_cert_file = # Client key PEM file used for authentication. (string value) #ssl_client_key_file = # Client key password file used for authentication. (string value) #ssl_client_key_password = [oslo_messaging_notifications] # # From oslo.messaging # # The Drivers(s) to handle sending notifications. Possible values are # messaging, messagingv2, routing, log, test, noop (multi valued) # Deprecated group/name - [DEFAULT]/notification_driver #driver = # A URL representing the messaging driver to use for notifications. If not set, # we fall back to the same configuration used for RPC. (string value) # Deprecated group/name - [DEFAULT]/notification_transport_url #transport_url = # AMQP topic used for OpenStack notifications. (list value) # Deprecated group/name - [rpc_notifier2]/topics # Deprecated group/name - [DEFAULT]/notification_topics #topics = notifications # The maximum number of attempts to re-send a notification message which failed # to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite # (integer value) #retry = -1 [oslo_messaging_rabbit] # # From oslo.messaging # # Use durable queues in AMQP. (boolean value) #amqp_durable_queues = false # Auto-delete queues in AMQP. (boolean value) #amqp_auto_delete = false # Connect over SSL. (boolean value) # Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl #ssl = false # SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and # SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some # distributions. (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version #ssl_version = # SSL key file (valid only if SSL enabled). (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile #ssl_key_file = # SSL cert file (valid only if SSL enabled). (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile #ssl_cert_file = # SSL certification authority file (valid only if SSL enabled). (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs #ssl_ca_file = # DEPRECATED: Run the health check heartbeat thread through a native python # thread by default. If this option is equal to False then the health check # heartbeat will inherit the execution model from the parent process. For # example if the parent process has monkey patched the stdlib by using # eventlet/greenlet then the heartbeat will be run through a green thread. # (boolean value) # This option is deprecated for removal. # Its value may be silently ignored in the future. #heartbeat_in_pthread = true # How long to wait before reconnecting in response to an AMQP consumer cancel # notification. 
(floating point value) #kombu_reconnect_delay = 1.0 # EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not # be used. This option may not be available in future versions. (string value) #kombu_compression = # How long to wait a missing client before abandoning to send it its replies. # This value should not be longer than rpc_response_timeout. (integer value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout #kombu_missing_consumer_retry_timeout = 60 # Determines how the next RabbitMQ node is chosen in case the one we are # currently connected to becomes unavailable. Takes effect only if more than # one RabbitMQ node is provided in config. (string value) # Possible values: # round-robin - # shuffle - #kombu_failover_strategy = round-robin # The RabbitMQ login method. (string value) # Possible values: # PLAIN - # AMQPLAIN - # RABBIT-CR-DEMO - #rabbit_login_method = AMQPLAIN # How frequently to retry connecting with RabbitMQ. (integer value) #rabbit_retry_interval = 1 # How long to backoff for between retries when connecting to RabbitMQ. (integer # value) #rabbit_retry_backoff = 2 # Maximum interval of RabbitMQ connection retries. Default is 30 seconds. # (integer value) #rabbit_interval_max = 30 # Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this # option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring # is no longer controlled by the x-ha-policy argument when declaring a queue. # If you just want to make sure that all queues (except those with auto- # generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy # HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value) #rabbit_ha_queues = false # Positive integer representing duration in seconds for queue TTL (x-expires). # Queues which are unused for the duration of the TTL are automatically # deleted. The parameter affects only reply and fanout queues. (integer value) # Minimum value: 1 #rabbit_transient_queues_ttl = 1800 # Specifies the number of messages to prefetch. Setting to zero allows # unlimited messages. (integer value) #rabbit_qos_prefetch_count = 0 # Number of seconds after which the Rabbit broker is considered down if # heartbeat's keep-alive fails (0 disables heartbeat). (integer value) #heartbeat_timeout_threshold = 60 # How often times during the heartbeat_timeout_threshold we check the # heartbeat. (integer value) #heartbeat_rate = 2 # DEPRECATED: (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for # direct send. The direct send is used as reply, so the MessageUndeliverable # exception is raised in case the client queue does not # exist.MessageUndeliverable exception will be used to loop for a timeout to # lets a chance to sender to recover.This flag is deprecated and it will not be # possible to deactivate this functionality anymore (boolean value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Mandatory flag no longer deactivable. #direct_mandatory_flag = true # Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and # notify consumerswhen queue is down (boolean value) #enable_cancel_on_failover = false [oslo_middleware] # # From oslo.middleware # # The maximum body size for each request, in bytes. 
(integer value) # Deprecated group/name - [DEFAULT]/osapi_max_request_body_size # Deprecated group/name - [DEFAULT]/max_request_body_size #max_request_body_size = 114688 # DEPRECATED: The HTTP Header that will be used to determine what the original # request protocol scheme was, even if it was hidden by a SSL termination # proxy. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. #secure_proxy_ssl_header = X-Forwarded-Proto # Whether the application is behind a proxy or not. This determines if the # middleware should parse the headers or not. (boolean value) #enable_proxy_headers_parsing = false [oslo_policy] # # From oslo.policy # # This option controls whether or not to enforce scope when evaluating # policies. If ``True``, the scope of the token used in the request is compared # to the ``scope_types`` of the policy being enforced. If the scopes do not # match, an ``InvalidScope`` exception will be raised. If ``False``, a message # will be logged informing operators that policies are being invoked with # mismatching scope. (boolean value) #enforce_scope = false # This option controls whether or not to use old deprecated defaults when # evaluating policies. If ``True``, the old deprecated defaults are not going # to be evaluated. This means if any existing token is allowed for old defaults # but is disallowed for new defaults, it will be disallowed. It is encouraged # to enable this flag along with the ``enforce_scope`` flag so that you can get # the benefits of new defaults and ``scope_type`` together (boolean value) #enforce_new_defaults = false # The relative or absolute path of a file that maps roles to permissions for a # given service. Relative paths must be specified in relation to the # configuration file setting this option. (string value) #policy_file = policy.yaml # Default rule. Enforced when a requested rule is not found. (string value) #policy_default_rule = default # Directories where policy configuration files are stored. They can be relative # to any directory in the search path defined by the config_dir option, or # absolute paths. The file defined by policy_file must exist for these # directories to be searched. Missing or empty directories are ignored. 
(multi # valued) #policy_dirs = policy.d # Content Type to send and receive data for REST based policy check (string # value) # Possible values: # application/x-www-form-urlencoded - # application/json - #remote_content_type = application/x-www-form-urlencoded # server identity verification for REST based policy check (boolean value) #remote_ssl_verify_server_crt = false # Absolute path to ca cert file for REST based policy check (string value) #remote_ssl_ca_crt_file = # Absolute path to client cert for REST based policy check (string value) #remote_ssl_client_crt_file = # Absolute path client key file REST based policy check (string value) #remote_ssl_client_key_file = [tacker] # # From tacker.vnflcm.vnflcm_driver # # Hosting vnf drivers tacker plugin will use (list value) #vnflcm_infra_driver = openstack,kubernetes # MGMT driver to communicate with Hosting VNF/logical service instance tacker # plugin will use (list value) #vnflcm_mgmt_driver = vnflcm_noop # # From tacker.vnfm.monitor # # Monitor driver to communicate with Hosting VNF/logical service instance # tacker plugin will use (list value) #monitor_driver = ping,http_ping # Alarm monitoring driver to communicate with Hosting VNF/logical service # instance tacker plugin will use (list value) #alarm_monitor_driver = ceilometer # App monitoring driver to communicate with Hosting VNF/logical service # instance tacker plugin will use (list value) #app_monitor_driver = zabbix # # From tacker.vnfm.plugin # # MGMT driver to communicate with Hosting VNF/logical service instance tacker # plugin will use (list value) #mgmt_driver = noop,openwrt # Time interval to wait for VM to boot (integer value) #boot_wait = 30 # Hosting vnf drivers tacker plugin will use (list value) #infra_driver = noop,openstack,kubernetes # Hosting vnf drivers tacker plugin will use (list value) #policy_action = autoscaling,respawn,vdu_autoheal,log,log_and_kill [vim_keys] use_barbican = True # # From tacker.nfvo.drivers.vim.openstack_driver # # Dir.path to store fernet keys. (string value) #openstack = /etc/tacker/vim/fernet_keys # Use barbican to encrypt vim password if True, save vim credentials in local # file system if False (boolean value) #use_barbican = false [vim_monitor] # # From tacker.nfvo.drivers.vim.openstack_driver # # Number of ICMP packets to send (string value) #count = 1 # Number of seconds to wait for a response (string value) #timeout = 1 # Number of seconds to wait between packets (string value) #interval = 1 [vnf_lcm] # Vnflcm options group # # From tacker.conf # # endpoint_url (string value) #endpoint_url = http://localhost:9890/ # Number of subscriptions (integer value) #subscription_num = 100 # Number of retry (integer value) #retry_num = 3 # Retry interval(sec) (integer value) #retry_wait = 10 # Retry Timeout(sec) (integer value) #retry_timeout = 10 # Test callbackUri (boolean value) #test_callback_uri = true [vnf_package] vnf_package_csar_path = /opt/stack/data/tacker/vnfpackage # # Options under this group are used to store vnf packages in glance store. # # From tacker.conf # # Path to store extracted CSAR file (string value) #vnf_package_csar_path = /var/lib/tacker/vnfpackages/ # # Maximum size of CSAR file a user can upload in GB. # # An CSAR file upload greater than the size mentioned here would result # in an CSAR upload failure. This configuration option defaults to # 1024 GB (1 TiB). # # NOTES: # * This value should only be increased after careful # consideration and must be set less than or equal to # 8 EiB (~9223372036). 
#  * This value must be set with careful consideration of the
#    backend storage capacity. Setting this to a very low value
#    may result in a large number of image failures. And, setting
#    this to a very large value may result in faster consumption
#    of storage. Hence, this must be set according to the nature of
#    images created and storage capacity available.
#
# Possible values:
#  * Any positive number less than or equal to 9223372036854775808
#  (floating point value)
# Minimum value: 1e-06
# Maximum value: 9223372036
#csar_file_size_cap = 1024

#
# Secure hashing algorithm used for computing the 'hash' property.
#
# Possible values:
#  * sha256, sha512
#
# Related options:
#  * None
#  (string value)
#hashing_algorithm = sha512

# List of items to get from top-vnfd (list value)
#get_top_list = tosca_definitions_version,description,metadata

# Exclude node from node_template (list value)
#exclude_node = VNF

# List of types to get from lower-vnfd (list value)
#get_lower_list = tosca.nodes.nfv.VNF,tosca.nodes.nfv.VDU.Tacker

# List of del inputs from lower-vnfd (list value)
#del_input_list = descriptor_id,descriptor_versionprovider,product_name,software_version,vnfm_info,flavour_id,flavour_description

[agent]
root_helper = sudo /usr/local/bin/tacker-rootwrap /etc/tacker/rootwrap.conf

From cmccarth at mathworks.com Fri Jun 4 15:15:39 2021
From: cmccarth at mathworks.com (Christopher McCarthy)
Date: Fri, 4 Jun 2021 15:15:39 +0000
Subject: [ops] rabbitmq queues for nova versioned notifications queues keep filling up
In-Reply-To:
References:
Message-ID:

Hi Ajay,

We work around this by setting a TTL on our notifications queues via RabbitMQ policy definition. We include the following in our definitions.json for RabbitMQ:

"policies":[
  {"vhost": "/", "name": "notifications-ttl", "pattern": "^(notifications|versioned_notifications)\\.", "apply-to": "queues", "definition": {"message-ttl":600000}, "priority":0}
]

This expires messages in the notifications and versioned_notifications queues after 10 minutes, which seems to work well for us. I believe we initially picked up this workaround from this[1] bug report.

Hope this helps,

- Chris

--
Christopher McCarthy
MathWorks
cmccarth at mathworks.com

[1] https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1737170

Date: Wed, 2 Jun 2021 22:39:54 -0000
From: "Ajay Tikoo (BLOOMBERG/ 120 PARK)"
To: openstack-discuss at lists.openstack.org
Subject: [ops] rabbitmq queues for nova versioned notifications queues keep filling up
Message-ID: <60B808BA00D0068401D80001_0_3025859 at msclnypmsgsv04>
Content-Type: text/plain; charset="utf-8"

I am not sure if this is the right channel/format to post this question, so my apologies in advance if this is not the right place.

We are using Openstack Rocky. Watcher needs versioned notifications to be enabled. However after enabling versioned notifications, the queues for versioned_notifications (info and error) keep filling up

Based on the updates the the Watchers cluster data model, it appears that Watcher is consuming messages, but they still linger in these queues. So with nova versioned notifications disabled, Watcher is unable to update the cluster data model (between rebuild intervals), and with them enabled, it keeps filling up the MQ queues. What is the best way to resolve this?

Thank you,
Ajay Tikoo

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
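For reference, the TTL policy Chris describes can also be applied at runtime with rabbitmqctl rather than via definitions.json; a minimal sketch, assuming the queues live on the default "/" vhost as in the example above:

rabbitmqctl set_policy --apply-to queues notifications-ttl '^(notifications|versioned_notifications)\.' '{"message-ttl":600000}'

Either way, messages sitting in the notifications and versioned_notifications queues for more than 10 minutes are discarded instead of accumulating.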
From luke.camilleri at zylacomputing.com Fri Jun 4 15:16:18 2021
From: luke.camilleri at zylacomputing.com (Luke Camilleri)
Date: Fri, 4 Jun 2021 17:16:18 +0200
Subject: [Victoria][magnum][octavia]ingress-controller health degraded
Message-ID:

Hi Everyone, we have the following problem that we are trying to identify the main cause of:

Basically we have deployed an ingress and an ingress-controller (using the following deployment file https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml). The ingress-controller deployment is successful with 1 replica of the ingress-controller pod, and the Octavia LoadBalancer is successfully created and points to the NodePorts being published on each node. This was showing only 1 member in the LoadBalancers screen as healthy/online.

I increased the replicas to 3. From the LoadBalancers screen in Horizon, I can see the service being reported as degraded, and only the kubernetes worker nodes that have the ingress-controller pod/s deployed on them are being reported as online. This behaviour is not the same as a standard deployment, where the NodePort actually communicates with the ClusterIP:Port of the internal service and hence, once there is a single pod UP, the NodePorts are shown as up when queried:

ingress-nginx-controller-74fd5565fb-d86h9   1/1     Running 0          14h     10.100.3.13 k8s-c1-prod-2-klctfd24lze6-node-1
ingress-nginx-controller-74fd5565fb-h9985   1/1     Running 0          15h     10.100.1.8 k8s-c1-prod-2-klctfd24lze6-node-0
ingress-nginx-controller-74fd5565fb-qkddq   1/1     Running 0          15h     10.100.1.7 k8s-c1-prod-2-klctfd24lze6-node-0

The below shows the status of the members in the pool at replica 3:

| 834750fe-e43e-408d-abc3-aad3dcde0fdb | member_0_node-0 | id | ACTIVE              | 192.168.1.75  |         32054 | ONLINE           |      1 |
| 1ddffd80-acae-40b3-a2de-19be0a69a039 | member_0_node-2 | id | ACTIVE              | 192.168.1.90  |         32054 | ERROR            |      1 |
| d4e4baa4-0a69-4775-8ea0-165a207f11ae | member_0_node-1 | id | ACTIVE              | 192.168.1.148 |         32054 | ONLINE           |      1 |

In fact, to have the deployment spread across all 3 nodes, I had to increase the replicas until all 3 nodes had at least an instance of the ingress controller running on them (in this case it was replica 5).
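One detail worth checking here (an illustrative aside, not part of the original report): the cloud-provider variant of that deploy.yaml normally creates the controller Service with externalTrafficPolicy: Local, and with that policy a node only answers its NodePort (and therefore the Octavia health check) when an ingress-controller pod is running locally. Something like the following, assuming the ingress-nginx namespace and service name that the manifest usually creates, shows which policy is in effect:

kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.externalTrafficPolicy}'

If it prints Local, the member status described below is expected; running the controller on every node (e.g. as a DaemonSet) or switching the policy to Cluster changes the behaviour, at the cost of losing the client source IP in the latter case.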
I do not believe this is an Octavia issue, as the health check is done via a TCP port number which is the NodePort exposed by Kubernetes, and if the ingress-controller is not running on that node the port check fails. I added the octavia tag mainly to get some input that may confirm the correct behaviour of Octavia.

I am expecting to receive a healthy state when I check the members of the pool, since I can query the ClusterIP from any worker node on ports 80 and 443 and the outcome is always successful, but not when using the NodePort.

Thanks in advance

From marios at redhat.com Fri Jun 4 15:19:10 2021
From: marios at redhat.com (Marios Andreou)
Date: Fri, 4 Jun 2021 18:19:10 +0300
Subject: [TripleO] next irc meeting Tuesday 08 June @ 1400 UTC in OFTC #tripleo
Message-ID:

Reminder that the next TripleO irc meeting is:

** Tuesday 08 June 1400 UTC in OFTC irc channel: #tripleo **
** https://wiki.openstack.org/wiki/Meetings/TripleO **
** https://etherpad.opendev.org/p/tripleo-meeting-items **

Add anything you want to highlight at https://etherpad.opendev.org/p/tripleo-meeting-items

This can be recently completed things, ongoing review requests, blocking issues, or anything else tripleo you want to share.

Our last meeting was on May 25 - you can find logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-05-25-14.00.html

Hope you can make it on Tuesday,

regards, marios

From derekokeeffe85 at yahoo.ie Fri Jun 4 15:20:19 2021
From: derekokeeffe85 at yahoo.ie (Derek O keeffe)
Date: Fri, 4 Jun 2021 15:20:19 +0000 (UTC)
Subject: [novnc-console] Cannot connect to console
References: <408400332.2018688.1622820019304.ref@mail.yahoo.com>
Message-ID: <408400332.2018688.1622820019304@mail.yahoo.com>

Hi all,

This is my first post to this list so excuse me if I have not submitted correctly.

I have installed openstack Victoria manually as a multi node setup: a controller & 3 computes. Everything works fine and the way it's expected. I have secured horizon with letsencrypt certs (for now) and again all is fine. When I did a test deploy I also used those certs to load the novnc console securely and it worked.

My problem with my new deploy is that the console will not load no matter what I try. I get the following error when I enable debug mode in nova.

2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy Traceback (most recent call last):
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy   File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 691, in top_new_client
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy     client = self.do_handshake(startsock, address)
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy   File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 578, in do_handshake
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy     context.load_cert_chain(certfile=self.cert, keyfile=self.key, password=self.key_password)
2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy PermissionError: [Errno 13] Permission denied

If I don't have debug enabled I just get the permission denied error. I have switched to the nova user and confirmed I can access the certs directory and read the certs. All my nova services are running fine as well.
My controller conf is the following:

[default]
ssl_only=true
cert=/etc/letsencrypt/live/ /fullchain.pem
key=/etc/letsencrypt/live/ /privkey.pem

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = https://:6080/vnc_auto.html

My compute config is the following:

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = https://:6080/vnc_auto.html

If anyone could help that would be really appreciated or any advice to further troubleshoot!! I cannot see anything else in any logs but I might not be looking in the right place. Thank you in advance.

Derek

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mark at stackhpc.com Fri Jun 4 15:21:52 2021
From: mark at stackhpc.com (Mark Goddard)
Date: Fri, 4 Jun 2021 16:21:52 +0100
Subject: [kolla-ansible] kolla-ansible destroy
In-Reply-To: <476495C0-A42E-4B74-AF46-13FF814C974B@poczta.onet.pl>
References: <476495C0-A42E-4B74-AF46-13FF814C974B@poczta.onet.pl>
Message-ID:

On Fri, 4 Jun 2021 at 14:54, at wrote:
>
> Hi
> is kolla-ansible destroy "--tags" aware? What is the best way to remove all unwanted containers, configuration files, logs, etc. when you want to remove some service or move it to another node?
> Regards
> Adam Tomas

Hi Adam,

Currently it is not aware of tags, and will remove all services. We have talked about improving it in the past, but it needs someone to work on it.

Thanks,
Mark

>
>
>

From bkslash at poczta.onet.pl Fri Jun 4 15:31:40 2021
From: bkslash at poczta.onet.pl (at)
Date: Fri, 4 Jun 2021 17:31:40 +0200
Subject: [kolla-ansible] kolla-ansible destroy
In-Reply-To:
References:
Message-ID: <0DEAC90A-9F1A-4910-AA6A-02A36E3B55DD@poczta.onet.pl>

Hi Mark, thank you for the answer. So what is the "cleanest" way to remove some service? For example I've moved gnocchi and ceilometer from controllers to dedicated nodes but there's a lot of "leftovers" on controllers - it won't be easy to find every one...

Best regards
Adam Tomas

P.S.
kolla-ansible is the best Openstack deployment method anyway :) > > > Wiadomość napisana przez Mark Goddard w dniu 04.06.2021, o godz. 17:21: > > > > On Fri, 4 Jun 2021 at 14:54, at wrote: > >> > >> Hi > >> is kolla-ansible destroy "--tags" aware? What is the best way to remove all unwanted containers, configuration files, logs, etc. when you want to remove some service or move it to another node? > >> Regards > >> Adam Tomas > > > > Hi Adam, > > > > Currently it is not aware of tags, and will remove all services. We > > have talked about improving it in the past, but it needs someone to > > work on it. > > > > Thanks, > > Mark > > > >> > >> > >> > From mark at stackhpc.com Fri Jun 4 15:47:07 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 4 Jun 2021 16:47:07 +0100 Subject: [kolla-ansible] kolla-ansible destroy In-Reply-To: References: <0DEAC90A-9F1A-4910-AA6A-02A36E3B55DD@poczta.onet.pl> Message-ID: On Fri, 4 Jun 2021 at 16:46, Mark Goddard wrote: > > On Fri, 4 Jun 2021 at 16:31, at wrote: > > > > Hi Mark, thank you for the answer. So what is the "cleanest" way to remove some service? For example I've moved gnocchi and ceilometer from controllers to dedicated nodes but there's a lot of "leftovers" on controllers - it won't be easy to find every one... > Removing the containers will at least stop it from running, but you > may also want to remove users & endpoints from keystone, remove > container configuration from /etc/kolla/, and potentially > other service-specific stuff. See L422 https://etherpad.opendev.org/p/kolla-xena-ptg > > Best regards > > Adam Tomas > > P.S. kolla-ansible is the best Openstack deployment method anyway :) > > > > > Wiadomość napisana przez Mark Goddard w dniu 04.06.2021, o godz. 17:21: > > > > > > On Fri, 4 Jun 2021 at 14:54, at wrote: > > >> > > >> Hi > > >> is kolla-ansible destroy "--tags" aware? What is the best way to remove all unwanted containers, configuration files, logs, etc. when you want to remove some service or move it to another node? > > >> Regards > > >> Adam Tomas > > > > > > Hi Adam, > > > > > > Currently it is not aware of tags, and will remove all services. We > > > have talked about improving it in the past, but it needs someone to > > > work on it. > > > > > > Thanks, > > > Mark > > > > > >> > > >> > > >> > > From DHilsbos at performair.com Fri Jun 4 15:54:34 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 4 Jun 2021 15:54:34 +0000 Subject: [ops] Windows Guest Resolution Message-ID: <0670B960225633449A24709C291A5252511AAA5E@COM01.performair.local> All; We finally have reliable means to generate Windows images for our OpenStack, but we're running into a minor annoyance. Our Windows instances appear to have a resolution cap of 1024x768. It would be extremely useful to be able use resolutions higher than this, especially 1920x1080. Is this possible with OpenStack on KVM? As a second request; is there a way to add a second virtual monitor? Or to achieve the same thing, increase the resolution to 3840x1080? Thank you, Dominic L. Hilsbos, MBA Vice President - Information Technology Perform Air International Inc. 
DHilsbos at PerformAir.com www.PerformAir.com From luke.camilleri at zylacomputing.com Fri Jun 4 17:01:49 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Fri, 4 Jun 2021 19:01:49 +0200 Subject: [ops] Windows Guest Resolution In-Reply-To: <0670B960225633449A24709C291A5252511AAA5E@COM01.performair.local> References: <0670B960225633449A24709C291A5252511AAA5E@COM01.performair.local> Message-ID: <0a9c4759-8bd7-b1c2-6ca4-b15225f6413a@zylacomputing.com> I believe you need the guest drivers https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers Right now the instance does not seem to have a windows driver for the video hardware and hence will use a generic video driver On 04/06/2021 17:54, DHilsbos at performair.com wrote: > All; > > We finally have reliable means to generate Windows images for our OpenStack, but we're running into a minor annoyance. Our Windows instances appear to have a resolution cap of 1024x768. It would be extremely useful to be able use resolutions higher than this, especially 1920x1080. Is this possible with OpenStack on KVM? > > As a second request; is there a way to add a second virtual monitor? Or to achieve the same thing, increase the resolution to 3840x1080? > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From Arkady.Kanevsky at dell.com Fri Jun 4 17:21:58 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 4 Jun 2021 17:21:58 +0000 Subject: [Interop] draft of presentation to the board Message-ID: https://docs.google.com/presentation/d/1-9H1cTXZxW0vCSTzfBe0aMKbd7nggd8SOHQOT987nFs/ Comments welcome. Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Fri Jun 4 18:04:59 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Fri, 4 Jun 2021 23:04:59 +0500 Subject: [ops] Windows Guest Resolution In-Reply-To: <0670B960225633449A24709C291A5252511AAA5E@COM01.performair.local> References: <0670B960225633449A24709C291A5252511AAA5E@COM01.performair.local> Message-ID: Hi, Please try to install latest drivers from below links. https://www.linuxsysadmins.com/create-windows-server-image-for-openstack/ https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.190-1/ Ammad On Fri, Jun 4, 2021 at 8:59 PM wrote: > All; > > We finally have reliable means to generate Windows images for our > OpenStack, but we're running into a minor annoyance. Our Windows instances > appear to have a resolution cap of 1024x768. It would be extremely useful > to be able use resolutions higher than this, especially 1920x1080. Is this > possible with OpenStack on KVM? > > As a second request; is there a way to add a second virtual monitor? Or > to achieve the same thing, increase the resolution to 3840x1080? > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... 
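A related knob on the OpenStack side is the emulated video model. A rough sketch (hedged: "windows2019" is a placeholder image name, and the guest still needs the matching virtio-win display driver from the links above):

openstack image set --property hw_video_model=virtio windows2019

With the virtio (or qxl) video model plus the corresponding guest driver, resolutions such as 1920x1080 are normally selectable; true multi-monitor output is harder to get through the default noVNC console, so the 3840x1080 single-wide-screen route may be the more practical of the two requests.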
URL: From mnaser at vexxhost.com Fri Jun 4 18:53:36 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 4 Jun 2021 14:53:36 -0400 Subject: [novnc-console] Cannot connect to console In-Reply-To: <408400332.2018688.1622820019304@mail.yahoo.com> References: <408400332.2018688.1622820019304.ref@mail.yahoo.com> <408400332.2018688.1622820019304@mail.yahoo.com> Message-ID: Hi Derek, What's the permissions of the letsencrypt cert files and the user that Nova is running on? sudo -u nova stat /etc/letsencrypt/live/ /fullchain.pem Will probably fail, so you might wanna fix that! M On Fri, Jun 4, 2021 at 11:23 AM Derek O keeffe wrote: > > Hi all, > > This is my first post to this list so excuse me if I have not submitted correctly. > > I have installed openstack Victoria manually as a multi node setup. A controller & 3 computes. Everything works fine and the way it's expected. I have secured horizon with letsencrypt certs (for now) and again all is fine. When I did a test deploy I also used those certs to load the novnc console securely and it worked. > > My problem with my new deploy is that the console will not load no matter what I try. I get the following error when I enable debug mode in nova. > > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy Traceback (most recent call last): > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 691, in top_new_client > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy client = self.do_handshake(startsock, address) > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 578, in do_handshake > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy context.load_cert_chain(certfile=self.cert, keyfile=self.key, password=self.key_password) > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy PermissionError: [Errno 13] Permission denied > > If I don't have debug enabled I just get the permission denied error. I have switched to the nova user and confirmed I can access the certs directory and read the certs. All my nova services are running fine as well. > > My controller conf is the following: > [default] > ssl_only=true > cert=/etc/letsencrypt/live/ /fullchain.pem > key=/etc/letsencrypt/live/ /privkey.pem > > [vnc] > enabled = true > server_listen = 0.0.0.0 > server_proxyclient_address = $my_ip > novncproxy_base_url = https://:6080/vnc_auto.html > > My compute config is the following: > [vnc] > enabled = true > server_listen = 0.0.0.0 > server_proxyclient_address = $my_ip > novncproxy_base_url = https://:6080/vnc_auto.html > > > If anyone could help that would be really appreciated or any advice to further troubleshoot!! I cannot see anything else in any logs but I might not be looking in the right place. Thank you in advance. > > Derek > > > -- Mohammed Naser VEXXHOST, Inc. From melwittt at gmail.com Fri Jun 4 19:04:04 2021 From: melwittt at gmail.com (melanie witt) Date: Fri, 4 Jun 2021 12:04:04 -0700 Subject: [nova] stable branches nova-grenade-multinode job broken Message-ID: Hi all, FYI the nova-grenade-multinode CI job is known to be broken on stable branches at the moment due to a too new version of Ceph (Pacific) being installed that is incompatible with the older jobs. 
We have fixes proposed with the following patch (and its backports to victoria/ussuri/train) to convert the job to native Zuul v3:

https://review.opendev.org/c/openstack/nova/+/794345

Once these patches merge, the CI should be passing again.

Cheers,
-melanie

From derekokeeffe85 at yahoo.ie Fri Jun 4 19:32:12 2021
From: derekokeeffe85 at yahoo.ie (Derek O keeffe)
Date: Fri, 4 Jun 2021 19:32:12 +0000 (UTC)
Subject: Re: [novnc-console] Cannot connect to console
In-Reply-To:
References: <408400332.2018688.1622820019304.ref@mail.yahoo.com> <408400332.2018688.1622820019304@mail.yahoo.com>
Message-ID: <1859406315.1820662.1622835132691@mail.yahoo.com>

Hi Mohammad,

Thank you for the reply. Below is the output of the command you sent:

sudo -u nova stat /etc/letsencrypt/live//fullchain.pem
  File: /etc/letsencrypt/live//fullchain.pem
  Size: 5616      Blocks: 16         IO Block: 4096   regular file
Device: 802h/2050d Inode: 7340138     Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-06-04 15:47:48.544545426 +0100
Modify: 2021-06-03 11:50:26.410071017 +0100
Change: 2021-06-03 11:52:39.870554481 +0100
 Birth: -

The permissions on the live directory are:

ls -al /etc/letsencrypt/live/
total 16
drwx--x--x 3 root root 4096 Jun  3 11:53 .
drwxr-xr-x 9 root root 4096 Jun  3 11:50 ..
-rw-r--r-- 1 root root  740 Jun  3 11:50 README
drwxr-xr-x 2 root root 4096 Jun  3 11:50

I changed the owner and group to nova as a test to see if that was the issue but it still didn't work. The first error I had was, as you say, a permissions issue on the live directory, and as nova (su nova -s /bin/bash) I couldn't access that directory, so I changed the permissions and tested it as the nova user (cd /etc/letsencrypt/live & cat fullchain.pem) and I could read the files in there. I then had the error I sent in the original email. The funny thing is I had a test deploy and it all worked fine, but when I redeployed it on new machines with the same OS (ubuntu 20.04) it won't work for me.

Regards, Derek

On Friday 4 June 2021, 19:59:31 IST, Mohammed Naser wrote:

Hi Derek,

What's the permissions of the letsencrypt cert files and the user that Nova is running on?

sudo -u nova stat /etc/letsencrypt/live/ /fullchain.pem

Will probably fail, so you might wanna fix that!

M

On Fri, Jun 4, 2021 at 11:23 AM Derek O keeffe wrote:
> > Hi all,
> > This is my first post to this list so excuse me if I have not submitted correctly.
> > I have installed openstack Victoria manually as a multi node setup. A controller & 3 computes. Everything works fine and the way it's expected. I have secured horizon with letsencrypt certs (for now) and again all is fine. When I did a test deploy I also used those certs to load the novnc console securely and it worked.
> > My problem with my new deploy is that the console will not load no matter what I try. I get the following error when I enable debug mode in nova.
> > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy Traceback (most recent call last): > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy  File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 691, in top_new_client > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy    client = self.do_handshake(startsock, address) > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy  File "/usr/lib/python3/dist-packages/websockify/websockifyserver.py", line 578, in do_handshake > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy    context.load_cert_chain(certfile=self.cert, keyfile=self.key, password=self.key_password) > 2021-06-04 15:54:11.004 356545 ERROR nova.console.websocketproxy PermissionError: [Errno 13] Permission denied > > If I don't have debug enabled I just get the permission denied error. I have switched to the nova user and confirmed I can access the certs directory and read the certs. All my nova services are running fine as well. > > My controller conf is the following: > [default] > ssl_only=true > cert=/etc/letsencrypt/live/ /fullchain.pem > key=/etc/letsencrypt/live/ /privkey.pem > > [vnc] > enabled = true > server_listen = 0.0.0.0 > server_proxyclient_address = $my_ip > novncproxy_base_url = https://:6080/vnc_auto.html > > My compute config is the following: > [vnc] > enabled = true > server_listen = 0.0.0.0 > server_proxyclient_address = $my_ip > novncproxy_base_url = https://:6080/vnc_auto.html > > > If anyone could help that would be really appreciated or any advice to further troubleshoot!! I cannot see anything else in any logs but I might not be looking in the right place. Thank you in advance. > > Derek > > > -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangerzonen at gmail.com Fri Jun 4 15:48:23 2021 From: dangerzonen at gmail.com (dangerzone ar) Date: Fri, 4 Jun 2021 23:48:23 +0800 Subject: [Tacker] Tacker Not able to create VIM In-Reply-To: References: Message-ID: Hi All, I'm struggling these few days to register vim on my Tacker from the dashboard. What I did is I removed tacker.conf and with the original file and set back the setting each line..when I run the create from dashboard I'm still not able to register the VIM but now I'm getting a new error below. Below is the error *error: failed to register vim: unable to find key file for vim* I also tried from cli and still failed with error below command run:- tacker vim-register --config-file vim_config.yaml --is-default vim-default --os-username admin --os-project-name admin --os-project-domain-name Default --os-auth-url http://192.168.0.121:5000/v3 --os-password c81e0c7a842f40c6 error return:- Expecting to find domain in user. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-a980cea4-adf2-4461-a66d-4c6c3bfd2e7d) Most of the line in tacker.conf setting is based on https://docs.openstack.org/tacker/latest/install/manual_installation.html I'm running all-in-one openstack packstack (queens) and deploying Tacker manually. I really hope someone could advise and help me please. Thank you for your help and support. 
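On the CLI attempt above, the "Expecting to find domain in user" error is keystone v3 asking for the user's domain as well as the project's. A hedged sketch of the same call with that option added (assuming the installed python-tackerclient accepts --os-user-domain-name; the password is replaced with a placeholder):

tacker vim-register --config-file vim_config.yaml --is-default vim-default --os-username admin --os-user-domain-name Default --os-project-name admin --os-project-domain-name Default --os-auth-url http://192.168.0.121:5000/v3 --os-password <password>

The dashboard-side "unable to find key file for vim" message is a separate problem and usually points at the fernet key directory (or barbican) configured under [vim_keys] in tacker.conf.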
**Attached image file and tacker.log for ref.* On Fri, Jun 4, 2021 at 11:05 PM yasufum wrote: > Hi, > > It might be a failure of not tacker but authentication because I've run > VIM registration as you tried and no failure happened although it's just > a bit different from your environment. Could you run it from CLI again > referring [1] if you cannot register from horizon? > > [1] https://docs.openstack.org/tacker/latest/install/getting_started.html > > Thanks, > Yasufumi > > On 2021/06/03 10:55, dangerzone ar wrote: > > Hi all, > > > > I just deployed Tacker and tried to add my 1^st VIM but I’m getting > > errors as per attached file. Pls advise how to resolve this problem. > Thanks > > > > 1. *Error: *Failed to register VIM: {"error": {"message": > > "(http://192.168.0.121:5000/v3/tokens > > ): The resource could not be > > found.", "code": 404, "title": "Not Found"}} > > > > 2. *Error as below**à**WARNING keystonemiddleware.auth_token [-] > > Authorization failed for token: InvalidToken*** > > > > ** > > > > *{"vim": {"vim_project": {"name": "admin", "project_domain_name": > > "Default"}, "description": "d", "is_default": false, "auth_cred": > > {"username": "admin", "user_domain_name": "Default", "password": > > "c81e0c7a842f40c6"}, "auth_url": "**http://192.168.0.121:5000/v3 > > **", "type": "openstack", "name": "d"}} > > process_request > > /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43* > > > > *2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-] > > Authorization failed for token: InvalidToken* > > > > *2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - - > > [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720* > > > > ** > > > > Below is my tacker.conf > > > > [DEFAULT] > > auth_strategy = keystone > > policy_file = /etc/tacker/policy.json > > debug = True > > use_syslog = False > > bind_host = 192.168.0.121 > > bind_port = 9890 > > service_plugins = nfvo,vnfm > > state_path = /var/lib/tacker > > > > > > [nfvo] > > vim_drivers = openstack > > > > [keystone_authtoken] > > region_name = RegionOne > > auth_type = password > > project_domain_name = Default > > user_domain_name = Default > > username = tacker > > password = password > > auth_url = http://192.168.0.121:35357 > > auth_uri = http://192.168.0.121:5000 > > > > [agent] > > root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf > > > > > > [database] > > connection = > > mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8 > > ** > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: err1.jpg Type: image/jpeg Size: 59986 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: tacker.log Type: application/octet-stream Size: 11279 bytes Desc: not available URL: From amy at demarco.com Fri Jun 4 21:00:12 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 4 Jun 2021 16:00:12 -0500 Subject: [Diversity] Diversity and Inclusion Meeting Reminder - OFTC Message-ID: The Diversity & Inclusion WG invites members of all OIF projects to attend our next meeting Monday June 7th, at 17:00 UTC in the #openinfra-diversity channel on OFTC. The agenda can be found at https://etherpad.openstack.org/p/ diversity-wg-agenda. Please feel free to add any topics you wish to discuss at the meeting. 
Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From haleyb.dev at gmail.com Fri Jun 4 21:31:36 2021 From: haleyb.dev at gmail.com (Brian Haley) Date: Fri, 4 Jun 2021 17:31:36 -0400 Subject: [neutron][all] Functional/tempest/rally jobs not running on changes Message-ID: Hi, This might be affecting more than Neutron so I added the [all] tag, and is maybe being discussed in one of the #opendev channels and I missed it (?), but looking at a recent patch recheck shows a number of jobs not being run, for example [0] has just 11 jobs instead of 25 in the previous run. So for now I would not approve any changes since they could merge accidentally with broken code. I pinged gmann and he thought [1] might have caused this, and it just merged... so perhaps a quick revert is in order. -Brian [0] https://review.opendev.org/c/openstack/neutron/+/790060 [1] https://review.opendev.org/c/openstack/devstack/+/791541 From gmann at ghanshyammann.com Fri Jun 4 21:38:35 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 04 Jun 2021 16:38:35 -0500 Subject: [neutron][all] Functional/tempest/rally jobs not running on changes In-Reply-To: References: Message-ID: <179d8f6af22.11d151378215738.8273714246311033146@ghanshyammann.com> ---- On Fri, 04 Jun 2021 16:31:36 -0500 Brian Haley wrote ---- > Hi, > > This might be affecting more than Neutron so I added the [all] tag, and > is maybe being discussed in one of the #opendev channels and I missed it > (?), but looking at a recent patch recheck shows a number of jobs not > being run, for example [0] has just 11 jobs instead of 25 in the > previous run. > > So for now I would not approve any changes since they could merge > accidentally with broken code. > > I pinged gmann and he thought [1] might have caused this, and it just > merged... so perhaps a quick revert is in order. yeah, that is only patch in devstack side we merged.I have not clue why 791541 is causing the issue for check pipleline on master. gate pipeline is all fine and all the jobs are running there. Anyways I proposed the revert for now and meanwhile we can debug what went wrong with 'pragma'. - https://review.opendev.org/c/openstack/devstack/+/794822 -gmann > > -Brian > > [0] https://review.opendev.org/c/openstack/neutron/+/790060 > [1] https://review.opendev.org/c/openstack/devstack/+/791541 > > From cboylan at sapwetik.org Fri Jun 4 22:39:22 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 04 Jun 2021 15:39:22 -0700 Subject: =?UTF-8?Q?Re:_[neutron][all]_Functional/tempest/rally_jobs_not_running_o?= =?UTF-8?Q?n_changes?= In-Reply-To: <179d8f6af22.11d151378215738.8273714246311033146@ghanshyammann.com> References: <179d8f6af22.11d151378215738.8273714246311033146@ghanshyammann.com> Message-ID: On Fri, Jun 4, 2021, at 2:38 PM, Ghanshyam Mann wrote: > ---- On Fri, 04 Jun 2021 16:31:36 -0500 Brian Haley > wrote ---- > > Hi, > > > > This might be affecting more than Neutron so I added the [all] tag, > and > > is maybe being discussed in one of the #opendev channels and I > missed it > > (?), but looking at a recent patch recheck shows a number of jobs > not > > being run, for example [0] has just 11 jobs instead of 25 in the > > previous run. > > > > So for now I would not approve any changes since they could merge > > accidentally with broken code. > > > > I pinged gmann and he thought [1] might have caused this, and it > just > > merged... so perhaps a quick revert is in order. 
> > yeah, that is only patch in devstack side we merged.I have not clue why > 791541 > is causing the issue for check pipleline on master. gate pipeline is > all fine and > all the jobs are running there. Anyways I proposed the revert for now > and meanwhile > we can debug what went wrong with 'pragma'. Reading the docs [2] I think you need to include the current branch too. That pragma doesn't appear to be additive and instead defines the complete list. This means you not only need the feature/r1 branch but also master. > > - https://review.opendev.org/c/openstack/devstack/+/794822 > > -gmann > > > > > -Brian > > > > [0] https://review.opendev.org/c/openstack/neutron/+/790060 > > [1] https://review.opendev.org/c/openstack/devstack/+/791541 [2] https://zuul-ci.org/docs/zuul/reference/pragma_def.html#attr-pragma.implied-branches From fungi at yuggoth.org Fri Jun 4 23:33:10 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 4 Jun 2021 23:33:10 +0000 Subject: [docs] Project contributor doc latest redirects (was: Upcoming changes to the OpenStack Community IRC this weekend) In-Reply-To: <179d407fa6f.f6101b54160799.6570320596784902701@ghanshyammann.com> References: <179a9b02f78.112177f7423117.4125651508104406943@ghanshyammann.com> <179c2bf0d45.e29da542226792.4648722316244189913@ghanshyammann.com> <179d407fa6f.f6101b54160799.6570320596784902701@ghanshyammann.com> Message-ID: <20210604233309.3wvrmwytxph7q2j6@yuggoth.org> On 2021-06-03 17:39:23 -0500 (-0500), Ghanshyam Mann wrote: [...] > Fungi will add the global redirect link to master/latest version > in openstack-manual. Project does not need to do this explicitly. [...] Proposed now as https://review.opendev.org/794874 if anyone feels up for reviewing it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Sat Jun 5 07:18:18 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 5 Jun 2021 09:18:18 +0200 Subject: [docs] Project contributor doc latest redirects (was: Upcoming changes to the OpenStack Community IRC this weekend) In-Reply-To: <20210604233309.3wvrmwytxph7q2j6@yuggoth.org> References: <179a9b02f78.112177f7423117.4125651508104406943@ghanshyammann.com> <179c2bf0d45.e29da542226792.4648722316244189913@ghanshyammann.com> <179d407fa6f.f6101b54160799.6570320596784902701@ghanshyammann.com> <20210604233309.3wvrmwytxph7q2j6@yuggoth.org> Message-ID: On Sat, Jun 5, 2021 at 1:34 AM Jeremy Stanley wrote: > > On 2021-06-03 17:39:23 -0500 (-0500), Ghanshyam Mann wrote: > [...] > > Fungi will add the global redirect link to master/latest version > > in openstack-manual. Project does not need to do this explicitly. > [...] > > Proposed now as https://review.opendev.org/794874 if anyone feels up > for reviewing it. It's going in! 
-yoctozepto From fungi at yuggoth.org Sat Jun 5 13:37:19 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 5 Jun 2021 13:37:19 +0000 Subject: [docs] Project contributor doc latest redirects (was: Upcoming changes to the OpenStack Community IRC this weekend) In-Reply-To: References: <179a9b02f78.112177f7423117.4125651508104406943@ghanshyammann.com> <179c2bf0d45.e29da542226792.4648722316244189913@ghanshyammann.com> <179d407fa6f.f6101b54160799.6570320596784902701@ghanshyammann.com> <20210604233309.3wvrmwytxph7q2j6@yuggoth.org> Message-ID: <20210605133719.g3udhvsotyknanmc@yuggoth.org> On 2021-06-05 09:18:18 +0200 (+0200), Radosław Piliszek wrote: > On Sat, Jun 5, 2021 at 1:34 AM Jeremy Stanley wrote: > > > > On 2021-06-03 17:39:23 -0500 (-0500), Ghanshyam Mann wrote: > > [...] > > > Fungi will add the global redirect link to master/latest version > > > in openstack-manual. Project does not need to do this explicitly. > > [...] > > > > Proposed now as https://review.opendev.org/794874 if anyone feels up > > for reviewing it. > > It's going in! And some quick spot-checks indicate it's deployed and working as intended. Let me know if anyone notices any issues with it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Sat Jun 5 22:45:32 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 05 Jun 2021 17:45:32 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 4th June, 21: Reading: 5 min Message-ID: <179de5a5664.fe10e867230787.6943675927200289834@ghanshyammann.com> Hello Everyone, Here is last week's summary of the Technical Committee activities. 1. What we completed this week: ========================= * Retired the sushy-cli[1]. * Replaced freenode ref with OFTC[2]. * Added TC resolution to move the IRC network from Freenode to OFTC[3] 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - https://meetings.opendev.org/meetings/tc/2021/tc.2021-06-03-15.00.log.html * We will have next week's meeting on June 10th, Thursday 15:00 UTC[4]. 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ TC is using the etherpad[5] for Xena cycle working item. We will be checking and updating the status biweekly in the same etherpad. Open Reviews ----------------- * Two open reviews for ongoing activities[6]. MIgration from Freenode to OFTC ----------------------------------------- * All the required work for this migration is tracked in this etherpad[7] * TC resolution is merged[3]. * OFTC bot/logging is done. All project teams started their discussion/meetings to OFTC. * I have communicated the next steps on openstack-discuss ML[8] as well as to all PTLs individual email. * We are in 'Communicate with community' work where we need to update all contributor doc etc. Please finish this in your project and mark the progress in etherpad[7]. * This migration has been proposed to add in Open Infra newsletter OpenStack's news also[9]. * Topic change on Freenode channels will be done on June 11th. Nomination is open for the 'Y' release naming ------------------------------------------------------ * Y release naming process is started[10]. 
Nomination is open until June 10th feel free to propose names in below wiki ** https://wiki.openstack.org/wiki/Release_Naming/Y_Proposals Replacing ATC terminology with AC (Active Contributors) ------------------------------------------------------------------- * Governance charter change for ATC->AC has been merged [11]. * TC resolution to map the ATC with the new term AC from Bylaws' perspective is up[12]. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[13]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [14] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [15] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://review.opendev.org/c/openstack/governance/+/792348 [2] https://review.opendev.org/c/openstack/governance/+/793864 [3] https://review.opendev.org/c/openstack/governance/+/793260 [4] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [5] https://etherpad.opendev.org/p/tc-xena-tracker [6] https://review.opendev.org/q/project:openstack/governance+status:open [7] https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc [8] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022780.html [9] https://etherpad.opendev.org/p/newsletter-openstack-news [10] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022383.html [11] https://review.opendev.org/c/openstack/governance/+/790092 [12] https://review.opendev.org/c/openstack/governance/+/794366 [13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [15] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From gmann at ghanshyammann.com Sat Jun 5 22:49:56 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 05 Jun 2021 17:49:56 -0500 Subject: [neutron][all] Functional/tempest/rally jobs not running on changes In-Reply-To: <179d8f6af22.11d151378215738.8273714246311033146@ghanshyammann.com> References: <179d8f6af22.11d151378215738.8273714246311033146@ghanshyammann.com> Message-ID: <179de5e5d11.108fe1e3c230837.1328507055378743815@ghanshyammann.com> ---- On Fri, 04 Jun 2021 16:38:35 -0500 Ghanshyam Mann wrote ---- > ---- On Fri, 04 Jun 2021 16:31:36 -0500 Brian Haley wrote ---- > > Hi, > > > > This might be affecting more than Neutron so I added the [all] tag, and > > is maybe being discussed in one of the #opendev channels and I missed it > > (?), but looking at a recent patch recheck shows a number of jobs not > > being run, for example [0] has just 11 jobs instead of 25 in the > > previous run. > > > > So for now I would not approve any changes since they could merge > > accidentally with broken code. > > > > I pinged gmann and he thought [1] might have caused this, and it just > > merged... so perhaps a quick revert is in order. > > yeah, that is only patch in devstack side we merged.I have not clue why 791541 > is causing the issue for check pipleline on master. gate pipeline is all fine and > all the jobs are running there. Anyways I proposed the revert for now and meanwhile > we can debug what went wrong with 'pragma'. > > - https://review.opendev.org/c/openstack/devstack/+/794822 This is merged now, please do recheck if any of your patch's check pipeline did not run the complete jobs. 
-gmann > > -gmann > > > > > -Brian > > > > [0] https://review.opendev.org/c/openstack/neutron/+/790060 > > [1] https://review.opendev.org/c/openstack/devstack/+/791541 > > > > > > From ueha.ayumu at fujitsu.com Mon Jun 7 03:02:26 2021 From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com) Date: Mon, 7 Jun 2021 03:02:26 +0000 Subject: [Tacker] Tacker Not able to create VIM In-Reply-To: References: Message-ID: Hi Have you installed “barbican” as described in the instructions? https://docs.openstack.org/tacker/latest/install/manual_installation.html#pre-requisites I looked at the error log. It seems that the error occurred on the route that does not use barbican. Could you add the following settings to tacker.conf and try again? [vim_keys] use_barbican = True Thanks, Ueha From: dangerzone ar Sent: Saturday, June 5, 2021 12:48 AM To: yasufum Cc: OpenStack Discuss Subject: Re: [Tacker] Tacker Not able to create VIM Hi All, I'm struggling these few days to register vim on my Tacker from the dashboard. What I did is I removed tacker.conf and with the original file and set back the setting each line..when I run the create from dashboard I'm still not able to register the VIM but now I'm getting a new error below. Below is the error error: failed to register vim: unable to find key file for vim I also tried from cli and still failed with error below command run:- tacker vim-register --config-file vim_config.yaml --is-default vim-default --os-username admin --os-project-name admin --os-project-domain-name Default --os-auth-url http://192.168.0.121:5000/v3 --os-password c81e0c7a842f40c6 error return:- Expecting to find domain in user. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-a980cea4-adf2-4461-a66d-4c6c3bfd2e7d) Most of the line in tacker.conf setting is based on https://docs.openstack.org/tacker/latest/install/manual_installation.html I'm running all-in-one openstack packstack (queens) and deploying Tacker manually. I really hope someone could advise and help me please. Thank you for your help and support. *Attached image file and tacker.log for ref. On Fri, Jun 4, 2021 at 11:05 PM yasufum > wrote: Hi, It might be a failure of not tacker but authentication because I've run VIM registration as you tried and no failure happened although it's just a bit different from your environment. Could you run it from CLI again referring [1] if you cannot register from horizon? [1] https://docs.openstack.org/tacker/latest/install/getting_started.html Thanks, Yasufumi On 2021/06/03 10:55, dangerzone ar wrote: > Hi all, > > I just deployed Tacker and tried to add my 1^st VIM but I’m getting > errors as per attached file. Pls advise how to resolve this problem. Thanks > > 1. *Error: *Failed to register VIM: {"error": {"message": > "(http://192.168.0.121:5000/v3/tokens > ): The resource could not be > found.", "code": 404, "title": "Not Found"}} > > 2. 
*Error as below**à**WARNING keystonemiddleware.auth_token [-] > Authorization failed for token: InvalidToken*** > > ** > > *{"vim": {"vim_project": {"name": "admin", "project_domain_name": > "Default"}, "description": "d", "is_default": false, "auth_cred": > {"username": "admin", "user_domain_name": "Default", "password": > "c81e0c7a842f40c6"}, "auth_url": "**http://192.168.0.121:5000/v3 > **", "type": "openstack", "name": "d"}} > process_request > /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43* > > *2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-] > Authorization failed for token: InvalidToken* > > *2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - - > [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720* > > ** > > Below is my tacker.conf > > [DEFAULT] > auth_strategy = keystone > policy_file = /etc/tacker/policy.json > debug = True > use_syslog = False > bind_host = 192.168.0.121 > bind_port = 9890 > service_plugins = nfvo,vnfm > state_path = /var/lib/tacker > > > [nfvo] > vim_drivers = openstack > > [keystone_authtoken] > region_name = RegionOne > auth_type = password > project_domain_name = Default > user_domain_name = Default > username = tacker > password = password > auth_url = http://192.168.0.121:35357 > auth_uri = http://192.168.0.121:5000 > > [agent] > root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf > > > [database] > connection = > mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8 > ** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Mon Jun 7 06:41:08 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 7 Jun 2021 08:41:08 +0200 Subject: [baremetal-sig][ironic] Tue June 8, 2021, 2pm UTC: The Ironic Python Agent Builder Message-ID: <4af9f9ed-dd59-0463-ec41-aa2f2905aafc@cern.ch> Dear all, The Bare Metal SIG will meet tomorrow Tue June 8, 2021, at 2pm UTC on zoom. The meeting will feature a "topic-of-the-day" presentation by Dmitry Tantsur (dtantsur) with an "Introduction to the Ironic Python Agent Builder" As usual, all details on https://etherpad.opendev.org/p/bare-metal-sig Everyone is welcome, hope to see you there! Cheers, Arne From kira034 at 163.com Mon Jun 7 07:14:13 2021 From: kira034 at 163.com (Hongbin Lu) Date: Mon, 7 Jun 2021 15:14:13 +0800 (CST) Subject: [neutron] Bug deputy report - week of May 31th Message-ID: <79fe35c0.3fb1.179e55266e6.Coremail.kira034@163.com> Hi, I was bug deputy last week. Here is my report regarding bugs from it: Critical: * https://bugs.launchpad.net/neutron/+bug/1930397 neutron-lib from master branch is breaking our UT job * https://bugs.launchpad.net/neutron/+bug/1930401 Fullstack l3 agent tests failing due to timeout waiting until port is active * https://bugs.launchpad.net/neutron/+bug/1930402 SSH timeouts happens very often in the ovn based CI jobs * https://bugs.launchpad.net/neutron/+bug/1930750 pyroute2 >= 0.6.2 fails in pep8 import analysis High: * https://bugs.launchpad.net/neutron/+bug/1930367 "TestNeutronServer" related tests failing frequently Medium: * https://bugs.launchpad.net/neutron/+bug/1930294 Port deletion fails due to foreign key constraint * https://bugs.launchpad.net/neutron/+bug/1930432 [L2] provisioning_block should be added to Neutron internal service port? Or should not? 
* https://bugs.launchpad.net/neutron/+bug/1930443 [LB] Linux Bridge agent always loads trunk extension, regardless of the loaded service plugins * https://bugs.launchpad.net/neutron/+bug/1930926 Failing over OVN dbs can cause original controller to permanently lose connection * https://bugs.launchpad.net/neutron/+bug/1930996 "rpc_response_max_timeout" configuration variable not present in neutron-sriov-nic agent Low: * https://bugs.launchpad.net/neutron/+bug/1930283 PUT /v2.0/qos/policies/{policy_id}/minimum_bandwidth_rules/{rule_id} returns HTTP 501 which is undocumented in the API ref * https://bugs.launchpad.net/neutron/+bug/1930876 "get_reservations_for_resources" execute DB operations without opening a DB context Triaging in progress * https://bugs.launchpad.net/neutron/+bug/1930838 key error in deleted_ports * https://bugs.launchpad.net/neutron/+bug/1930858 OVN central service does not start properly -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Mon Jun 7 07:42:10 2021 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 7 Jun 2021 09:42:10 +0200 Subject: [neutron][all] Functional/tempest/rally jobs not running on changes In-Reply-To: <179de5e5d11.108fe1e3c230837.1328507055378743815@ghanshyammann.com> References: <179d8f6af22.11d151378215738.8273714246311033146@ghanshyammann.com> <179de5e5d11.108fe1e3c230837.1328507055378743815@ghanshyammann.com> Message-ID: Hi, There was a bunch of patches which tried to reduce the number of jobs executed for Neutron: https://review.opendev.org/q/topic:%22improve-neutron-ci%22+(status:open%20OR%20status:merged) worth checking it as perhaps some irrelevant file list needs to be updated. lajoskatona Ghanshyam Mann ezt írta (időpont: 2021. jún. 6., V, 0:51): > ---- On Fri, 04 Jun 2021 16:38:35 -0500 Ghanshyam Mann < > gmann at ghanshyammann.com> wrote ---- > > ---- On Fri, 04 Jun 2021 16:31:36 -0500 Brian Haley < > haleyb.dev at gmail.com> wrote ---- > > > Hi, > > > > > > This might be affecting more than Neutron so I added the [all] tag, > and > > > is maybe being discussed in one of the #opendev channels and I > missed it > > > (?), but looking at a recent patch recheck shows a number of jobs > not > > > being run, for example [0] has just 11 jobs instead of 25 in the > > > previous run. > > > > > > So for now I would not approve any changes since they could merge > > > accidentally with broken code. > > > > > > I pinged gmann and he thought [1] might have caused this, and it > just > > > merged... so perhaps a quick revert is in order. > > > > yeah, that is only patch in devstack side we merged.I have not clue why > 791541 > > is causing the issue for check pipleline on master. gate pipeline is > all fine and > > all the jobs are running there. Anyways I proposed the revert for now > and meanwhile > > we can debug what went wrong with 'pragma'. > > > > - https://review.opendev.org/c/openstack/devstack/+/794822 > > This is merged now, please do recheck if any of your patch's check > pipeline did not run the > complete jobs. > > -gmann > > > > > -gmann > > > > > > > > -Brian > > > > > > [0] https://review.opendev.org/c/openstack/neutron/+/790060 > > > [1] https://review.opendev.org/c/openstack/devstack/+/791541 > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yasufum.o at gmail.com Mon Jun 7 07:45:28 2021 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Mon, 7 Jun 2021 16:45:28 +0900 Subject: [Tacker] Tacker Not able to create VIM In-Reply-To: References: Message-ID: <421ce3c0-87d9-dd63-ea46-65cc5ecd98d5@gmail.com> Hi Ueha, It a little bit strange because using barbican is for a consideration for security. Without using barbican, encode_vim_auth() should work because it just outputs a contents of fernet_key to a file under "/etc/tacker/vim/fernet_keys/" if `use_barbican` isn't True. https://opendev.org/openstack/tacker/src/branch/master/tacker/nfvo/drivers/vim/openstack_driver.py#L224 I think the reason of the error is the output directory doesn't exist or wrong permission (Although changing to use barbican might also be working as you suggested). What do you think? Thanks, Yasufumi On 2021/06/07 12:02, ueha.ayumu at fujitsu.com wrote: > Hi > > Have you installed “barbican” as described in the instructions? > > https://docs.openstack.org/tacker/latest/install/manual_installation.html#pre-requisites > > > I looked at the error log. It seems that the error occurred on the route > that does not use barbican. > > Could you add the following settings to tacker.conf and try again? > > [vim_keys] > > use_barbican = True > > Thanks, > > Ueha > > *From:*dangerzone ar > *Sent:* Saturday, June 5, 2021 12:48 AM > *To:* yasufum > *Cc:* OpenStack Discuss > *Subject:* Re: [Tacker] Tacker Not able to create VIM > > Hi All, > > I'm struggling these few days to register vim on my Tacker from the > dashboard. What I did is I removed tacker.conf and with the original > file and set back the setting each line..when I run the create from > dashboard I'm still not able to register the VIM but now I'm getting a > new error below. > > Below is the error > > *error: failed to register vim: unable to find key file for vim* > > I also tried from cli and still failed with error below > > command run:- > > tacker vim-register --config-file vim_config.yaml --is-default > vim-default --os-username admin --os-project-name admin > --os-project-domain-name Default --os-auth-url > http://192.168.0.121:5000/v3 >  --os-password c81e0c7a842f40c6 > > error return:- > Expecting to find domain in user. The server could not comply with the > request since it is either malformed or otherwise incorrect. The client > is assumed to be in error. (HTTP 400) (Request-ID: > req-a980cea4-adf2-4461-a66d-4c6c3bfd2e7d) > > Most of the line in tacker.conf setting is based on > > https://docs.openstack.org/tacker/latest/install/manual_installation.html > > I'm running all-in-one openstack packstack (queens) and deploying Tacker > manually. I really hope someone could advise and help me please. Thank > you for your help and support. > > **Attached image file and tacker.log for ref.* > > On Fri, Jun 4, 2021 at 11:05 PM yasufum > wrote: > > Hi, > > It might be a failure of not tacker but authentication because I've run > VIM registration as you tried and no failure happened although it's > just > a bit different from your environment. Could you run it from CLI again > referring [1] if you cannot register from horizon? > > [1] > https://docs.openstack.org/tacker/latest/install/getting_started.html > > Thanks, > Yasufumi > > On 2021/06/03 10:55, dangerzone ar wrote: > > Hi all, > > > > I just deployed Tacker and tried to add my 1^st VIM but I’m getting > > errors as per attached file. Pls advise how to resolve this problem. Thanks > > > >  1. 
*Error: *Failed to register VIM: {"error": {"message": > >     "(http://192.168.0.121:5000/v3/tokens > > >      >): The resource could not be > >     found.", "code": 404, "title": "Not Found"}} > > > >  2. *Error as below**à**WARNING keystonemiddleware.auth_token [-] > >     Authorization failed for token: InvalidToken*** > > > > ** > > > > *{"vim": {"vim_project": {"name": "admin", "project_domain_name": > > "Default"}, "description": "d", "is_default": false, "auth_cred": > > {"username": "admin", "user_domain_name": "Default", "password": > > "c81e0c7a842f40c6"}, "auth_url": "**http://192.168.0.121:5000/v3 > > >**", > "type": "openstack", "name": "d"}} > > process_request > > /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43* > > > > *2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-] > > Authorization failed for token: InvalidToken* > > > > *2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - - > > [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720* > > > > ** > > > > Below is my tacker.conf > > > > [DEFAULT] > > auth_strategy = keystone > > policy_file = /etc/tacker/policy.json > > debug = True > > use_syslog = False > > bind_host = 192.168.0.121 > > bind_port = 9890 > > service_plugins = nfvo,vnfm > > state_path = /var/lib/tacker > > > > > > [nfvo] > > vim_drivers = openstack > > > > [keystone_authtoken] > > region_name = RegionOne > > auth_type = password > > project_domain_name = Default > > user_domain_name = Default > > username = tacker > > password = password > > auth_url = http://192.168.0.121:35357 > > > > auth_uri = http://192.168.0.121:5000 > > > > > > [agent] > > root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf > > > > > > [database] > > connection = > > mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8 > > > >** > > > From ueha.ayumu at fujitsu.com Mon Jun 7 08:04:00 2021 From: ueha.ayumu at fujitsu.com (ueha.ayumu at fujitsu.com) Date: Mon, 7 Jun 2021 08:04:00 +0000 Subject: [Tacker] Tacker Not able to create VIM In-Reply-To: <421ce3c0-87d9-dd63-ea46-65cc5ecd98d5@gmail.com> References: <421ce3c0-87d9-dd63-ea46-65cc5ecd98d5@gmail.com> Message-ID: Hi Yasufumi, I think so, I suggested using barbican is one of workaround. As you said, I think it's better to check the directory (/etc/tacker/vim/fernet_keys) first. > I think the reason of the error is the output directory doesn't exist or wrong permission Thanks, Ueha -----Original Message----- From: Yasufumi Ogawa Sent: Monday, June 7, 2021 4:45 PM To: Ueha, Ayumu/植波 歩 ; 'dangerzone ar' Cc: OpenStack Discuss Subject: Re: [Tacker] Tacker Not able to create VIM Hi Ueha, It a little bit strange because using barbican is for a consideration for security. Without using barbican, encode_vim_auth() should work because it just outputs a contents of fernet_key to a file under "/etc/tacker/vim/fernet_keys/" if `use_barbican` isn't True. https://opendev.org/openstack/tacker/src/branch/master/tacker/nfvo/drivers/vim/openstack_driver.py#L224 I think the reason of the error is the output directory doesn't exist or wrong permission (Although changing to use barbican might also be working as you suggested). What do you think? Thanks, Yasufumi On 2021/06/07 12:02, ueha.ayumu at fujitsu.com wrote: > Hi > > Have you installed “barbican” as described in the instructions? > > https://docs.openstack.org/tacker/latest/install/manual_installation.h > tml#pre-requisites > html#pre-requisites> > > I looked at the error log. 
It seems that the error occurred on the > route that does not use barbican. > > Could you add the following settings to tacker.conf and try again? > > [vim_keys] > > use_barbican = True > > Thanks, > > Ueha > > *From:*dangerzone ar > *Sent:* Saturday, June 5, 2021 12:48 AM > *To:* yasufum > *Cc:* OpenStack Discuss > *Subject:* Re: [Tacker] Tacker Not able to create VIM > > Hi All, > > I'm struggling these few days to register vim on my Tacker from the > dashboard. What I did is I removed tacker.conf and with the original > file and set back the setting each line..when I run the create from > dashboard I'm still not able to register the VIM but now I'm getting a > new error below. > > Below is the error > > *error: failed to register vim: unable to find key file for vim* > > I also tried from cli and still failed with error below > > command run:- > > tacker vim-register --config-file vim_config.yaml --is-default > vim-default --os-username admin --os-project-name admin > --os-project-domain-name Default --os-auth-url > http://192.168.0.121:5000/v3 >  --os-password c81e0c7a842f40c6 > > error return:- > Expecting to find domain in user. The server could not comply with the > request since it is either malformed or otherwise incorrect. The > client is assumed to be in error. (HTTP 400) (Request-ID: > req-a980cea4-adf2-4461-a66d-4c6c3bfd2e7d) > > Most of the line in tacker.conf setting is based on > > https://docs.openstack.org/tacker/latest/install/manual_installation.h > tml > html> > > I'm running all-in-one openstack packstack (queens) and deploying > Tacker manually. I really hope someone could advise and help me > please. Thank you for your help and support. > > **Attached image file and tacker.log for ref.* > > On Fri, Jun 4, 2021 at 11:05 PM yasufum > wrote: > > Hi, > > It might be a failure of not tacker but authentication because I've run > VIM registration as you tried and no failure happened although it's > just > a bit different from your environment. Could you run it from CLI again > referring [1] if you cannot register from horizon? > > [1] > > https://docs.openstack.org/tacker/latest/install/getting_started.html > > > > Thanks, > Yasufumi > > On 2021/06/03 10:55, dangerzone ar wrote: > > Hi all, > > > > I just deployed Tacker and tried to add my 1^st VIM but I’m getting > > errors as per attached file. Pls advise how to resolve this problem. Thanks > > > >  1. *Error: *Failed to register VIM: {"error": {"message": > >     "(http://192.168.0.121:5000/v3/tokens > > >      >): The resource could not be > >     found.", "code": 404, "title": "Not Found"}} > > > >  2. 
*Error as below**à**WARNING keystonemiddleware.auth_token [-] > >     Authorization failed for token: InvalidToken*** > > > > ** > > > > *{"vim": {"vim_project": {"name": "admin", "project_domain_name": > > "Default"}, "description": "d", "is_default": false, "auth_cred": > > {"username": "admin", "user_domain_name": "Default", "password": > > "c81e0c7a842f40c6"}, "auth_url": "**http://192.168.0.121:5000/v3 > > >**", > "type": "openstack", "name": "d"}} > > process_request > > /usr/lib/python2.7/site-packages/tacker/alarm_receiver.py:43* > > > > *2021-06-04 09:41:44.655 61233 WARNING keystonemiddleware.auth_token [-] > > Authorization failed for token: InvalidToken* > > > > *2021-06-04 09:41:44.655 61233 INFO tacker.wsgi [-] 192.168.0.121 - - > > [04/Jun/2021 09:41:44] "POST //v1.0/vims.json HTTP/1.1" 401 384 0.001720* > > > > ** > > > > Below is my tacker.conf > > > > [DEFAULT] > > auth_strategy = keystone > > policy_file = /etc/tacker/policy.json > > debug = True > > use_syslog = False > > bind_host = 192.168.0.121 > > bind_port = 9890 > > service_plugins = nfvo,vnfm > > state_path = /var/lib/tacker > > > > > > [nfvo] > > vim_drivers = openstack > > > > [keystone_authtoken] > > region_name = RegionOne > > auth_type = password > > project_domain_name = Default > > user_domain_name = Default > > username = tacker > > password = password > > auth_url = http://192.168.0.121:35357 > > > > auth_uri = http://192.168.0.121:5000 > > > > > > [agent] > > root_helper = sudo /usr/bin/tacker-rootwrap /etc/tacker/rootwrap.conf > > > > > > [database] > > connection = > > mysql://tacker:password at 192.168.0.121:3306/tacker?charset=utf8 > > > >** > > > From mark at stackhpc.com Mon Jun 7 08:10:53 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 7 Jun 2021 09:10:53 +0100 Subject: [kolla] [kolla-ansible] Magnum UI In-Reply-To: References: Message-ID: On Thu, 3 Jun 2021 at 09:49, Alexandros Soumplis wrote: > > Hi all, > > Before submitting a bug against launchpad I would like to ask if anyone > else can confirm this issue. I deploy Magnum on Victoria release using > the ubuntu binary containers and I do not have the UI installed. > Changing to the source binaries, the UI is installed and working as > expected. Is this a configerror, a bug or a feature maybe :) Hi Alexandros, Thank you for raising this issue. It was an easy fix, so I raised a bug [1] and fixed it [2]. Mark [1] https://bugs.launchpad.net/kolla/+bug/1931075 [2] https://review.opendev.org/c/openstack/kolla/+/795054 > > Thank you, > a. > > From geguileo at redhat.com Mon Jun 7 08:54:09 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 7 Jun 2021 10:54:09 +0200 Subject: [victoria][cinder ?] Dell Unity + Iscsi In-Reply-To: References: Message-ID: <20210607085409.5heiwmvt67nv4kwa@localhost> On 01/06, Albert Shih wrote: > Hi everyone > > > I've a small openstack configuration with 4 computes nodes, a Dell Unity 480F for the storage. > > I'm using cinder with iscsi. > > Everything work when I create a instance. But some instance after few time > are not reponsive. When I check on the hypervisor I can see > > [888240.310461] sd 14:0:0:2: [sdb] tag#120 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE > [888240.310493] sd 14:0:0:2: [sdb] tag#120 Sense Key : Illegal Request [current] > [888240.310502] sd 14:0:0:2: [sdb] tag#120 Add. 
Sense: Logical unit not supported > [888240.310510] sd 14:0:0:2: [sdb] tag#120 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 > [888240.310519] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 > [888240.311045] sd 14:0:0:2: [sdb] tag#121 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE > [888240.311050] sd 14:0:0:2: [sdb] tag#121 Sense Key : Illegal Request [current] > [888240.311065] sd 14:0:0:2: [sdb] tag#121 Add. Sense: Logical unit not supported > [888240.311070] sd 14:0:0:2: [sdb] tag#121 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 > [888240.311074] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 > [888240.342482] sd 14:0:0:2: [sdb] tag#70 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE > [888240.342490] sd 14:0:0:2: [sdb] tag#70 Sense Key : Illegal Request [current] > [888240.342496] sd 14:0:0:2: [sdb] tag#70 Add. Sense: Logical unit not supported > > I check on the hypervisor, no error at all on the ethernet interface. > > I check on the switch, no error at all on the interface on the switch. > > No sure but it's seem the problem appear more often when the instance are > doing nothing during some time. > Hi, You should first check if the volume is still exported and mapped to the host in Unity's web console. If it is still properly mapped, you should configure mutlipathing to make it more resilient. If it isn't you probably should confirm that all nodes have different initiator name (/etc/iscsi/initiatorname.iscsi) and different hostname (if configured in nova's conf file under "host" or at the Linux level if not). In any case I would turn on debug logs on Nova and Cinder and try to follow what happened with that specific LUN. Cheers, Gorka. > Every firmware, software on the Unity are uptodate. > > The 4 computes are exactly same, they run the same version of the > nova-compute & OS & firmware on the hardware. > > Any clue ? Or place to search the problem ? > > Regards > > -- > Albert SHIH > Observatoire de Paris > xmpp: jas at obspm.fr > Heure local/Local time: > Tue Jun 1 08:27:42 PM CEST 2021 > From skaplons at redhat.com Mon Jun 7 10:46:43 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 07 Jun 2021 12:46:43 +0200 Subject: [neutron][interop][refstack] New tests and capabilities to track in interop In-Reply-To: References: <6595086.PSTg7GmUaj@p1> Message-ID: <1982844.YfaRR3DleS@p1> Hi, Dnia środa, 2 czerwca 2021 14:16:50 CEST Martin Kopec pisze: > Hi Slawek, > > thanks for getting back to us and sharing new potential tests and > capabilities from neutron-tempest-plugin. > Let's first discuss tests which are in tempest directly please. > > We have done an analysis where we have cross checked tests we have in our > guidelines with the ones (api and non-admin ones) present in tempest at the > tempest checkout we currently use and here are the results: > https://etherpad.opendev.org/p/refstack-test-analysis > There are 110 and tempest.api.network tests which we don't have in any > guideline yet. > Could you please have a look at the list of the tests? Would it make sense > to include them in a guideline? Would they extend any network capabilities > we have in OpenStack Powered Platform program or would we need to create a > new one(s)? > https://opendev.org/osf/interop/src/branch/master/next.json Sure. I took a look at that list today. I think that: * tests from the group tempest.api.network.test_allowed_address_pair could be added to the "networks-l2-CRUD". 
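Following up on the Dell Unity / iSCSI advice above, a few commands can help verify the points about unique initiator names and multipathing on each compute node. This is only a sketch; the paths are illustrative and assume a libvirt-based compute with open-iscsi and multipath-tools installed:

  # every compute node must have its own, unique initiator name
  cat /etc/iscsi/initiatorname.iscsi

  # confirm the iSCSI session to the array is still logged in and the LUN is visible
  iscsiadm -m session -P 3

  # if multipath is in use, check that the paths for the volume are active
  multipath -ll

  # nova can be asked to attach volumes through multipath (nova.conf on the compute node):
  #   [libvirt]
  #   volume_use_multipath = True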
Allowed_address_pairs is API extension, but it is supported by ML2 plugin since very long time, and should be available in all clouds which are using ML2 plugin. * tests from tempest.api.network.test_dhcp_ipv6 can probably be included in "IPAM drivers" section as now I think all clouds should supports IPv6 :) * tempest.api.network.test_floating_ips - those tests could be probably added to the "Core API L3 extension" section, but I'm not sure what are the guidlines for negative tests in the refstack, * Tests from tempest.api.network.test_networks.BulkNetwork* - are similar to the other L2 CRUD tests but are testing basic bulk CRUD operations for Networks. So It could be IMO included in the "networks-l2-CRUD" section * same for all other tests from tempest.api.network.test_networks and tempest.api.network.test_networks_negative modules * Tests from tempest.api.network.test_ports can probably also be included in the "network-l2-CRUD" section as filtering is supported by core Neutron db modules, * Tests from the tempest.api.network.test_routers module can probably go to the network-l3-CRUD section, That are the tests which I think that may be included somehow in the refstack. But I'm not refstack expert so please forgive me if I included here too many of them or if some of them are not approriate to be there :) > > Thank you, > > On Mon, 24 May 2021 at 16:33, Slawek Kaplonski wrote: > > Hi, > > > > Dnia poniedziałek, 26 kwietnia 2021 17:48:08 CEST Martin Kopec pisze: > > > Hi everyone, > > > > > > > > > > > > I would like to further discuss the topics we covered with the neutron > > > > team > > > > > during > > > > > > the PTG [1]. > > > > > > > > > > > > * adding address_group API capability > > > > > > It's tested by tests in neutron-tempest-plugin. First question is if > > > > tests > > > > > which are > > > > > > not directly in tempest can be a part of a non-add-on marketing program? > > > > > > It's possible to move them to tempest though, by the time we do so, could > > > > > > they be > > > > > > marked as advisory? > > > > > > > > > > > > * Shall we include QoS tempest tests since we don't know what share of > > > > > > vendors > > > > > > enable QoS? Could it be an add-on? > > > > > > These tests are also in neutron-tempest-plugin, I assume we're talking > > > > about > > > > > neutron_tempest_plugin.api.test_qos tests. > > > > > > If we want to include these tests, which program should they belong to? > > > > Do > > > > > we wanna > > > > > > create a new one? > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Mon Jun 7 12:15:51 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 07 Jun 2021 14:15:51 +0200 Subject: [neutron] IRC meetings location Message-ID: <2488685.8ipJFutIaR@p1> Hi, As we discussed on the last team meeting, I proposed [1] and it was merged today. So our meetings starting this week will be on the #openstack-neutron channel @OFTC. Please be aware of that change and see You on the channel at the meetings :) [1] https://review.opendev.org/c/opendev/irc-meetings/+/794711[1] -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/c/opendev/irc-meetings/+/794711 -------------- next part -------------- An HTML attachment was scrubbed... 
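For anyone who wants to exercise the tempest.api.network groups discussed in the interop thread above against their own cloud, they can be run from an already configured Tempest workspace. A minimal sketch; the regex only picks two of the groups as an example:

  # run a subset of the network API tests mentioned in the thread
  tempest run --regex '(tempest\.api\.network\.test_allowed_address_pair|tempest\.api\.network\.test_routers)'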
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From mkopec at redhat.com Mon Jun 7 12:31:53 2021 From: mkopec at redhat.com (Martin Kopec) Date: Mon, 7 Jun 2021 14:31:53 +0200 Subject: [valet] Should we retire x/valet? Message-ID: Hi all, x/valet project has been inactive for some time now, f.e. there is a review [1] which has been open for more than 2 years and which would solve sanity issues in Tempest. The project is on an exclude list due to that [2]. Also there haven't been any real changes for the past 3 years [3]. I'm bringing this up to start a discussion about the future of the project. Should it be retired? Is it used? Are there any plans with it? [1] https://review.opendev.org/c/x/valet/+/638339 [2] https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L53 [3] https://opendev.org/x/valet/commits/branch/master Regards, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Mon Jun 7 12:31:56 2021 From: mkopec at redhat.com (Martin Kopec) Date: Mon, 7 Jun 2021 14:31:56 +0200 Subject: [mogan] Should we retire x/mogan? Message-ID: Hi all, x/mogan project has been inactive for some time now. It causes sanity issues in Tempest due to which it's excluded from the sanity check [1] and a review which should help to resolve them [2] is left untouched with failing gates - which also shows that the project is not maintained. Plus there haven't been any real changes done in the past 3 years [3]. I'm bringing this up to start a discussion about the future of the project. Should it be retired? Is it used? Are there any plans with it? [1] https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L59 [2] https://review.opendev.org/c/x/mogan/+/767718 [3] https://opendev.org/x/mogan/commits/branch/master Regards, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Mon Jun 7 12:31:55 2021 From: mkopec at redhat.com (Martin Kopec) Date: Mon, 7 Jun 2021 14:31:55 +0200 Subject: [kingbird] Should we retire x/kingbird? Message-ID: Hi all, x/kingbird project has been inactive for some time now, f.e. there is a bug [1] which has been open for more than a year and which would solve sanity issues in Tempest. The project is on an exclude list due to that [2]. Also there haven't been any real changes for the past 3 years [3]. I'm bringing this up to start a discussion about the future of the project. Should it be retired? Is it used? Are there any plans with it? [1] https://bugs.launchpad.net/kingbird/+bug/1869722 [2] https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L54 [3] https://opendev.org/x/kingbird/commits/branch/master Regards, -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From syedammad83 at gmail.com Mon Jun 7 12:48:17 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Mon, 7 Jun 2021 17:48:17 +0500 Subject: [ops] Windows Guest Resolution In-Reply-To: References: <0670B960225633449A24709C291A5252511AAA5E@COM01.performair.local> Message-ID: Hi Hilibos, I have tested one more thing, you need to set image property hw_video_mode to qxl and install RedHat QXL controller driver from virtio-win IO. Then you will be able to set the resolution to 1080. - Ammad On Fri, Jun 4, 2021 at 11:04 PM Ammad Syed wrote: > Hi, > > Please try to install latest drivers from below links. > > https://www.linuxsysadmins.com/create-windows-server-image-for-openstack/ > > > https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.190-1/ > > Ammad > > On Fri, Jun 4, 2021 at 8:59 PM wrote: > >> All; >> >> We finally have reliable means to generate Windows images for our >> OpenStack, but we're running into a minor annoyance. Our Windows instances >> appear to have a resolution cap of 1024x768. It would be extremely useful >> to be able use resolutions higher than this, especially 1920x1080. Is this >> possible with OpenStack on KVM? >> >> As a second request; is there a way to add a second virtual monitor? Or >> to achieve the same thing, increase the resolution to 3840x1080? >> >> Thank you, >> >> Dominic L. Hilsbos, MBA >> Vice President - Information Technology >> Perform Air International Inc. >> DHilsbos at PerformAir.com >> www.PerformAir.com >> >> >> >> > > -- > Regards, > > > Syed Ammad Ali > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From atikoo at bloomberg.net Mon Jun 7 13:56:39 2021 From: atikoo at bloomberg.net (Ajay Tikoo (BLOOMBERG/ 120 PARK)) Date: Mon, 7 Jun 2021 13:56:39 -0000 Subject: =?UTF-8?B?UmU6IFtvcHNdIHJhYmJpdG1xIHF1ZXVlcyBmb3Igbm92YSB2ZXJzaW9uZWQgbm90aWZpYw==?= =?UTF-8?B?YXRpb25zIHF1ZXVlcyBrZWVwIGZpbGxpbmcgdXA=?= Message-ID: <60BE259700B103CE00390001_0_33274@msllnjpmsgsv06> Thank you, Christopher. From: cmccarth at mathworks.com At: 06/04/21 11:17:23 UTC-4:00To: openstack-discuss at lists.openstack.org Subject: Re: [ops] rabbitmq queues for nova versioned notifications queues keep filling up Hi Ajay, We work around this by setting a TTL on our notifications queues via RabbitMQ policy definition. We include the following in our definitions.json for RabbitMQ: "policies":[ {"vhost": "/", "name": "notifications-ttl", "pattern": "^(notifications|versioned_notifications)\\.", "apply-to": "queues", "definition": {"message-ttl":600000}, "priority":0} ] This expires messages in the notifications and versioned_notifications queues after 10 minutes, which seems to work well for us. I believe we initially picked up this workaround from this[1] bug report. Hope this helps, - Chris -- Christopher McCarthy MathWorks cmccarth at mathworks.com [1] https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1737170 Date: Wed, 2 Jun 2021 22:39:54 -0000 From: "Ajay Tikoo (BLOOMBERG/ 120 PARK)" To: openstack-discuss at lists.openstack.org Subject: [ops] rabbitmq queues for nova versioned notifications queues keep filling up Message-ID: <60B808BA00D0068401D80001_0_3025859 at msclnypmsgsv04> Content-Type: text/plain; charset="utf-8" I am not sure if this is the right channel/format to post this question, so my apologies in advance if this is not the right place. We are using Openstack Rocky. Watcher needs versioned notifications to be enabled. 
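For reference, the notification TTL policy quoted above does not have to live in definitions.json; an equivalent policy can also be applied to a running broker. A minimal sketch, assuming the default / vhost and the same 10 minute TTL:

  # expire unconsumed (versioned_)notifications messages after 10 minutes
  rabbitmqctl set_policy -p / --apply-to queues notifications-ttl \
      '^(notifications|versioned_notifications)\.' '{"message-ttl":600000}'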
However after enabling versioned notifications, the queues for versioned_notifications (info and error) keep filling up Based on the updates the the Watchers cluster data model, it appears that Watcher is consuming messages, but they still linger in these queues. So with nova versioned notifications disabled, Watcher is unable to update the cluster data model (between rebuild intervals), and with them enabled, it keeps filling up the MQ queues. What is the best way to resolve this? Thank you, Ajay Tikoo -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Jun 7 15:50:20 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 07 Jun 2021 08:50:20 -0700 Subject: [mogan] Should we retire x/mogan? In-Reply-To: References: Message-ID: On Mon, Jun 7, 2021, at 5:31 AM, Martin Kopec wrote: > Hi all, > > x/mogan project has been inactive for some time now. It causes sanity > issues in Tempest due to which it's excluded from the sanity check [1] > and a review which should help to resolve them [2] is left untouched > with failing gates - which also shows that the project is not > maintained. Plus there haven't been any real changes done in the past 3 > years [3]. > > I'm bringing this up to start a discussion about the future of the project. > Should it be retired? Is it used? Are there any plans with it? Projects are in the x/ namespace because they weren't officially part of OpenStack. I think that puts us in a weird position to decide it should be abandoned. That said if the original maintainers chime in I suppose that is one possibility. As an idea, why not exclude all x/* projects from the project list in generate-tempest-plugins-list.py by default, then explicitly add the ones you know you care about instead? Then you don't have to maintain these lists unless state changes in something you care about and that might be something you want to take action on. > > [1] > https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L59 > [2] https://review.opendev.org/c/x/mogan/+/767718 > [3] https://opendev.org/x/mogan/commits/branch/master > > Regards, > -- > Martin Kopec > Senior Software Quality Engineer > Red Hat EMEA > > > From DHilsbos at performair.com Mon Jun 7 16:17:36 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Mon, 7 Jun 2021 16:17:36 +0000 Subject: [ops][victoria] Instance Hostname Metadata Message-ID: <0670B960225633449A24709C291A5252511AE7DA@COM01.performair.local> All; Is there an instance metadata value that will set and / or change the instance hostname? Thank you, Dominic L. Hilsbos, MBA Vice President - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From johnsomor at gmail.com Mon Jun 7 16:30:52 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 7 Jun 2021 09:30:52 -0700 Subject: [oslo][taskflow][tooz][infra] Proposal to retire #openstack-state-management IRC channel Message-ID: Hello OpenStack community, The recent need to update various pointers to OpenStack IRC channels raised a question about the continued need for the #openstack-state-management channel[1]. This channel is for discussions of the OpenStack state management libraries such as TaskFlow and Tooz. Both libraries fall under the Oslo project. These projects are both in a maintenance phase and discussions in the #openstack-state-management channel have been few and far between. 
Today, at the Oslo IRC meeting, we discussed retiring this channel and updating the IRC channel information to the main Oslo IRC channel #openstack-oslo[2]. The intent of this change is to help users find a larger group of people that may be able to help answer questions as well as reduce the number of channels people and bots need to monitor. If you have any questions or concerns about the plan to consolidate this channel into the main Oslo IRC channel, please let us know. I plan to work on updating the documentation update patches (grin) to point to the Oslo channel and will work with OpenDev/Infra to retire the #openstack-state-management channel. Michael [1] https://review.opendev.org/c/openstack/taskflow/+/793992 [2] https://meetings.opendev.org/meetings/oslo/2021/oslo.2021-06-07-15.00.log.html From peiyongz at gmail.com Mon Jun 7 05:51:05 2021 From: peiyongz at gmail.com (Pete Zhang) Date: Sun, 6 Jun 2021 22:51:05 -0700 Subject: Getting error during install openstack-nova-scheduler Message-ID: I hit the following errors and would like to know the fix. Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_bucket.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vmbus.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx4.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4()(64bit) Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) Requires: libsodium.so.23()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gso.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_2.0)(64bit) Error: 
Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-tooz >= 1.58.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_bnxt.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gro.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_latencystats.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx5.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_member.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_nfp.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_tap.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_pci.so.2()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-traits >= 0.16.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pdump.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vdev_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-resource-classes >= 0.4.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_failsafe.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_ring.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_ixgbe.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bitratestats.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: 
librte_mempool_stack.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vdev.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_qede.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vhost.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_metrics.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_i40e.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pci.so.1()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Error: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: change from 'purged' to 'present' failed: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_bucket.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vmbus.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx4.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4()(64bit) Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) Requires: libsodium.so.23()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gso.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: 
python2-tooz >= 1.58.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_bnxt.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gro.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_latencystats.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx5.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_member.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_nfp.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_tap.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_pci.so.2()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-traits >= 0.16.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pdump.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vdev_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-resource-classes >= 0.4.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_failsafe.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_ring.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_ixgbe.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bitratestats.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_stack.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vdev.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 
(local_openstack-tnrp) Requires: librte_pmd_qede.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vhost.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_metrics.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_i40e.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pci.so.1()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon Jun 7 17:04:41 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 7 Jun 2021 11:04:41 -0600 Subject: [tripleo][ci] ovb jobs Message-ID: 0/ Update on the OVB jobs across all centos-stream-8 branches. OVB jobs should be SUCCESSFUL If your overcloud has the rpm hostname.3.20-6.el8 and NOT 3.20-7.el8. e.g. [1] . The hostname package is being fixed via centos packaging. Related Change: https://git.centos.org/rpms/hostname/c/e097d2aac3e76eebbaac3ee4c2b95f575f3798fa?branch=c8s Related Bugs: https://bugs.launchpad.net/tripleo/+bug/1930849 https://bugzilla.redhat.com/show_bug.cgi?id=1965897 https://bugzilla.redhat.com/show_bug.cgi?id=1956378 The CI team is putting in a temporary patch to force any OVB job to BUILD the overcloud images vs. pulling the prebuilt images until new overcloud images are rebuilt and promoted at this time [2] Thanks to Sandeep, Arx, Yatin and MIchele!!! [1] https://logserver.rdoproject.org/61/33961/6/check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/9b98429/logs/overcloud-controller-0/var/log/extra/package-list-installed.txt.gz https://logserver.rdoproject.org/42/795042/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/4b6d711/logs/overcloud-controller-0/var/log/extra/package-list-installed.txt.gz [2] https://review.rdoproject.org/r/c/rdo-jobs/+/34022 https://review.rdoproject.org/r/c/config/+/34023/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Mon Jun 7 18:07:55 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 7 Jun 2021 11:07:55 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler Message-ID: I hit this error when installing “openstack-nova-scheduler” of release train.Anyone knows the issue/fix? What is the librte? is it another rpm i can download somewhere? or what is the best channel/DL to post this question, thx.Here is what I did. 1. I did this in a test box. 2. I have puppet-modules installed on the box 3. 
I have openstack-release-train’s rpms on the box and built a local-repo for puppet to install Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_bucket.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vmbus.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx4.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4()(64bit) Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) Requires: libsodium.so.23()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gso.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-tooz >= 1.58.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_bnxt.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gro.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_latencystats.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx5.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_member.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: 
librte_eal.so.9(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_nfp.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_tap.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_pci.so.2()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-traits >= 0.16.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pdump.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vdev_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-resource-classes >= 0.4.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_failsafe.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_ring.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_ixgbe.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bitratestats.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_stack.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vdev.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_qede.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vhost.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_metrics.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_i40e.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pci.so.1()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Error: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: change from 'purged' to 'present' failed: Execution of '/bin/yum -d 0 -e 
0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_bucket.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vmbus.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx4.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4()(64bit) Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) Requires: libsodium.so.23()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gso.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-tooz >= 1.58.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_bnxt.so.2()(64bit) -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyongz at gmail.com Mon Jun 7 18:01:26 2021 From: peiyongz at gmail.com (Pete Zhang) Date: Mon, 7 Jun 2021 11:01:26 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler Message-ID: I hit this error when installing “openstack-nova-scheduler” of release train.Anyone knows the issue/fix? What is the librte? is it another rpm i can download somewhere? or what is the best channel/DL to post this question, thx.Here is what I did. 1. I did this in a test box. 2. I have puppet-modules installed on the box 3. 
I have openstack-release-train’s rpms on the box and built a local-repo for puppet to install Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_bucket.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vmbus.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx4.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4()(64bit) Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) Requires: libsodium.so.23()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gso.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-tooz >= 1.58.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_bnxt.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gro.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_latencystats.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx5.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_member.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: 
librte_eal.so.9(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_nfp.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_tap.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_pci.so.2()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-traits >= 0.16.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pdump.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vdev_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-os-resource-classes >= 0.4.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2(DPDK_18.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_failsafe.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_ring.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_ixgbe.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bitratestats.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.08)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_16.07)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_stack.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vdev.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_qede.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_vhost.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_metrics.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_i40e.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pci.so.1()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Error: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: change from 'purged' to 'present' failed: Execution of '/bin/yum -d 0 -e 
0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool_bucket.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_bus_vmbus.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_mlx4.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4()(64bit) Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) Requires: libsodium.so.23()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_17.05)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_meter.so.2()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_vhost.so.4(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_eal.so.9(DPDK_18.11)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_ring.so.2(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_gso.so.1()(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_mempool.so.5(DPDK_2.0)(64bit) Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_netvsc.so.1()(64bit) Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) Requires: python2-tooz >= 1.58.0 Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) Requires: librte_pmd_bnxt.so.2()(64bit) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Jun 7 18:52:42 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 7 Jun 2021 20:52:42 +0200 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> On 6/7/21 8:07 PM, Pete Zhang wrote: > > I hit this error when installing “openstack-nova-scheduler” of release > train.Anyone knows the issue/fix? > What is the librte? is it another rpm i can download somewhere? > or what is the best channel/DL to post this question, thx.Here is what I > did. > > 1. I did this in a test box. > 2. I have puppet-modules installed on the box > 3. 
I have openstack-release-train’s rpms on the box and built a > local-repo for puppet to install > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > Requires: librte_mempool_bucket.so.1()(64bit) > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) Hi, I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though librte is from dpdk. It's likely a bug if nova-scheduler depends on openvswitch (but it's probably not a bug if OVS depends on dpdk if it was compiled with dpdk support). Cheers, Thomas Goirand (zigo) From juliaashleykreger at gmail.com Mon Jun 7 19:14:59 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 7 Jun 2021 12:14:59 -0700 Subject: [RDO] Re: Getting error during install openstack-nova-scheduler In-Reply-To: References: Message-ID: Greetings Pete, I'm going to guess your issue may actually be with RDO packaging dependencies than with the nova project itself. I guess there is there is a dependency issue for Centos7? Are any RDO contributors aware of this? I suspect you need Centos Extra enabled as a couple of the required files/libraries are sourced from packages in extras, such openvswitch itself and dpdk. -Julia On Mon, Jun 7, 2021 at 10:09 AM Pete Zhang wrote: > > > I hit the following errors and would like to know the fix. > > > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' > > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_mempool_bucket.so.1()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_vhost.so.4(DPDK_2.1)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_bus_vmbus.so.2()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_mbuf.so.4()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_pmd_mlx4.so.1()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_vhost.so.4()(64bit) > > Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: libsodium.so.23()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_ethdev.so.11()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_vhost.so.4(DPDK_17.05)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_meter.so.2()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_vhost.so.4(DPDK_2.0)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: 
librte_eal.so.9(DPDK_18.11)(64bit) > > Error: Package:
1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_meter.so.2(DPDK_18.08)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_pmd_failsafe.so.1()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_mempool_ring.so.1()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_pmd_ixgbe.so.2()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_eal.so.9()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_bitratestats.so.2()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_vhost.so.4(DPDK_17.08)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_mempool.so.5(DPDK_16.07)(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_mempool_stack.so.1()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_bus_vdev.so.2()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_pmd_qede.so.1()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_pmd_vhost.so.2()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_metrics.so.1()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_pmd_i40e.so.2()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_pci.so.1()(64bit) > > You could try using --skip-broken to work around the problem > > You could try running: rpm -Va --nofiles --nodigest From mrunge at matthias-runge.de Mon Jun 7 19:37:51 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Mon, 7 Jun 2021 21:37:51 +0200 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> Message-ID: On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote: > On 6/7/21 8:07 PM, Pete Zhang wrote: > > > > I hit this error when installing “openstack-nova-scheduler” of release > > train.Anyone knows the issue/fix? > > What is the librte? is it another rpm i can download somewhere? > > or what is the best channel/DL to post this question, thx.Here is what I > > did. > > > > 1. I did this in a test box. > > 2. I have puppet-modules installed on the box > > 3. I have openstack-release-train’s rpms on the box and built a > > local-repo for puppet to install > > > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' > > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_mempool_bucket.so.1()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) > > Hi, > > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though > librte is from dpdk. 
It's likely a bug if nova-scheduler depends on > openvswitch (but it's probably not a bug if OVS depends on dpdk if it > was compiled with dpdk support). Packages ending with el7 are probably a bit aged already. You may want to switch to something more recent. RDO is only updating the latest release. I don't know where you got the other packages from, but I can see there is no direct dependency from openstack-nova-scheduler to openvswitch[1]. On the other side, the openvswitch build indeed requires librte[2]. RDO describes the used repositories[3], and you may want to enable CentOS extras. [1] https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec [2] https://cbs.centos.org/koji/rpminfo?rpmID=173673 [3] https://www.rdoproject.org/documentation/repositories/ -- Matthias Runge From cboylan at sapwetik.org Mon Jun 7 21:00:03 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 07 Jun 2021 14:00:03 -0700 Subject: [ops][victoria] Instance Hostname Metadata In-Reply-To: <0670B960225633449A24709C291A5252511AE7DA@COM01.performair.local> References: <0670B960225633449A24709C291A5252511AE7DA@COM01.performair.local> Message-ID: On Mon, Jun 7, 2021, at 9:17 AM, DHilsbos at performair.com wrote: > All; > > Is there an instance metadata value that will set and / or change the > instance hostname? Yes, there are two keys: "hostname" and "name", https://docs.openstack.org/nova/latest/user/metadata.html#openstack-format-metadata. I'm not completely sure what the difference is between the two, but it looks like hostname may be more of an fqdn and name is a hostname? You then need a tool like cloud-init or glean to set the name. Glean only operates on the config drive which doesn't update post creation which means it won't handle name changes. I'm not sure if name changes are something that cloud-init can watch out for and update on the instance. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From peiyong.zhang at salesforce.com Mon Jun 7 21:27:50 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 7 Jun 2021 14:27:50 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler Message-ID: Julie, The original email is too long and requires moderator approval. So I have a new email thread instead. The openstack-vswitch is required (>=11.0.0 < 12.0.0) by openstack-neutron (v15.0.0, from openstack-release-train, the release we chose). I downloaded openstack-vswitch-11.0.0 from https://forge.puppet.com/modules/openstack/vswitch/11.0.0. Where I can download the missing *librte and its dependencies*? I don't think we have a yum-repo for Centos Extra so I might need to have those dependencies downloaded as well. Thanks a lot! Pete -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jun 7 23:24:05 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 07 Jun 2021 18:24:05 -0500 Subject: [mogan] Should we retire x/mogan? In-Reply-To: References: Message-ID: <179e8ca57e3.10c36bf3f315912.8329947087135243517@ghanshyammann.com> ---- On Mon, 07 Jun 2021 10:50:20 -0500 Clark Boylan wrote ---- > On Mon, Jun 7, 2021, at 5:31 AM, Martin Kopec wrote: > > Hi all, > > > > x/mogan project has been inactive for some time now. 
It causes sanity > > issues in Tempest due to which it's excluded from the sanity check [1] > > and a review which should help to resolve them [2] is left untouched > > with failing gates - which also shows that the project is not > > maintained. Plus there haven't been any real changes done in the past 3 > > years [3]. > > > > I'm bringing this up to start a discussion about the future of the project. > > Should it be retired? Is it used? Are there any plans with it? > > Projects are in the x/ namespace because they weren't officially part of OpenStack. I think that puts us in a weird position to decide it should be abandoned. That said if the original maintainers chime in I suppose that is one possibility. > > As an idea, why not exclude all x/* projects from the project list in generate-tempest-plugins-list.py by default, then explicitly add the ones you know you care about instead? Then you don't have to maintain these lists unless state changes in something you care about and that might be something you want to take action on. Actually we want to cover all the tempest plugins as part of sanity check not just the one under OpenStack governance so that we do not break them as Tempest is used in much wider space than just OpenStack. As these x/ namespace plugins are failing we will keep adding them in inactive plugins exclusive list. If we find any inactive plugins from OpenStack namespeace then we can start the discussion over retiring the plugins. -gmann > > > > > [1] > > https://opendev.org/openstack/tempest/src/commit/663787ee794df54e7ded41e5f3e8ae246e9b4288/tools/generate-tempest-plugins-list.py#L59 > > [2] https://review.opendev.org/c/x/mogan/+/767718 > > [3] https://opendev.org/x/mogan/commits/branch/master > > > > Regards, > > -- > > Martin Kopec > > Senior Software Quality Engineer > > Red Hat EMEA > > > > > > > > From gmann at ghanshyammann.com Mon Jun 7 23:53:23 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 07 Jun 2021 18:53:23 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 10th at 1500 UTC Message-ID: <179e8e52bd5.e35e022f316095.4772252122737314526@ghanshyammann.com> Hello Everyone, NOTE: TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE) Technical Committee's next weekly meeting is scheduled for June 10th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, June 9th , at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From peiyong.zhang at salesforce.com Tue Jun 8 00:23:10 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 7 Jun 2021 17:23:10 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> Message-ID: Matthias, These steps, "install python-nova", "install openstack-nova-scheduler" need python-openvswitch-2.11 which in turn looking for libopenvswitch which is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm . And I have this copy installed on my local repo. *Trying to figure out which rpm has the librte_*.* BTW, I got most rpms from http://mirror.centos.org/centos/7/cloud/x86_64/. Which has rpms for train, stein, rocky and queens. Is there a similar site for later releases like Ussuri or Victoria? 
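
For anyone else hitting the same librte_* resolution failures: the replies on this thread all point in the same direction, namely to use the RDO release repo together with the stock CentOS repos (Base/Updates/Extras) rather than a hand-built local repo, since dpdk (which provides the librte_* libraries openvswitch pulls in) comes from CentOS Extras. A minimal sketch, assuming a CentOS 7 box and the Train release; the package and repo names are the ones suggested in the replies, so adjust them for your own release:

  # centos-release-openstack-train comes from CentOS Extras and sets up the RDO Train repo
  sudo yum install -y centos-release-openstack-train
  # keep base/updates/extras enabled; extras is where dpdk (librte_*) comes from
  sudo yum repolist enabled
  # with those repos in place the scheduler and its openvswitch deps should resolve cleanly
  sudo yum install -y openstack-nova-scheduler

This is only an illustration of the suggestions in the thread, not something verified against the reporter's local mirror.
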
Pete On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge wrote: > On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote: > > On 6/7/21 8:07 PM, Pete Zhang wrote: > > > > > > I hit this error when installing “openstack-nova-scheduler” of release > > > train.Anyone knows the issue/fix? > > > What is the librte? is it another rpm i can download somewhere? > > > or what is the best channel/DL to post this question, thx.Here is what > I > > > did. > > > > > > 1. I did this in a test box. > > > 2. I have puppet-modules installed on the box > > > 3. I have openstack-release-train’s rpms on the box and built a > > > local-repo for puppet to install > > > > > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install > openstack-nova-scheduler' > > > Error: Execution of '/bin/yum -d 0 -e 0 -y install > openstack-nova-scheduler' returned 1: Error: Package: > 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > Requires: librte_mempool_bucket.so.1()(64bit) > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 > (local_openstack-tnrp) > > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) > > > > Hi, > > > > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though > > librte is from dpdk. It's likely a bug if nova-scheduler depends on > > openvswitch (but it's probably not a bug if OVS depends on dpdk if it > > was compiled with dpdk support). > > Packages ending with el7 are probably a bit aged already. You may want > to switch to something more recent. RDO is only updating the latest > release. > I don't know where you got the other packages from, but I can see there > is no direct dependency from openstack-nova-scheduler to > openvswitch[1]. On the other side, the openvswitch build indeed requires > librte[2]. > > RDO describes the used repositories[3], and you may want to enable > CentOS extras. > > [1] > https://urldefense.com/v3/__https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNr_Q_7lQ$ > [2] > https://urldefense.com/v3/__https://cbs.centos.org/koji/rpminfo?rpmID=173673__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNRaMe3hM$ > [3] > https://urldefense.com/v3/__https://www.rdoproject.org/documentation/repositories/__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNI36Ef5g$ > > -- > Matthias Runge > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Jun 8 01:32:41 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 7 Jun 2021 19:32:41 -0600 Subject: [tripleo][ci] ovb jobs In-Reply-To: References: Message-ID: ah.. one note on this.. One issue is still under investigation from vexxhost. If you get a RETRY message from zuul, you most likely hit the following bug. https://bugs.launchpad.net/tripleo/+bug/1930273 On Mon, Jun 7, 2021 at 11:04 AM Wesley Hayutin wrote: > 0/ > > Update on the OVB jobs across all centos-stream-8 branches. > OVB jobs should be SUCCESSFUL If your overcloud has the rpm hostname.3.20-6.el8 > and NOT 3.20-7.el8. e.g. [1] . > > The hostname package is being fixed via centos packaging. 
> > Related Change: > > https://git.centos.org/rpms/hostname/c/e097d2aac3e76eebbaac3ee4c2b95f575f3798fa?branch=c8s > > Related Bugs: > https://bugs.launchpad.net/tripleo/+bug/1930849 > https://bugzilla.redhat.com/show_bug.cgi?id=1965897 > https://bugzilla.redhat.com/show_bug.cgi?id=1956378 > > The CI team is putting in a temporary patch to force any OVB job to BUILD > the overcloud images vs. pulling the prebuilt images until new overcloud > images are rebuilt and promoted at this time [2] > > Thanks to Sandeep, Arx, Yatin and MIchele!!! > > [1] > https://logserver.rdoproject.org/61/33961/6/check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/9b98429/logs/overcloud-controller-0/var/log/extra/package-list-installed.txt.gz > > https://logserver.rdoproject.org/42/795042/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/4b6d711/logs/overcloud-controller-0/var/log/extra/package-list-installed.txt.gz > > [2] > https://review.rdoproject.org/r/c/rdo-jobs/+/34022 > https://review.rdoproject.org/r/c/config/+/34023/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Tue Jun 8 06:15:33 2021 From: ykarel at redhat.com (Yatin Karel) Date: Tue, 8 Jun 2021 11:45:33 +0530 Subject: [RDO] Re: Getting error during install openstack-nova-scheduler In-Reply-To: References: Message-ID: Hi Pete, Julia, On Tue, Jun 8, 2021 at 12:50 AM Julia Kreger wrote: > > Greetings Pete, > > I'm going to guess your issue may actually be with RDO packaging > dependencies than with the nova project itself. I guess there is there > is a dependency issue for Centos7? Are any RDO contributors aware of > this? I suspect you need Centos Extra enabled as a couple of the > required files/libraries are sourced from packages in extras, such > openvswitch itself and dpdk. > Yes, correct the issue is not related to nova itself, but dependencies and repos. >From Error I see a local repo is used for Train release, which looks missing deps in that repo. Most of the missing deps are provided by dpdk which comes from CentOS Extras repos. So fixing that local repo or using OpenStack CentOS repos along with CentOS base repos directly you shouldn't see the issue. On a CentOS node you can install train repos with "yum install centos-release-openstack-train", other CentOS repos need to be kept enabled to avoid such deps issues. > -Julia > > On Mon, Jun 7, 2021 at 10:09 AM Pete Zhang wrote: > > > > > > I hit the following errors and would like to know the fix. 
> > > > Error: Package:
1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bitratestats.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_17.08)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool.so.5(DPDK_16.07)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool_stack.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bus_vdev.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_qede.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_vhost.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_metrics.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_i40e.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pci.so.1()(64bit) > > > > You could try using --skip-broken to work around the problem > > > > You could try running: rpm -Va --nofiles --nodigest > Thanks and Regards Yatin Karel From pierre at stackhpc.com Tue Jun 8 08:18:28 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 8 Jun 2021 10:18:28 +0200 Subject: [CLOUDKITTY] Fix tests cases broken by flask >=2.0.1 In-Reply-To: References: Message-ID: Thanks a lot Rafael for fixing this gate blocker! On Tue, 1 Jun 2021 at 15:55, Rafael Weingärtner wrote: > > Hello guys, > I was reviewing the patch https://review.opendev.org/c/openstack/cloudkitty/+/793790, and decided to propose an alternative patch (https://review.opendev.org/c/openstack/cloudkitty/+/793973). > > Could you guys review it? > > The idea I am proposing is that, instead of mocking the root object ("flask.request"), we address the issue by mocking only the needed methods and attributes. This facilitates the understanding of the unit test, and also helps people to pin-point problems right away as the mocked attributes/methods are clearly seen in the unit test. > > -- > Rafael Weingärtner From pierre at stackhpc.com Tue Jun 8 08:41:32 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 8 Jun 2021 10:41:32 +0200 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> Message-ID: RDO packages for Ussuri and Victoria are available CentOS 8 [1] and CentOS Stream 8 [2]. [1] http://mirror.centos.org/centos/8/cloud/x86_64/ [2] http://mirror.centos.org/centos/8-stream/cloud/x86_64/ On Tue, 8 Jun 2021 at 02:24, Pete Zhang wrote: > > Matthias, > > These steps, "install python-nova", "install openstack-nova-scheduler" need python-openvswitch-2.11 which in turn looking for libopenvswitch which is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm. And I have this copy installed on my local repo. > > Trying to figure out which rpm has the librte_*. > > BTW, I got most rpms from http://mirror.centos.org/centos/7/cloud/x86_64/. Which has rpms for train, stein, rocky and queens. > Is there a similar site for later releases like Ussuri or Victoria? 
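
For completeness, the same pattern applies on those newer releases: on a CentOS 8 node the RDO repo setup packages can also be pulled from Extras, so a small sketch, assuming the Victoria release (swap in ussuri if that is what you need), which simply mirrors the mirror.centos.org URLs above, would be:

  sudo dnf install -y centos-release-openstack-victoria
  sudo dnf install -y openstack-nova-scheduler
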
> > Pete > > > On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge wrote: >> >> On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote: >> > On 6/7/21 8:07 PM, Pete Zhang wrote: >> > > >> > > I hit this error when installing “openstack-nova-scheduler” of release >> > > train.Anyone knows the issue/fix? >> > > What is the librte? is it another rpm i can download somewhere? >> > > or what is the best channel/DL to post this question, thx.Here is what I >> > > did. >> > > >> > > 1. I did this in a test box. >> > > 2. I have puppet-modules installed on the box >> > > 3. I have openstack-release-train’s rpms on the box and built a >> > > local-repo for puppet to install >> > > >> > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' >> > > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > > Requires: librte_mempool_bucket.so.1()(64bit) >> > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) >> > >> > Hi, >> > >> > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though >> > librte is from dpdk. It's likely a bug if nova-scheduler depends on >> > openvswitch (but it's probably not a bug if OVS depends on dpdk if it >> > was compiled with dpdk support). >> >> Packages ending with el7 are probably a bit aged already. You may want >> to switch to something more recent. RDO is only updating the latest >> release. >> I don't know where you got the other packages from, but I can see there >> is no direct dependency from openstack-nova-scheduler to >> openvswitch[1]. On the other side, the openvswitch build indeed requires >> librte[2]. >> >> RDO describes the used repositories[3], and you may want to enable >> CentOS extras. >> >> [1] https://urldefense.com/v3/__https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNr_Q_7lQ$ >> [2] https://urldefense.com/v3/__https://cbs.centos.org/koji/rpminfo?rpmID=173673__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNRaMe3hM$ >> [3] https://urldefense.com/v3/__https://www.rdoproject.org/documentation/repositories/__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNI36Ef5g$ >> >> -- >> Matthias Runge >> > > > -- > From katkumar at in.ibm.com Tue Jun 8 09:50:32 2021 From: katkumar at in.ibm.com (Katari Kumar) Date: Tue, 8 Jun 2021 09:50:32 +0000 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Tue Jun 8 10:00:57 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 08 Jun 2021 12:00:57 +0200 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: References: Message-ID: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> On Tuesday, 8 June 2021 11:50:32 CEST Katari Kumar wrote: > > Hi, > > our 3rd party CI (IBM Storage CI, based on zuul v2) uses devstack-gate > scripts to install openstack via devstack and run tempest suite on the > storage. > It works with wallaby but fails on latest master as the devstack project > dropped bionic support. > We are currently trying to use ubuntu focal, but facing issues in devstack > gate script. 
> I understand that all 3rdparty drivers should migrate to Zuul v3 to avoid
> such issues. As devstack-gate is not used in Zuul v3, I see no activity in
> devstack-gate to support latest versions.
> But as there are many existing Zuul v2 users, devstack-gate should continue
> to support latest projects.

This has been communicated several times: devstack-gate should have been
dropped in ussuri already according to the original plan. The plan was delayed
a bit because we had a few relevant legacy jobs around, but the last bits have
been merged recently and there are no further plans to support devstack-gate
for xena.

On your specific issue: I think we had a few focal-based legacy jobs in
victoria before dropping them, so you can probably tune the jobs to work with
devstack-gate. But this won't work when Xena is branched.

So please prioritize the migration to Zuul v3, rather than trying to patch an
unsupported software stack. During the last PTG, in the Cinder session, a 3rd
party CI shared their experience with the migration using Software Factory as
the Zuul distribution; you can find the recording here:

https://www.youtube.com/watch?v=hVLpPBldn7g&t=426
https://wiki.openstack.org/wiki/CinderXenaPTGSummary#Using_Software_Factory_for_Cinder_Third_Party_CI

--
Luigi

From rafaelweingartner at gmail.com  Tue Jun  8 11:08:16 2021
From: rafaelweingartner at gmail.com (Rafael Weingärtner)
Date: Tue, 8 Jun 2021 08:08:16 -0300
Subject: Re: [CLOUDKITTY] Fix tests cases broken by flask >=2.0.1
In-Reply-To:
References:
Message-ID:

Glad to help!

On Tue, Jun 8, 2021 at 5:19 AM Pierre Riteau wrote:
> Thanks a lot Rafael for fixing this gate blocker!
>
> On Tue, 1 Jun 2021 at 15:55, Rafael Weingärtner wrote:
> >
> > Hello guys,
> > I was reviewing the patch
> > https://review.opendev.org/c/openstack/cloudkitty/+/793790, and decided
> > to propose an alternative patch
> > (https://review.opendev.org/c/openstack/cloudkitty/+/793973).
> >
> > Could you guys review it?
> >
> > The idea I am proposing is that, instead of mocking the root object
> > ("flask.request"), we address the issue by mocking only the needed methods
> > and attributes. This facilitates the understanding of the unit test, and
> > also helps people to pin-point problems right away, as the mocked
> > attributes/methods are clearly seen in the unit test.
> >
> > --
> > Rafael Weingärtner

--
Rafael Weingärtner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From syedammad83 at gmail.com  Tue Jun  8 12:07:28 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Tue, 8 Jun 2021 17:07:28 +0500
Subject: [nova][glance] Instance Password Reset
Message-ID:

Hi,

I am trying to enable guest password reset for Windows and Linux guests. Is it
possible to do it while the instance is running? I am using the Wallaby release.

By searching, I have found that qemu-guest-agent is required to reset the
password, but I didn't see the image property for it.

https://opendev.org/openstack/glance/src/branch/stable/wallaby/doc/source/admin/useful-image-properties.rst

On another link I have found the hw_qemu_guest_agent image property, which
should help with this, but that page looks quite old and the property seems to
be deprecated.

https://wiki.openstack.org/wiki/VirtDriverImageProperties

My objective is to reset the Linux and Windows guest password or inject a new
key pair. I need your help on how to achieve this.

- Ammad
-------------- next part --------------
An HTML attachment was scrubbed...
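For reference on the question above: as far as I know hw_qemu_guest_agent is
still honoured by the libvirt driver (it is documented on the nova side rather
than in the glance list), and resetting the password of a running guest goes
through the guest agent. A minimal sketch, assuming the guest image really has
qemu-guest-agent installed and running, and with placeholder UUIDs:

    # tag the image so nova adds a guest-agent channel to instances booted from it
    openstack image set --property hw_qemu_guest_agent=yes <image-uuid>

    # later, change the root/Administrator password of a running server
    # (the changePassword server action; with libvirt it needs the guest agent)
    nova set-password <server-uuid>

Key pairs, on the other hand, are only applied at boot or rebuild time, so
replacing a key on a running instance still has to be done from inside the guest.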
URL: From fernandoperches at gmail.com Tue Jun 8 12:11:40 2021 From: fernandoperches at gmail.com (Fernando Ferraz) Date: Tue, 8 Jun 2021 09:11:40 -0300 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> Message-ID: Hello, The NetApp CI for Cinder also relies on Zuul v2. We were able to recently move our jobs to focal, but dropping devstack-gate is a big concern considering our team size and schedule. Luigi, could you clarify what would immediately break after xena is branched? Fernando Em ter., 8 de jun. de 2021 às 07:05, Luigi Toscano escreveu: > On Tuesday, 8 June 2021 11:50:32 CEST Katari Kumar wrote: > > > > Hi, > > > > our 3rd party CI (IBM Storage CI, based on zuul v2) uses devstack-gate > > scripts to install openstack via devstack and run tempest suite on the > > storage. > > It works with wallaby but fails on latest master as the devstack project > > dropped bionic support. > > We are currently trying to use ubuntu focal, but facing issues in > devstack > > gate script. > > I understand that all 3rdparty drivers should migrate to Zuul v3 to avoid > > such issues. As devstack-gate is not used in Zuul V3 , i see no activity > in > > devstack-gate to support latest versions. > > But as there are many existing Zuul v2 users, devstack-gate should > continue > > to support latest projects. > > This has been communicated several times: devstack-gate should have been > dropped in ussuri already according the original plan. The plan was > delayed a > bit because we had a few relevant legacy jobs around, but the last bits > have > been merged recently and there are no further plans to support > devstack-gate > for xena. > > On your specific issue: I think we had a few focal-based legacy jobs in > victoria before dropping them, so you may probably tune the jobs to work > with > devstack-gate. But this won't work when Xena is branched. > > So please prioritize the migration to Zuul v3, rather than trying to patch > an > unsupported software stack. During last PTG, in the Cinder session a 3rd > party > CI shared their experience with the migration using Software Factory as > Zuul > distribution, you can find the recording here: > > https://www.youtube.com/watch?v=hVLpPBldn7g&t=426 > > https://wiki.openstack.org/wiki/ > CinderXenaPTGSummary#Using_Software_Factory_for_Cinder_Third_Party_CI > > > > -- > Luigi > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Tue Jun 8 12:42:21 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 08 Jun 2021 14:42:21 +0200 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> Message-ID: <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com> On Tuesday, 8 June 2021 14:11:40 CEST Fernando Ferraz wrote: > Hello, > > The NetApp CI for Cinder also relies on Zuul v2. We were able to > recently move our jobs to focal, but dropping devstack-gate is a big > concern considering our team size and schedule. > Luigi, could you clarify what would immediately break after xena is > branched? 
>

For example grenade jobs won't work anymore, because there won't be any new
entry related to stable/xena added here to devstack-vm-gate-wrap.sh:

https://opendev.org/openstack/devstack-gate/src/branch/master/devstack-vm-gate-wrap.sh#L335

I understand that grenade testing is probably not relevant for 3rd party CIs
(it should be, but that's a different discussion), but the main point is that
devstack-gate is already now in almost-maintenance mode. The minimal set of
fixes that have been merged has been used to keep the very few legacy jobs
defined on opendev.org working, and that number is basically 0 at this point.

This means that there are a ton of potential breakages happening anytime, and
the focal change is just one (and each one of you, CI owner, had to fix it on
your own). Others may come anytime and they won't be detected nor investigated
anymore, because we have had de-facto no legacy jobs around since wallaby.

To summarize: if you use Zuul v2, you have been running for a long while on an
unsupported software stack. The last tiny bits which could be used on both
zuulv2 and zuulv3 in legacy mode to ease the transition are unsupported too.

This problem, I believe, has been communicated periodically by the various
teams, and the time to migrate is... last month. Please hurry up!

Ciao
--
Luigi

From smooney at redhat.com  Tue Jun  8 13:30:12 2021
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 08 Jun 2021 14:30:12 +0100
Subject: [ops] [oslo] [nova] rabbitmq queues for nova versioned notifications queues keep filling up
In-Reply-To: <60BE259700B103CE00390001_0_33274@msllnjpmsgsv06>
References: <60BE259700B103CE00390001_0_33274@msllnjpmsgsv06>
Message-ID: <05786003e021db2132418c0d561bcd5da2795ed9.camel@redhat.com>

On Mon, 2021-06-07 at 13:56 +0000, Ajay Tikoo (BLOOMBERG/ 120 PARK) wrote:
> Thank you, Christopher.
>
> From: cmccarth at mathworks.com At: 06/04/21 11:17:23 UTC-4:00 To: openstack-discuss at lists.openstack.org
> Subject: Re: [ops] rabbitmq queues for nova versioned notifications queues keep filling up
>
> Hi Ajay,
>
> We work around this by setting a TTL on our notifications queues via RabbitMQ policy definition. We include the following in our definitions.json for RabbitMQ:
>
> "policies":[
> {"vhost": "/", "name": "notifications-ttl", "pattern": "^(notifications|versioned_notifications)\\.", "apply-to": "queues", "definition": {"message-ttl":600000}, "priority":0} ]

Adding the oslo and nova tags, as I'm wondering if ^ should be configurable via
oslo.messaging or nova automatically. Perhaps we already have a configuration
option for notification expiration, but I think this could be a useful feature
to add if it's not already present. We have rabbit_transient_queues_ttl
https://docs.openstack.org/nova/latest/configuration/config.html#oslo_messaging_rabbit.rabbit_transient_queues_ttl
but I'm not sure that that is applied to notification queues by default. I'm
wondering if that is a bug that we should correct? The default value is 1800
sec, which is 30 mins and seems reasonable; while longer than the 10 mins Chris
is using, it's better than infinity.

> This expires messages in the notifications and versioned_notifications queues after 10 minutes, which seems to work well for us. I believe we initially picked up this workaround from this[1] bug report.
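For anyone wanting to apply the same workaround without editing
definitions.json, the policy Chris quoted can also be set at runtime. A sketch,
assuming the default "/" vhost and the queue names nova uses:

    rabbitmqctl set_policy --apply-to queues notifications-ttl \
        "^(notifications|versioned_notifications)\." '{"message-ttl":600000}'

and the oslo.messaging option Sean mentions is simply set in the
[oslo_messaging_rabbit] section of the service config, e.g. nova.conf:

    [oslo_messaging_rabbit]
    rabbit_transient_queues_ttl = 600

with the caveat, as noted above, that it is not clear this TTL is applied to
the notification queues by default.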
> > Hope this helps, > > - Chris > > -- > Christopher McCarthy > MathWorks > cmccarth at mathworks.com > > [1] https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1737170 > > > Date: Wed, 2 Jun 2021 22:39:54 -0000 > From: "Ajay Tikoo (BLOOMBERG/ 120 PARK)" > To: openstack-discuss at lists.openstack.org > Subject: [ops] rabbitmq queues for nova versioned notifications queues > keep filling up > Message-ID: <60B808BA00D0068401D80001_0_3025859 at msclnypmsgsv04> > Content-Type: text/plain; charset="utf-8" > > I am not sure if this is the right channel/format to post this question, so my apologies in advance if this is not the right place. > > We are using Openstack Rocky. Watcher needs versioned notifications to be enabled. However after enabling versioned notifications, the queues for versioned_notifications (info and error) keep filling up Based on the updates the the Watchers cluster data model, it appears that Watcher is consuming messages, but they still linger in these queues. So with nova versioned notifications disabled, Watcher is unable to update the cluster data model (between rebuild intervals), and with them enabled, it keeps filling up the MQ queues. What is the best way to resolve this? > > Thank you, > Ajay Tikoo > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Jun 8 13:34:31 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 08 Jun 2021 14:34:31 +0100 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> Message-ID: On Mon, 2021-06-07 at 20:52 +0200, Thomas Goirand wrote: > On 6/7/21 8:07 PM, Pete Zhang wrote: > > > > I hit this error when installing “openstack-nova-scheduler” of release > > train.Anyone knows the issue/fix? > > What is the librte? is it another rpm i can download somewhere? > > or what is the best channel/DL to post this question, thx.Here is what I > > did. > > > > 1. I did this in a test box. > > 2. I have puppet-modules installed on the box > > 3. I have openstack-release-train’s rpms on the box and built a > > local-repo for puppet to install > > > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' > > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_mempool_bucket.so.1()(64bit) > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) > > Hi, > > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though > librte is from dpdk. It's likely a bug if nova-scheduler depends on > openvswitch (but it's probably not a bug if OVS depends on dpdk if it > was compiled with dpdk support). ya that is a define bug the scheduler has no dependency on ovs or dpdk > > Cheers, > > Thomas Goirand (zigo) > From smooney at redhat.com Tue Jun 8 13:39:40 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 08 Jun 2021 14:39:40 +0100 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> Message-ID: On Tue, 2021-06-08 at 10:41 +0200, Pierre Riteau wrote: > RDO packages for Ussuri and Victoria are available CentOS 8 [1] and > CentOS Stream 8 [2]. 
> > [1] http://mirror.centos.org/centos/8/cloud/x86_64/
> > [2] http://mirror.centos.org/centos/8-stream/cloud/x86_64/
> >
> > On Tue, 8 Jun 2021 at 02:24, Pete Zhang wrote:
> > >
> > > Matthias,
> > >
> > > These steps, "install python-nova", "install openstack-nova-scheduler", need python-openvswitch-2.11, which in turn looks for libopenvswitch, which is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm. And I have this copy installed on my local repo.

ya so openstack-nova-scheduler does not require python-openvswitch-2.11.

os-vif requires python-openvswitch for the python bindings, but os-vif is not
needed by the scheduler, and even then the python bindings do not need librte_*.

librte_* is an optional dependency of openvswitch. Red Hat chose to build dpdk
support into the ovs-vswitchd binary rather than shipping a separate package,
but openvswitch should not be a mandatory install requirement of any nova rpm,
and neither should librte_*, especially for the controller services.

> > > Trying to figure out which rpm has the librte_*.
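As an aside, the quoted question ("which rpm has the librte_*") can be answered
straight from the repos; on CentOS 7 something along these lines should point at
the dpdk package the openvswitch build was linked against (repoquery assumes
yum-utils is installed):

    yum provides 'librte_eal.so.9()(64bit)'
    repoquery --whatprovides 'librte_pmd_i40e.so.2()(64bit)'

    # and to list what a given openvswitch rpm actually requires:
    rpm -qp --requires openvswitch-2.12.0-1.el7.x86_64.rpm | grep librte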
> > > > > > [1] https://urldefense.com/v3/__https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNr_Q_7lQ$ > > > [2] https://urldefense.com/v3/__https://cbs.centos.org/koji/rpminfo?rpmID=173673__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNRaMe3hM$ > > > [3] https://urldefense.com/v3/__https://www.rdoproject.org/documentation/repositories/__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNI36Ef5g$ > > > > > > -- > > > Matthias Runge > > > > > > > > > -- > > > From smooney at redhat.com Tue Jun 8 14:04:59 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 08 Jun 2021 15:04:59 +0100 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> Message-ID: <6ee439b13fac2fbea15de4aeaca8067586856357.camel@redhat.com> just looking into the rdo packaging python-nova depend on python-os-vif which depends on python-ovsdbapp which depens on python3-openvswitch https://github.com/rdo-packages/ovsdbapp-distgit/blob/rpm-master/python-ovsdbapp.spec#L51 i would have too double check if os-vif is technially requried for any contol plane service but i belive we only use it within the compute service currently. python3-openvswitch appears to only required libopenvswitch https://cbs.centos.org/koji/rpminfo?rpmID=183064 not the full ovs package the problem is that apprently libopenvswitch is not packaged seperatly and is provided by the main openvswitch package which is not correct. https://cbs.centos.org/koji/rpminfo?rpmID=183069 that is pulling in dpdk. it looks like there are no rhle 8 build of dpdk from what im seeing quickly but dpdk is what provides those missing libs https://cbs.centos.org/koji/rpminfo?rpmID=138108 i think the correct packaging fix woudl be do have libopenvswitch be provided by a spereate package e.g. an openvswich-common or similar that did not have the depencies on dpdk. althernitvaly we coudl package dpdk on centos 8 but really you shoudl not need to install it to install the nova scheduler. On Tue, 2021-06-08 at 14:39 +0100, Sean Mooney wrote: > On Tue, 2021-06-08 at 10:41 +0200, Pierre Riteau wrote: > > RDO packages for Ussuri and Victoria are available CentOS 8 [1] and > > CentOS Stream 8 [2]. > > > > [1] http://mirror.centos.org/centos/8/cloud/x86_64/ > > [2] http://mirror.centos.org/centos/8-stream/cloud/x86_64/ > > > > On Tue, 8 Jun 2021 at 02:24, Pete Zhang wrote: > > > > > > Matthias, > > > > > > These steps, "install python-nova", "install openstack-nova-scheduler" need python-openvswitch-2.11 which in turn looking for libopenvswitch which is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm. And I have this copy installed on my local repo. > ya so openstack-nova-scheduler does not required python-openvswitch-2.11 > > os-vif required python-openvswitch for the python bidnigns but os-vif is not needed by the scheduler. > and even then the python binding do not need librte_* > > librte_* is an optional depency of openvswitch. redhat choose to build in dpdk supprot into the ovs-vswitchd binday rahter then shiping a seperate > package but openvswitch shoudl not be a mandatory install requirement of any nova rpm. nor should librte_* espically the contoler services. > > > > > > > Trying to figure out which rpm has the librte_*. 
> > > > > > BTW, I got most rpms from http://mirror.centos.org/centos/7/cloud/x86_64/. Which has rpms for train, stein, rocky and queens. > > > Is there a similar site for later releases like Ussuri or Victoria? > > > > > > Pete > > > > > > > > > On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge wrote: > > > > > > > > On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote: > > > > > On 6/7/21 8:07 PM, Pete Zhang wrote: > > > > > > > > > > > > I hit this error when installing “openstack-nova-scheduler” of release > > > > > > train.Anyone knows the issue/fix? > > > > > > What is the librte? is it another rpm i can download somewhere? > > > > > > or what is the best channel/DL to post this question, thx.Here is what I > > > > > > did. > > > > > > > > > > > > 1. I did this in a test box. > > > > > > 2. I have puppet-modules installed on the box > > > > > > 3. I have openstack-release-train’s rpms on the box and built a > > > > > > local-repo for puppet to install > > > > > > > > > > > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' > > > > > > Error: Execution of '/bin/yum -d 0 -e 0 -y install openstack-nova-scheduler' returned 1: Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > > > Requires: librte_mempool_bucket.so.1()(64bit) > > > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) > > > > > > > > > > Hi, > > > > > > > > > > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though > > > > > librte is from dpdk. It's likely a bug if nova-scheduler depends on > > > > > openvswitch (but it's probably not a bug if OVS depends on dpdk if it > > > > > was compiled with dpdk support). > > > > > > > > Packages ending with el7 are probably a bit aged already. You may want > > > > to switch to something more recent. RDO is only updating the latest > > > > release. > > > > I don't know where you got the other packages from, but I can see there > > > > is no direct dependency from openstack-nova-scheduler to > > > > openvswitch[1]. On the other side, the openvswitch build indeed requires > > > > librte[2]. > > > > > > > > RDO describes the used repositories[3], and you may want to enable > > > > CentOS extras. 
> > > > > > > > [1] https://urldefense.com/v3/__https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNr_Q_7lQ$ > > > > [2] https://urldefense.com/v3/__https://cbs.centos.org/koji/rpminfo?rpmID=173673__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNRaMe3hM$ > > > > [3] https://urldefense.com/v3/__https://www.rdoproject.org/documentation/repositories/__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNI36Ef5g$ > > > > > > > > -- > > > > Matthias Runge > > > > > > > > > > > > > -- > > > > > > > From gmann at ghanshyammann.com Tue Jun 8 14:14:35 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 08 Jun 2021 09:14:35 -0500 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com> References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com> Message-ID: <179ebf99f29.d451fbf0365691.4329366033312889323@ghanshyammann.com> ---- On Tue, 08 Jun 2021 07:42:21 -0500 Luigi Toscano wrote ---- > On Tuesday, 8 June 2021 14:11:40 CEST Fernando Ferraz wrote: > > Hello, > > > > The NetApp CI for Cinder also relies on Zuul v2. We were able to > > recently move our jobs to focal, but dropping devstack-gate is a big > > concern considering our team size and schedule. > > Luigi, could you clarify what would immediately break after xena is > > branched? > > > > For example grenade jobs won't work anymore because there won't be any new > entry related to stable/xena added here to devstack-vm-gate-wrap.sh: > > https://opendev.org/openstack/devstack-gate/src/branch/master/devstack-vm-gate-wrap.sh#L335 > > I understand that grenade testing is probably not relevant for 3rd party CIs > (it should be, but that's a different discussion), but the main point is that > devstack-gate is already now in almost-maintenance mode. The minimum amount of > fixed that have been merged have been used to keep working the very few legacy > jobs defined on opendev.org, and that number is basically 0 at this point. > > This mean that there are a ton of potential breakages happening anytime, and > the focal change is just one (and each one of you, CI owner, had to fix it on > your own). Others may come anytime and they won't be detected nor investigated > anymore because we don't have de-facto legacy jobs around since wallaby. > > To summarize: if you use Zuul v2, you have been running for a long while on an > unsupported software stack. The last tiny bits which could be used on both > zuulv2 and zuulv3 in legacy mode to easy the transition are unsupported too. > > This problem, I believe, has been communicated periodically by the various > team and the time to migrate is... last month. Please hurry up! Yes, we have done this migration in Victoria release cycle with two community-wide goals together with the direction of moving all the CI from devstack gate from wallaby itself. But by seeing few jobs and especially 3rd party CI, we extended the devstack-gate support for wallaby release [1]. So we extended the support for one more release until stable/wallaby. NOTE: supporting a extra release extend the devstack-gate support until that release until that become EOL, as we need to support that release stable CI. So it is not just a one more cycle support but even longer time of 1 year or more. 
Now extended the support for Xena cycle also seems very difficult by seeing very less number of contributor or less bandwidth of current core members in devstack-gate. I will plan to officially declare the devstack-gate deprecation with team but please move your CI/CD to latest Focal and to zuulv3 ASAP. 1. https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html 2. https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html [1] https://review.opendev.org/c/openstack/devstack-gate/+/778129 https://review.opendev.org/c/openstack/devstack-gate/+/785010 -gmann > > > Ciao > -- > Luigi > > > > From fungi at yuggoth.org Tue Jun 8 14:42:10 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 8 Jun 2021 14:42:10 +0000 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com> References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com> Message-ID: <20210608144210.qxmahyw3qozygvcd@yuggoth.org> On 2021-06-08 14:42:21 +0200 (+0200), Luigi Toscano wrote: [...] > To summarize: if you use Zuul v2, you have been running for a long > while on an unsupported software stack. The last tiny bits which > could be used on both zuulv2 and zuulv3 in legacy mode to easy the > transition are unsupported too. [...] For very large definitions of "long while." The last official 2.x release of Zuul was in September of 2017, so it's been EOL going on 4 years already. I'm not sure how much more warning people need that they should upgrade? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Tue Jun 8 14:45:45 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 8 Jun 2021 14:45:45 +0000 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: References: Message-ID: <20210608144545.vhghtk6p7mgkmkw6@yuggoth.org> On 2021-06-08 09:50:32 +0000 (+0000), Katari Kumar wrote: [...] > But as there are many existing Zuul v2 users, devstack-gate should > continue to support latest projects. Community software is developed and supported by its users, and devstack-gate is no exception. The people who were maintaining it no longer have any use for it. If you're using it, then it's up to you to keep it working (perhaps with the help of others who are also using it). But in my biased opinion, your time is probably better spent upgrading than trying to limp along with abandonware. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From whayutin at redhat.com Tue Jun 8 15:19:07 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 8 Jun 2021 09:19:07 -0600 Subject: [tripleo] Changing TripleO's release model Message-ID: Greetings TripleO community! At the most recent TripleO community meetings we have discussed formally changing the OpenStack release model for TripleO [1]. The previous released projects can be found here [2]. TripleO has previously released with release-type[‘trailing’, ‘cycle-with-intermediary’]. To quote the release model doc: ‘Trailing deliverables trail the release, so they cannot, by definition, be independent. 
They need to pick between cycle-with-rc or cycle-with-intermediary models.’ We are proposing to update the release-model to ‘independent’. This would give the TripleO community more flexibility in when we choose to cut a release. In turn this would mean less backporting, less upstream and 3rd party resources used by potentially some future releases. To quote the release model doc: ‘Some projects opt to completely bypass the 6-month cycle and release independently. For example, that is the case of projects that support the development infrastructure. The “independent” model describes such projects.’ The discussion here is to merely inform the greater community with regards to the proposal and conversations regarding the release model. This thread is NOT meant to discuss previous releases or their supported status, merely changing the release model here [3] [0] https://etherpad.opendev.org/p/tripleo-meeting-items [1] https://releases.openstack.org/reference/release_models.html [2] https://releases.openstack.org/teams/tripleo.html [3] https://opendev.org/openstack/releases/src/branch/master/deliverables/xena -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Mon Jun 7 20:56:56 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 7 Jun 2021 13:56:56 -0700 Subject: [RDO] Re: Getting error during install openstack-nova-scheduler In-Reply-To: References: Message-ID: Julia, The openstack-vswitch is required (>=11.0.0 < 12.0.0) by openstack-neutron (v15.0.0, from openstack-release-train, the release we chose). I downloaded openstack-vswitch-11.0.0 from https://forge.puppet.com/modules/openstack/vswitch/11.0.0. Any idea where I can download the missing librtb? thanks. Pete On Mon, Jun 7, 2021 at 12:20 PM Julia Kreger wrote: > Greetings Pete, > > I'm going to guess your issue may actually be with RDO packaging > dependencies than with the nova project itself. I guess there is there > is a dependency issue for Centos7? Are any RDO contributors aware of > this? I suspect you need Centos Extra enabled as a couple of the > required files/libraries are sourced from packages in extras, such > openvswitch itself and dpdk. > > -Julia > > On Mon, Jun 7, 2021 at 10:09 AM Pete Zhang wrote: > > > > > > I hit the following errors and would like to know the fix. 
> > > > > > > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install > openstack-nova-scheduler' > > > > Error: Execution of '/bin/yum -d 0 -e 0 -y install > openstack-nova-scheduler' returned 1: Error: Package: > 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool_bucket.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_2.1)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bus_vmbus.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mbuf.so.4()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_mlx4.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4()(64bit) > > > > Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: libsodium.so.23()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_17.05)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_meter.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_2.0)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_eal.so.9(DPDK_18.11)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool.so.5()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ring.so.2(DPDK_2.0)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_gso.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool.so.5(DPDK_2.0)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_netvsc.so.1()(64bit) > > > > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) > > > > Requires: python2-tooz >= 1.58.0 > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_bnxt.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_gro.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_latencystats.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: 
librte_ethdev.so.11(DPDK_18.08)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_mlx5.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_member.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_eal.so.9(DPDK_2.0)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_16.07)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_nfp.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_tap.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_eal.so.9(DPDK_17.08)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bus_pci.so.2()(64bit) > > > > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) > > > > Requires: python2-os-traits >= 0.16.0 > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_meter.so.2(DPDK_2.0)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pdump.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_vdev_netvsc.so.1()(64bit) > > > > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) > > > > Requires: python2-os-resource-classes >= 0.4.0 > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ring.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_meter.so.2(DPDK_18.08)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_failsafe.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool_ring.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_ixgbe.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_eal.so.9()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bitratestats.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_17.08)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool.so.5(DPDK_16.07)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool_stack.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bus_vdev.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > 
> > Requires: librte_pmd_qede.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_vhost.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_metrics.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_i40e.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pci.so.1()(64bit) > > > > You could try using --skip-broken to work around the problem > > > > You could try running: rpm -Va --nofiles --nodigest > > > > Error: > /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: > change from 'purged' to 'present' failed: Execution of '/bin/yum -d 0 -e 0 > -y install openstack-nova-scheduler' returned 1: Error: Package: > 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool_bucket.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_2.1)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bus_vmbus.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mbuf.so.4()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_mlx4.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4()(64bit) > > > > Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: libsodium.so.23()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_17.05)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_meter.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_2.0)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_eal.so.9(DPDK_18.11)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool.so.5()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ring.so.2(DPDK_2.0)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_gso.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool.so.5(DPDK_2.0)(64bit) > > > > 
Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_netvsc.so.1()(64bit) > > > > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) > > > > Requires: python2-tooz >= 1.58.0 > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_bnxt.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_gro.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_latencystats.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_18.08)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_mlx5.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_member.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_eal.so.9(DPDK_2.0)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_16.07)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_nfp.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_tap.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_eal.so.9(DPDK_17.08)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bus_pci.so.2()(64bit) > > > > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) > > > > Requires: python2-os-traits >= 0.16.0 > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_meter.so.2(DPDK_2.0)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pdump.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_vdev_netvsc.so.1()(64bit) > > > > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch (local_openstack-tnrp) > > > > Requires: python2-os-resource-classes >= 0.4.0 > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ring.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_meter.so.2(DPDK_18.08)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_failsafe.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool_ring.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_ixgbe.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_eal.so.9()(64bit) > > > > Error: Package: 
1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bitratestats.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_vhost.so.4(DPDK_17.08)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool.so.5(DPDK_16.07)(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_mempool_stack.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_bus_vdev.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_qede.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_vhost.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_metrics.so.1()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pmd_i40e.so.2()(64bit) > > > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) > > > > Requires: librte_pci.so.1()(64bit) > > > > You could try using --skip-broken to work around the problem > > > > You could try running: rpm -Va --nofiles --nodigest > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Mon Jun 7 21:22:32 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 7 Jun 2021 14:22:32 -0700 Subject: [RDO] Re: Getting error during install openstack-nova-scheduler In-Reply-To: References: Message-ID: Correct: librte On Mon, Jun 7, 2021 at 1:56 PM Pete Zhang wrote: > Julia, > > The openstack-vswitch is required (>=11.0.0 < 12.0.0) by openstack-neutron > (v15.0.0, from openstack-release-train, the release we chose). > I downloaded openstack-vswitch-11.0.0 from > https://forge.puppet.com/modules/openstack/vswitch/11.0.0. > > Any idea where I can download the missing librtb? thanks. > > Pete > > On Mon, Jun 7, 2021 at 12:20 PM Julia Kreger > wrote: > >> Greetings Pete, >> >> I'm going to guess your issue may actually be with RDO packaging >> dependencies than with the nova project itself. I guess there is there >> is a dependency issue for Centos7? Are any RDO contributors aware of >> this? I suspect you need Centos Extra enabled as a couple of the >> required files/libraries are sourced from packages in extras, such >> openvswitch itself and dpdk. >> >> -Julia >> >> On Mon, Jun 7, 2021 at 10:09 AM Pete Zhang wrote: >> > >> > >> > I hit the following errors and would like to know the fix. 
>> > >> > >> > >> > Debug: Executing: '/bin/yum -d 0 -e 0 -y install >> openstack-nova-scheduler' >> > >> > Error: Execution of '/bin/yum -d 0 -e 0 -y install >> openstack-nova-scheduler' returned 1: Error: Package: >> 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool_bucket.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_2.1)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bus_vmbus.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mbuf.so.4()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_mlx4.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4()(64bit) >> > >> > Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: libsodium.so.23()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_17.05)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_meter.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_eal.so.9(DPDK_18.11)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool.so.5()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ring.so.2(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_gso.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool.so.5(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_netvsc.so.1()(64bit) >> > >> > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch >> (local_openstack-tnrp) >> > >> > Requires: python2-tooz >= 1.58.0 >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_bnxt.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_gro.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: 
librte_latencystats.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_18.08)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_mlx5.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_member.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_eal.so.9(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_16.07)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_nfp.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_tap.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_eal.so.9(DPDK_17.08)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bus_pci.so.2()(64bit) >> > >> > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch >> (local_openstack-tnrp) >> > >> > Requires: python2-os-traits >= 0.16.0 >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_meter.so.2(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pdump.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_vdev_netvsc.so.1()(64bit) >> > >> > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch >> (local_openstack-tnrp) >> > >> > Requires: python2-os-resource-classes >= 0.4.0 >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ring.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_meter.so.2(DPDK_18.08)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_failsafe.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool_ring.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_ixgbe.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_eal.so.9()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bitratestats.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_17.08)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool.so.5(DPDK_16.07)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: 
librte_mempool_stack.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bus_vdev.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_qede.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_vhost.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_metrics.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_i40e.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pci.so.1()(64bit) >> > >> > You could try using --skip-broken to work around the problem >> > >> > You could try running: rpm -Va --nofiles --nodigest >> > >> > Error: >> /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Package[nova-scheduler]/ensure: >> change from 'purged' to 'present' failed: Execution of '/bin/yum -d 0 -e 0 >> -y install openstack-nova-scheduler' returned 1: Error: Package: >> 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool_bucket.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_2.1)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bus_vmbus.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_18.11)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mbuf.so.4()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_mlx4.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4()(64bit) >> > >> > Error: Package: python2-pynacl-1.3.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: libsodium.so.23()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_2.2)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_17.05)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_meter.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_eal.so.9(DPDK_18.11)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool.so.5()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mbuf.so.4(DPDK_2.1)(64bit) >> > >> > Error: Package: 
1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ring.so.2(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_gso.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool.so.5(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_netvsc.so.1()(64bit) >> > >> > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch >> (local_openstack-tnrp) >> > >> > Requires: python2-tooz >= 1.58.0 >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_bnxt.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_gro.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_latencystats.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_18.08)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_mlx5.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_member.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_eal.so.9(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_16.07)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_nfp.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_16.07)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_tap.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_eal.so.9(DPDK_17.08)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bus_pci.so.2()(64bit) >> > >> > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch >> (local_openstack-tnrp) >> > >> > Requires: python2-os-traits >= 0.16.0 >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_meter.so.2(DPDK_2.0)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pdump.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_vdev_netvsc.so.1()(64bit) >> > >> > Error: Package: 1:python2-nova-20.6.0-1.el7.noarch >> (local_openstack-tnrp) >> > >> > Requires: python2-os-resource-classes >= 0.4.0 >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ring.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_ethdev.so.11(DPDK_17.05)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_meter.so.2(DPDK_18.08)(64bit) >> > >> > Error: Package: 
1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_failsafe.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool_ring.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_ixgbe.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_eal.so.9()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bitratestats.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_vhost.so.4(DPDK_17.08)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool.so.5(DPDK_16.07)(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_mempool_stack.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_bus_vdev.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_qede.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_vhost.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_metrics.so.1()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pmd_i40e.so.2()(64bit) >> > >> > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > >> > Requires: librte_pci.so.1()(64bit) >> > >> > You could try using --skip-broken to work around the problem >> > >> > You could try running: rpm -Va --nofiles --nodigest >> >> > > -- > > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From katkumar at in.ibm.com Tue Jun 8 09:42:26 2021 From: katkumar at in.ibm.com (Katari Kumar) Date: Tue, 8 Jun 2021 09:42:26 +0000 Subject: 3rd party CI failures with devstack 'master' using devstack-gate Message-ID: An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue Jun 8 16:46:26 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 08 Jun 2021 18:46:26 +0200 Subject: [nova][placement] Weekly meeting moves to #openstack-nova Message-ID: Hi, As we agreed on the today's meeting[1] we will try out to hold our meetings on the project channel. I've update the agenda page on the Wiki and proposed the patch to update the official meeting schedule page [2]. So next week we will use #openstack-nova for the weekly meeting. cheers, gibi [1] https://meetings.opendev.org/meetings/nova/2021/nova.2021-06-08-16.00.log.html#l-124 [2] https://review.opendev.org/c/opendev/irc-meetings/+/795377 From amoralej at redhat.com Tue Jun 8 16:58:45 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 8 Jun 2021 18:58:45 +0200 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> Message-ID: Hi, Sorry for arriving late. 
On Tue, Jun 8, 2021 at 2:26 AM Pete Zhang wrote: > Matthias, > > These steps, "install python-nova", "install openstack-nova-scheduler" > need python-openvswitch-2.11 which in turn looking for libopenvswitch which > is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm > . And I have this copy > installed on my local repo. > > I just tested installing openstack-nova-scheduler on a fresh centos7 system and worked fine installing openvswitch-2.12.0-el7. # yum install "*-train" # yum install openstack-nova-scheduler Just make sure you have the *extras* repo enabled, which should be by default. librte_* is provided in dpdk package which is in the extras repo. You shouldn't need any local repo. > *Trying to figure out which rpm has the librte_*.* > > BTW, I got most rpms from http://mirror.centos.org/centos/7/cloud/x86_64/. > Which has rpms for train, stein, rocky and queens. > Is there a similar site for later releases like Ussuri or Victoria? > > Train was the last version released for CentOS 7. Ussuri, Victoria and Wallaby are released for CentOS Linux 8 and CentOS Stream 8: http://mirror.centos.org/centos/8-stream/cloud/x86_64/ http://mirror.centos.org/centos/8/cloud/x86_64/ You can enable the repos by just installing centos-release-openstack-[ussuri,victoria,wallaby]. That should be enough. Regards, Alfredo Pete > > > > On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge > wrote: > >> On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote: >> > On 6/7/21 8:07 PM, Pete Zhang wrote: >> > > >> > > I hit this error when installing “openstack-nova-scheduler” of release >> > > train.Anyone knows the issue/fix? >> > > What is the librte? is it another rpm i can download somewhere? >> > > or what is the best channel/DL to post this question, thx.Here is >> what I >> > > did. >> > > >> > > 1. I did this in a test box. >> > > 2. I have puppet-modules installed on the box >> > > 3. I have openstack-release-train’s rpms on the box and built a >> > > local-repo for puppet to install >> > > >> > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install >> openstack-nova-scheduler' >> > > Error: Execution of '/bin/yum -d 0 -e 0 -y install >> openstack-nova-scheduler' returned 1: Error: Package: >> 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >> > > Requires: librte_mempool_bucket.so.1()(64bit) >> > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 >> (local_openstack-tnrp) >> > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) >> > >> > Hi, >> > >> > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though >> > librte is from dpdk. It's likely a bug if nova-scheduler depends on >> > openvswitch (but it's probably not a bug if OVS depends on dpdk if it >> > was compiled with dpdk support). >> >> Packages ending with el7 are probably a bit aged already. You may want >> to switch to something more recent. RDO is only updating the latest >> release. >> I don't know where you got the other packages from, but I can see there >> is no direct dependency from openstack-nova-scheduler to >> openvswitch[1]. On the other side, the openvswitch build indeed requires >> librte[2]. >> >> RDO describes the used repositories[3], and you may want to enable >> CentOS extras. 
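
As a rough sketch of the fix being described here (assuming a stock CentOS 7 box with the default repo files, and yum-utils installed so that yum-config-manager is available):

  # yum-config-manager --enable extras           # extras carries the dpdk package
  # yum provides 'librte_ethdev.so*'             # should resolve to dpdk
  # yum install centos-release-openstack-train   # sets up the RDO Train repos
  # yum install openstack-nova-scheduler

With extras enabled, the librte_* requirements of openvswitch should be satisfied by dpdk instead of failing as in the log above.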
>> >> [1] >> https://urldefense.com/v3/__https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNr_Q_7lQ$ >> [2] >> https://urldefense.com/v3/__https://cbs.centos.org/koji/rpminfo?rpmID=173673__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNRaMe3hM$ >> [3] >> https://urldefense.com/v3/__https://www.rdoproject.org/documentation/repositories/__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNI36Ef5g$ >> >> -- >> Matthias Runge >> >> > > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Tue Jun 8 17:10:14 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 8 Jun 2021 19:10:14 +0200 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: <21fd6563-1ed5-b470-1725-932389f23a6b@debian.org> Message-ID: On Tue, Jun 8, 2021 at 6:58 PM Alfredo Moralejo Alonso wrote: > Hi, > > Sorry for arriving late. > > On Tue, Jun 8, 2021 at 2:26 AM Pete Zhang > wrote: > >> Matthias, >> >> These steps, "install python-nova", "install openstack-nova-scheduler" >> need python-openvswitch-2.11 which in turn looking for libopenvswitch which >> is provided by openvswitch-1:2.12.0-1.el7.x86_64.rpm >> . And I have this copy >> installed on my local repo. >> >> > I just tested installing openstack-nova-scheduler on a fresh centos7 > system and worked fine installing openvswitch-2.12.0-el7. > > # yum install "*-train" > # yum install openstack-nova-scheduler > > Just make sure you have the *extras* repo enabled, which should be by > default. librte_* is provided in dpdk package which is in the extras repo. > You shouldn't need any local repo. > > BTW, extras repo is enabled by default in centos repos config, but you can enable it with: # yum-config-manager --enable extras > >> *Trying to figure out which rpm has the librte_*.* >> >> BTW, I got most rpms from http://mirror.centos.org/centos/7/cloud/x86_64/. >> Which has rpms for train, stein, rocky and queens. >> Is there a similar site for later releases like Ussuri or Victoria? >> >> > Train was the last version released for CentOS 7. Ussuri, Victoria and > Wallaby are released for CentOS Linux 8 and CentOS Stream 8: > > http://mirror.centos.org/centos/8-stream/cloud/x86_64/ > http://mirror.centos.org/centos/8/cloud/x86_64/ > > You can enable the repos by just installing > centos-release-openstack-[ussuri,victoria,wallaby]. That should be enough. > > Regards, > > Alfredo > > > Pete >> >> > > > >> >> On Mon, Jun 7, 2021 at 12:43 PM Matthias Runge >> wrote: >> >>> On Mon, Jun 07, 2021 at 08:52:42PM +0200, Thomas Goirand wrote: >>> > On 6/7/21 8:07 PM, Pete Zhang wrote: >>> > > >>> > > I hit this error when installing “openstack-nova-scheduler” of >>> release >>> > > train.Anyone knows the issue/fix? >>> > > What is the librte? is it another rpm i can download somewhere? >>> > > or what is the best channel/DL to post this question, thx.Here is >>> what I >>> > > did. >>> > > >>> > > 1. I did this in a test box. >>> > > 2. I have puppet-modules installed on the box >>> > > 3. 
I have openstack-release-train’s rpms on the box and built a >>> > > local-repo for puppet to install >>> > > >>> > > Debug: Executing: '/bin/yum -d 0 -e 0 -y install >>> openstack-nova-scheduler' >>> > > Error: Execution of '/bin/yum -d 0 -e 0 -y install >>> openstack-nova-scheduler' returned 1: Error: Package: >>> 1:openvswitch-2.12.0-1.el7.x86_64 (local_openstack-tnrp) >>> > > Requires: librte_mempool_bucket.so.1()(64bit) >>> > > Error: Package: 1:openvswitch-2.12.0-1.el7.x86_64 >>> (local_openstack-tnrp) >>> > > Requires: librte_ethdev.so.11(DPDK_18.05)(64bit) >>> > >>> > Hi, >>> > >>> > I'm not a Red Hat user (but the OpenStack maintainer in Debian). Though >>> > librte is from dpdk. It's likely a bug if nova-scheduler depends on >>> > openvswitch (but it's probably not a bug if OVS depends on dpdk if it >>> > was compiled with dpdk support). >>> >>> Packages ending with el7 are probably a bit aged already. You may want >>> to switch to something more recent. RDO is only updating the latest >>> release. >>> I don't know where you got the other packages from, but I can see there >>> is no direct dependency from openstack-nova-scheduler to >>> openvswitch[1]. On the other side, the openvswitch build indeed requires >>> librte[2]. >>> >>> RDO describes the used repositories[3], and you may want to enable >>> CentOS extras. >>> >>> [1] >>> https://urldefense.com/v3/__https://github.com/rdo-packages/nova-distgit/blob/train-rdo/openstack-nova.spec__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNr_Q_7lQ$ >>> [2] >>> https://urldefense.com/v3/__https://cbs.centos.org/koji/rpminfo?rpmID=173673__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNRaMe3hM$ >>> [3] >>> https://urldefense.com/v3/__https://www.rdoproject.org/documentation/repositories/__;!!DCbAVzZNrAf4!RKlcUEHBI3PvESOWZQ8z_KbIQjfkOEbCIaOj9bzgtDMQ58uyTEnQlD5QiYYfwVDNI36Ef5g$ >>> >>> -- >>> Matthias Runge >>> >>> >> >> -- >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Jun 8 17:12:11 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 08 Jun 2021 10:12:11 -0700 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: <179ebf99f29.d451fbf0365691.4329366033312889323@ghanshyammann.com> References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com> <179ebf99f29.d451fbf0365691.4329366033312889323@ghanshyammann.com> Message-ID: <6fc4dc79-2083-4cf3-9ca8-ef6e1dd0ca5d@www.fastmail.com> On Tue, Jun 8, 2021, at 7:14 AM, Ghanshyam Mann wrote: > ---- On Tue, 08 Jun 2021 07:42:21 -0500 Luigi Toscano > wrote ---- > > On Tuesday, 8 June 2021 14:11:40 CEST Fernando Ferraz wrote: > > > Hello, > > > > > > The NetApp CI for Cinder also relies on Zuul v2. We were able to > > > recently move our jobs to focal, but dropping devstack-gate is a > big > > > concern considering our team size and schedule. > > > Luigi, could you clarify what would immediately break after xena is > > > branched? 
> > > > > > > For example grenade jobs won't work anymore because there won't be > any new > > entry related to stable/xena added here to devstack-vm-gate-wrap.sh: > > > > > https://opendev.org/openstack/devstack-gate/src/branch/master/devstack-vm-gate-wrap.sh#L335 > > > > I understand that grenade testing is probably not relevant for 3rd > party CIs > > (it should be, but that's a different discussion), but the main > point is that > > devstack-gate is already now in almost-maintenance mode. The minimum > amount of > > fixed that have been merged have been used to keep working the very > few legacy > > jobs defined on opendev.org, and that number is basically 0 at this > point. > > > > This mean that there are a ton of potential breakages happening > anytime, and > > the focal change is just one (and each one of you, CI owner, had to > fix it on > > your own). Others may come anytime and they won't be detected nor > investigated > > anymore because we don't have de-facto legacy jobs around since > wallaby. > > > > To summarize: if you use Zuul v2, you have been running for a long > while on an > > unsupported software stack. The last tiny bits which could be used > on both > > zuulv2 and zuulv3 in legacy mode to easy the transition are > unsupported too. > > > > This problem, I believe, has been communicated periodically by the > various > > team and the time to migrate is... last month. Please hurry up! > > Yes, we have done this migration in Victoria release cycle with two > community-wide goals together > with the direction of moving all the CI from devstack gate from wallaby > itself. But by seeing few jobs > and especially 3rd party CI, we extended the devstack-gate support for > wallaby release [1]. So we > extended the support for one more release until stable/wallaby. > > NOTE: supporting a extra release extend the devstack-gate support until > that release until that become EOL, > as we need to support that release stable CI. So it is not just a one > more cycle support but even longer > time of 1 year or more. > > Now extended the support for Xena cycle also seems very difficult by > seeing very less number of > contributor or less bandwidth of current core members in devstack-gate. > > I will plan to officially declare the devstack-gate deprecation with > team but please move your CI/CD to > latest Focal and to zuulv3 ASAP. These changes have started to go up [2]. I want to clarify a few things though. As far as I can remember we have never required any specific CI system or setup. What we have done are required basic behaviors from the CI system. Things like respond to "recheck", post logs in a publicly accessible location and report them back, have contacts available so we can contact you if things break, and so on. What this means is that some third party CI system are likely running Jenkins. I know others that ran some homegrown thing that watched the Gerrit event stream. We recommend Zuul and now Zuulv3 or newer because it is a tool that we understand and can provide some assistance with. Those that choose not to use the recommended tools are likely to need to invest in their own tooling and debugging. For devstack-gate we will not accept new patches to keep it running against master, but need to keep it around for older stable branches. If those that are running their own set of tools want to keep devstack-gate alive for modern openstack then forking it is likely the best path forward. > > 1. 
> https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > 2. > https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html > > > [1] > https://review.opendev.org/c/openstack/devstack-gate/+/778129 > https://review.opendev.org/c/openstack/devstack-gate/+/785010 [2] https://review.opendev.org/q/topic:%22deprecate-devstack-gate%22+(status:open%20OR%20status:merged) From lyarwood at redhat.com Tue Jun 8 19:03:46 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 8 Jun 2021 20:03:46 +0100 Subject: [nova] destroy_secrets being added to nova.virt.driver.ComputeDriver.{cleanup, destroy} Message-ID: Hello all, I'm looking to introduce and likely backport an optional kwarg to the signature of the cleanup and destroy virt driver methods below: virt: Add destroy_secrets kwarg to destroy and cleanup https://review.opendev.org/c/openstack/nova/+/794252 While this is optional for any callers any out of tree driver implementing either method will need to add this kwarg. This is part of the following bugfix series where I am attempting to avoid secrets from being destroyed during a hard reboot within the libvirt driver: https://review.opendev.org/q/topic:bug/1905701+status:open+branch:master Hopefully this is trivial but if there are any concerns or issues with this then please let me know! Cheers, Lee From sbaker at redhat.com Tue Jun 8 23:52:22 2021 From: sbaker at redhat.com (Steve Baker) Date: Wed, 9 Jun 2021 11:52:22 +1200 Subject: [baremetal-sig][ironic] Tue June 8, 2021, 2pm UTC: The Ironic Python Agent Builder In-Reply-To: <4af9f9ed-dd59-0463-ec41-aa2f2905aafc@cern.ch> References: <4af9f9ed-dd59-0463-ec41-aa2f2905aafc@cern.ch> Message-ID: <67f365a7-4fed-d0c8-e42e-9bbf70305602@redhat.com> On 7/06/21 6:41 pm, Arne Wiebalck wrote: > Dear all, > > The Bare Metal SIG will meet tomorrow Tue June 8, 2021, > at 2pm UTC on zoom. > > The meeting will feature a "topic-of-the-day" presentation > by Dmitry Tantsur (dtantsur) with an > >   "Introduction to the Ironic Python Agent Builder" > > As usual, all details on https://etherpad.opendev.org/p/bare-metal-sig > The recording of this presentation is now available: https://www.youtube.com/watch?v=1L1Ld7skgDw cheers From dsneddon at redhat.com Wed Jun 9 00:42:45 2021 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 8 Jun 2021 17:42:45 -0700 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: Message-ID: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> Thanks for making the announcement. Can you clarify how the feature-freeze dates will be communicated to the greater community of contributors? - Dan Sneddon > On Jun 8, 2021, at 8:21 AM, Wesley Hayutin wrote: >  > Greetings TripleO community! > > At the most recent TripleO community meetings we have discussed formally changing the OpenStack release model for TripleO [1]. The previous released projects can be found here [2]. TripleO has previously released with release-type[‘trailing’, ‘cycle-with-intermediary’]. > > To quote the release model doc: > ‘Trailing deliverables trail the release, so they cannot, by definition, be independent. They need to pick between cycle-with-rc or cycle-with-intermediary models.’ > > We are proposing to update the release-model to ‘independent’. This would give the TripleO community more flexibility in when we choose to cut a release. In turn this would mean less backporting, less upstream and 3rd party resources used by potentially some future releases. 
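
For reference, the release model of a deliverable is recorded in its file under openstack/releases, so the proposal would amount to switching the release-model field in the TripleO deliverable files. A minimal sketch, using tripleo-heat-templates only as an example file name (see [3] below for the actual deliverables directory):

  $ git clone https://opendev.org/openstack/releases && cd releases
  $ grep '^release-model:' deliverables/xena/tripleo-heat-templates.yaml
  $ sed -i 's/^release-model:.*/release-model: independent/' deliverables/xena/tripleo-heat-templates.yaml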
> > To quote the release model doc: > ‘Some projects opt to completely bypass the 6-month cycle and release independently. For example, that is the case of projects that support the development infrastructure. The “independent” model describes such projects.’ > > The discussion here is to merely inform the greater community with regards to the proposal and conversations regarding the release model. This thread is NOT meant to discuss previous releases or their supported status, merely changing the release model here [3] > > > [0] https://etherpad.opendev.org/p/tripleo-meeting-items > [1] https://releases.openstack.org/reference/release_models.html > [2] https://releases.openstack.org/teams/tripleo.html > [3] https://opendev.org/openstack/releases/src/branch/master/deliverables/xena -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Jun 9 01:59:56 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 8 Jun 2021 19:59:56 -0600 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac> References: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac> Message-ID: Seeing no objections.... Congrats Sandeep :) On Fri, Jun 4, 2021 at 2:31 AM Gaël Chamoulaud wrote: > Of course, a big +1! > > On 02/Jun/2021 14:17, Marios Andreou wrote: > > Hello all > > > > Having discussed this with some members of the tripleo ci team > > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: > > ysandeep) for core on the tripleo-ci repos (tripleo-ci, > > tripleo-quickstart and tripleo-quickstart-extras). > > > > Sandeep joined the team about 1.5 years ago and has from the start > > demonstrated his eagerness to learn and an excellent work ethic, > > having made many useful code submissions [1] and code reviews [2] to > > the CI repos and beyond. Thanks Sandeep and keep up the good work! > > > > Please reply to this mail with a +1 or -1 for objections in the usual > > manner. If there are no objections we can declare it official in a few > > days > > > > regards, marios > > > > [1] https://review.opendev.org/q/owner:sandeepyadav93 > > [2] > https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 > > > > > > Best Regards, > Gaël > > -- > Gaël Chamoulaud - (He/Him/His) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manchandavishal143 at gmail.com Wed Jun 9 05:03:32 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Wed, 9 Jun 2021 10:33:32 +0530 Subject: [horizon] Weekly meeting move to #openstack-horizon channel Message-ID: Hello everyone, As discussed during the last weekly meeting, I have purposed a patch[1] to move our weekly meeting from openstack-meeting-alt to the openstack-horizon channel. So from today onwards, our weekly meeting will be on the #openstack-horizon channel (OFTC n/w). See you at the meeting. Thanks & regards, Vishal Manchanda [1] https://review.opendev.org/c/opendev/irc-meetings/+/795110 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeepggn93 at gmail.com Wed Jun 9 07:11:22 2021 From: sandeepggn93 at gmail.com (Sandeep Yadav) Date: Wed, 9 Jun 2021 12:41:22 +0530 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac> Message-ID: Thank you all, I'll do my best to use my powers wisely. 
On Wed, Jun 9, 2021 at 7:37 AM Wesley Hayutin wrote: > Seeing no objections.... > > Congrats Sandeep :) > > On Fri, Jun 4, 2021 at 2:31 AM Gaël Chamoulaud > wrote: > >> Of course, a big +1! >> >> On 02/Jun/2021 14:17, Marios Andreou wrote: >> > Hello all >> > >> > Having discussed this with some members of the tripleo ci team >> > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: >> > ysandeep) for core on the tripleo-ci repos (tripleo-ci, >> > tripleo-quickstart and tripleo-quickstart-extras). >> > >> > Sandeep joined the team about 1.5 years ago and has from the start >> > demonstrated his eagerness to learn and an excellent work ethic, >> > having made many useful code submissions [1] and code reviews [2] to >> > the CI repos and beyond. Thanks Sandeep and keep up the good work! >> > >> > Please reply to this mail with a +1 or -1 for objections in the usual >> > manner. If there are no objections we can declare it official in a few >> > days >> > >> > regards, marios >> > >> > [1] https://review.opendev.org/q/owner:sandeepyadav93 >> > [2] >> https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 >> > >> > >> >> Best Regards, >> Gaël >> >> -- >> Gaël Chamoulaud - (He/Him/His) >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Wed Jun 9 08:01:44 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Wed, 9 Jun 2021 10:01:44 +0200 Subject: [tripleo] Changing TripleO's release model In-Reply-To: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> Message-ID: On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon wrote: > Thanks for making the announcement. Can you clarify how the feature-freeze > dates will be communicated to the greater community of contributors? > > - Dan Sneddon > > On Jun 8, 2021, at 8:21 AM, Wesley Hayutin wrote: > >  > > Greetings TripleO community! > > At the most recent TripleO community meetings we have discussed formally > changing the OpenStack release model for TripleO [1]. The previous > released projects can be found here [2]. TripleO has previously released > with release-type[‘trailing’, ‘cycle-with-intermediary’]. > > To quote the release model doc: > > ‘Trailing deliverables trail the release, so they cannot, by definition, > be independent. They need to pick between cycle-with-rc > > or cycle-with-intermediary > > models.’ > > We are proposing to update the release-model to ‘independent’. This would > give the TripleO community more flexibility in when we choose to cut a > release. In turn this would mean less backporting, less upstream and 3rd > party resources used by potentially some future releases. > > What does this change mean in terms of branches and compatibility for OpenStack stable releases?. > To quote the release model doc: > > ‘Some projects opt to completely bypass the 6-month cycle and release > independently. For example, that is the case of projects that support the > development infrastructure. The “independent” model describes such > projects.’ > > The discussion here is to merely inform the greater community with regards > to the proposal and conversations regarding the release model. 
This thread > is NOT meant to discuss previous releases or their supported status, merely > changing the release model here [3] > > > [0] https://etherpad.opendev.org/p/tripleo-meeting-items > > [1] https://releases.openstack.org/reference/release_models.html > > [2] https://releases.openstack.org/teams/tripleo.html > > [3] > https://opendev.org/openstack/releases/src/branch/master/deliverables/xena > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Wed Jun 9 09:02:13 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 9 Jun 2021 12:02:13 +0300 Subject: [tripleo] Changing TripleO's release model In-Reply-To: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> Message-ID: On Wednesday, June 9, 2021, Dan Sneddon wrote: > Thanks for making the announcement. Can you clarify how the feature-freeze > dates will be communicated to the greater community of contributors? > if you mean tripleo contributors then in the usual manner i.e. this mailing list, the IRC meeting etc if you mean the other openstack projects then that hasnt really ever applied since tripleo always has trailed the openstack release. the main thing this buys us is the ability to skip creating a particular branch (assuming we go ahead... for example not creating stable/Y when that time comes) or creating it *much* later than the trailing release model allows us which is 6 months if i recall correctly. In which case again feature freeze wouldnt apply since that stable branch would already have been created by the openstack projects regards , marios > > - Dan Sneddon > > On Jun 8, 2021, at 8:21 AM, Wesley Hayutin wrote: > >  > > Greetings TripleO community! > > At the most recent TripleO community meetings we have discussed formally > changing the OpenStack release model for TripleO [1]. The previous > released projects can be found here [2]. TripleO has previously released > with release-type[‘trailing’, ‘cycle-with-intermediary’]. > > To quote the release model doc: > > ‘Trailing deliverables trail the release, so they cannot, by definition, > be independent. They need to pick between cycle-with-rc > > or cycle-with-intermediary > > models.’ > > We are proposing to update the release-model to ‘independent’. This would > give the TripleO community more flexibility in when we choose to cut a > release. In turn this would mean less backporting, less upstream and 3rd > party resources used by potentially some future releases. > > To quote the release model doc: > > ‘Some projects opt to completely bypass the 6-month cycle and release > independently. For example, that is the case of projects that support the > development infrastructure. The “independent” model describes such > projects.’ > > The discussion here is to merely inform the greater community with regards > to the proposal and conversations regarding the release model. This thread > is NOT meant to discuss previous releases or their supported status, merely > changing the release model here [3] > > > [0] https://etherpad.opendev.org/p/tripleo-meeting-items > > [1] https://releases.openstack.org/reference/release_models.html > > [2] https://releases.openstack.org/teams/tripleo.html > > [3] https://opendev.org/openstack/releases/src/branch/master/ > deliverables/xena > > -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Wed Jun 9 09:05:48 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 9 Jun 2021 11:05:48 +0200 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> Message-ID: Le mer. 9 juin 2021 à 10:05, Alfredo Moralejo Alonso a écrit : > > > On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon wrote: > >> Thanks for making the announcement. Can you clarify how the >> feature-freeze dates will be communicated to the greater community of >> contributors? >> > Feature freeze doesn't exist in the independent model. The independent model isn't bound with the openstack series. Usually the independent model is more suitable for stable projects, deliverables that are in use outside openstack or for deliverables that aren't related to openstack series (e.g openstack/pbr). >> - Dan Sneddon >> >> On Jun 8, 2021, at 8:21 AM, Wesley Hayutin wrote: >> >>  >> >> Greetings TripleO community! >> >> At the most recent TripleO community meetings we have discussed formally >> changing the OpenStack release model for TripleO [1]. The previous >> released projects can be found here [2]. TripleO has previously released >> with release-type[‘trailing’, ‘cycle-with-intermediary’]. >> >> To quote the release model doc: >> >> ‘Trailing deliverables trail the release, so they cannot, by definition, >> be independent. They need to pick between cycle-with-rc >> >> or cycle-with-intermediary >> >> models.’ >> >> We are proposing to update the release-model to ‘independent’. This >> would give the TripleO community more flexibility in when we choose to cut >> a release. In turn this would mean less backporting, less upstream and 3rd >> party resources used by potentially some future releases. >> >> How do you plan to manage the different versions of OSP without upstream branches? Backports can be done by defining downstream branches for OSP, however, that will introduce a gymnastic to filter and select the changes to backport, the sealing between OSP versions will be more difficult to manage downstream. That leads us to the next question. How to deal with RDO? If I'm right RDO is branch based ( https://www.rdoproject.org/what/repos/), that will force some updates here too. That will also impact tools like packstack ( https://www.rdoproject.org/install/packstack/). > >> > What does this change mean in terms of branches and compatibility for > OpenStack stable releases?. > The independent release model means no stable branches anymore for deliverables that follow this model (e.g openstack/pbr). The deliverables that stick to this model aren't no longer coordinated by the openstack series (A, B, C..., Ussuri, Victoria, Wallaby, Xena, Y, Z). However, we should notice that the independent model is different from branchless. Branches can be created by project owners, however, these branches won't be released by the release team, only the master branch will be released. So, maybe you could emulate the OSP versions upstream (some sort of stable branches) and then backport patches to them, however, you'll have to release them by yourself. > >> To quote the release model doc: >> >> ‘Some projects opt to completely bypass the 6-month cycle and release >> independently. For example, that is the case of projects that support the >> development infrastructure. 
The “independent” model describes such >> projects.’ >> >> The discussion here is to merely inform the greater community with >> regards to the proposal and conversations regarding the release model. >> This thread is NOT meant to discuss previous releases or their supported >> status, merely changing the release model here [3] >> >> >> [0] https://etherpad.opendev.org/p/tripleo-meeting-items >> >> [1] https://releases.openstack.org/reference/release_models.html >> >> [2] https://releases.openstack.org/teams/tripleo.html >> >> [3] >> https://opendev.org/openstack/releases/src/branch/master/deliverables/xena >> >> -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Wed Jun 9 09:06:13 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 9 Jun 2021 12:06:13 +0300 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> Message-ID: On Wednesday, June 9, 2021, Alfredo Moralejo Alonso wrote: > > > On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon wrote: > >> Thanks for making the announcement. Can you clarify how the >> feature-freeze dates will be communicated to the greater community of >> contributors? >> >> - Dan Sneddon >> >> On Jun 8, 2021, at 8:21 AM, Wesley Hayutin wrote: >> >>  >> >> Greetings TripleO community! >> >> At the most recent TripleO community meetings we have discussed formally >> changing the OpenStack release model for TripleO [1]. The previous >> released projects can be found here [2]. TripleO has previously released >> with release-type[‘trailing’, ‘cycle-with-intermediary’]. >> >> To quote the release model doc: >> >> ‘Trailing deliverables trail the release, so they cannot, by definition, >> be independent. They need to pick between cycle-with-rc >> >> or cycle-with-intermediary >> >> models.’ >> >> We are proposing to update the release-model to ‘independent’. This >> would give the TripleO community more flexibility in when we choose to cut >> a release. In turn this would mean less backporting, less upstream and 3rd >> party resources used by potentially some future releases. >> >> > What does this change mean in terms of branches and compatibility for > OpenStack stable releases?. > > as i wrote to Dan just now the main thing is that we may delay or even skip a particular branch. For compatibility I guess it means we would have to rely on git tags so perhaps making consistently frequent (eg monthly? or more?) releases for all the tripleo repos. You could then call a particular range of tags as being compatible with stable/Y for example. 
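
A minimal sketch of how that tag-based compatibility could look from the consumer side (the tag value below is invented purely for illustration):

  $ git clone https://opendev.org/openstack/tripleo-heat-templates && cd tripleo-heat-templates
  $ git tag --sort=creatordate | tail -5   # frequent tags cut from master
  $ git checkout 14.3.0                    # a tag documented as compatible with stable/Y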
Does it sound sane/doable from an rdo package build perspective? regards, marios > To quote the release model doc: >> >> ‘Some projects opt to completely bypass the 6-month cycle and release >> independently. For example, that is the case of projects that support the >> development infrastructure. The “independent” model describes such >> projects.’ >> >> The discussion here is to merely inform the greater community with >> regards to the proposal and conversations regarding the release model. >> This thread is NOT meant to discuss previous releases or their supported >> status, merely changing the release model here [3] >> >> >> [0] https://etherpad.opendev.org/p/tripleo-meeting-items >> >> [1] https://releases.openstack.org/reference/release_models.html >> >> [2] https://releases.openstack.org/teams/tripleo.html >> >> [3] https://opendev.org/openstack/releases/src/branch/master/ >> deliverables/xena >> >> -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Wed Jun 9 09:13:10 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 9 Jun 2021 12:13:10 +0300 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac> Message-ID: thanks all for voting ... yes said I would add him in yesterday's irc meeting but weshay beat me to it ;) I just checked and see ysandeep is now in the core reviewers group https://review.opendev.org/admin/groups/0319cee8020840a3016f46359b076fa6b6ea831a,members ysandeep go +2 all the CI things ! regards, marios On Wednesday, June 9, 2021, Wesley Hayutin wrote: > Seeing no objections.... > > Congrats Sandeep :) > > On Fri, Jun 4, 2021 at 2:31 AM Gaël Chamoulaud > wrote: > >> Of course, a big +1! >> >> On 02/Jun/2021 14:17, Marios Andreou wrote: >> > Hello all >> > >> > Having discussed this with some members of the tripleo ci team >> > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: >> > ysandeep) for core on the tripleo-ci repos (tripleo-ci, >> > tripleo-quickstart and tripleo-quickstart-extras). >> > >> > Sandeep joined the team about 1.5 years ago and has from the start >> > demonstrated his eagerness to learn and an excellent work ethic, >> > having made many useful code submissions [1] and code reviews [2] to >> > the CI repos and beyond. Thanks Sandeep and keep up the good work! >> > >> > Please reply to this mail with a +1 or -1 for objections in the usual >> > manner. If there are no objections we can declare it official in a few >> > days >> > >> > regards, marios >> > >> > [1] https://review.opendev.org/q/owner:sandeepyadav93 >> > [2] https://www.stackalytics.io/report/contribution?module= >> tripleo-group&project_type=openstack&days=180 >> > >> > >> >> Best Regards, >> Gaël >> >> -- >> Gaël Chamoulaud - (He/Him/His) >> > -- _sent from my mobile - sorry for spacing spelling etc_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Wed Jun 9 11:27:25 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Wed, 9 Jun 2021 13:27:25 +0200 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> Message-ID: On Wed, Jun 9, 2021 at 11:06 AM Marios Andreou wrote: > > > On Wednesday, June 9, 2021, Alfredo Moralejo Alonso > wrote: > >> >> >> On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon wrote: >> >>> Thanks for making the announcement. 
Can you clarify how the >>> feature-freeze dates will be communicated to the greater community of >>> contributors? >>> >>> - Dan Sneddon >>> >>> On Jun 8, 2021, at 8:21 AM, Wesley Hayutin wrote: >>> >>>  >>> >>> Greetings TripleO community! >>> >>> At the most recent TripleO community meetings we have discussed formally >>> changing the OpenStack release model for TripleO [1]. The previous >>> released projects can be found here [2]. TripleO has previously >>> released with release-type[‘trailing’, ‘cycle-with-intermediary’]. >>> >>> To quote the release model doc: >>> >>> ‘Trailing deliverables trail the release, so they cannot, by >>> definition, be independent. They need to pick between cycle-with-rc >>> >>> or cycle-with-intermediary >>> >>> models.’ >>> >>> We are proposing to update the release-model to ‘independent’. This >>> would give the TripleO community more flexibility in when we choose to cut >>> a release. In turn this would mean less backporting, less upstream and 3rd >>> party resources used by potentially some future releases. >>> >>> >> What does this change mean in terms of branches and compatibility for >> OpenStack stable releases?. >> >> > > > as i wrote to Dan just now the main thing is that we may delay or even > skip a particular branch. For compatibility I guess it means we would have > to rely on git tags so perhaps making consistently frequent (eg monthly? or > more?) releases for all the tripleo repos. You could then call a particular > range of tags as being compatible with stable/Y for example. Does it sound > sane/doable from an rdo package build perspective? > > For me it's fine if TripleO provides a list of tags which are able to deploy and coinstall with a certain OpenStack release, let's say stable/Y, i don't see much problem on that, we'd need to figure out how to express that as code. The actual problem I see is how to maintain that working overtime during the maintenance phase of release Y without stable/Y branches or CI jobs for old releases. Let's assume that at GA time for Y you provide a list of tags for tripleo projects coming from the master branch. How will you manage a bug affecting to tripleo on release Y or introduced by any changing factor (OS updates, etc...)?, will master branch be kept tested and compatible with both master and stable/Y (as branchless project do, i.e. tempest)?. Note that frequent releases on master branches will not help to support Y release if a change in the branch depends on changes done in more recent releases. >From RDO, we don't require all packages to have stable branches (we include independent or branchless packages in the distro) but we want to provide a validated combination of packages working for a certain synchronized release and with the mechanism to fix it if it breaks during the maintenance period. I'm not sure how tripleo can do this without branches or maintaining backwards compatibility in master or other branches. > > regards, marios > > > > >> To quote the release model doc: >>> >>> ‘Some projects opt to completely bypass the 6-month cycle and release >>> independently. For example, that is the case of projects that support the >>> development infrastructure. The “independent” model describes such >>> projects.’ >>> >>> The discussion here is to merely inform the greater community with >>> regards to the proposal and conversations regarding the release model. 
>>> This thread is NOT meant to discuss previous releases or their supported >>> status, merely changing the release model here [3] >>> >>> >>> [0] https://etherpad.opendev.org/p/tripleo-meeting-items >>> >>> [1] https://releases.openstack.org/reference/release_models.html >>> >>> [2] https://releases.openstack.org/teams/tripleo.html >>> >>> [3] >>> https://opendev.org/openstack/releases/src/branch/master/deliverables/xena >>> >>> > > -- > _sent from my mobile - sorry for spacing spelling etc_ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 9 11:49:38 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 09 Jun 2021 12:49:38 +0100 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> Message-ID: <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> On Wed, 2021-06-09 at 12:06 +0300, Marios Andreou wrote: > On Wednesday, June 9, 2021, Alfredo Moralejo Alonso > wrote: > > > > > > > On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon wrote: > > > > > Thanks for making the announcement. Can you clarify how the > > > feature-freeze dates will be communicated to the greater community of > > > contributors? > > > > > > - Dan Sneddon > > > > > > On Jun 8, 2021, at 8:21 AM, Wesley Hayutin wrote: > > > > > >  > > > > > > Greetings TripleO community! > > > > > > At the most recent TripleO community meetings we have discussed formally > > > changing the OpenStack release model for TripleO [1]. The previous > > > released projects can be found here [2]. TripleO has previously released > > > with release-type[‘trailing’, ‘cycle-with-intermediary’]. > > > > > > To quote the release model doc: > > > > > > ‘Trailing deliverables trail the release, so they cannot, by definition, > > > be independent. They need to pick between cycle-with-rc > > > > > > or cycle-with-intermediary > > > > > > models.’ > > > > > > We are proposing to update the release-model to ‘independent’. This > > > would give the TripleO community more flexibility in when we choose to cut > > > a release. In turn this would mean less backporting, less upstream and 3rd > > > party resources used by potentially some future releases. > > > > > > > > What does this change mean in terms of branches and compatibility for > > OpenStack stable releases?. > > > > > > > as i wrote to Dan just now the main thing is that we may delay or even skip > a particular branch. For compatibility I guess it means we would have to > rely on git tags so perhaps making consistently frequent (eg monthly? or > more?) releases for all the tripleo repos. You could then call a particular > range of tags as being compatible with stable/Y for example. Does it sound > sane/doable from an rdo package build perspective? > too me this feels like we are leaking downstream product lifecycle into upstream. even if redhat is overwhelmingly the majority contibutor of reviews and commits to ooo im not sure that changing the upstream lifestyle to align more closely with our product life cycle is the correct thing to do. at least while tripleo is still in the Openstack namespaces and not the x namespaces. Skipping upstream release is really quite a radical departure form the project original goals. i think it would also be counter productive to our downstream efforts to move our testing close to upstream. 
if ooo was to lose the ablity to test master for example we would not be able to use ooo in our downstream ci to test feature that we plan to release osp n+1 that are develop during an upstream cycle that wont be productised. i do not work on ooo so at the end of the day this wont affect me much but to me skipping releases seam counter intuitive given the previous efforts to make ooo more usable for development and ci. Moving to independent to decouple the lifecycle seams more reasonable if the underlying goal is not to skip releases. you can release when ready rather then scrambling or wating for a deadline. personally i think moving in the other direction so that ooo can release sooner not later would make the project more appealing as the delay in support of a release is often considered a detractor for tripleo vs other openstack installers. i would hope that this change would not have any effect on the rdo packaging of non ooo packages. the rdo packages are used by other instalation methods (the puppet moduels for example) including i belive some of the larger chineese providers that have written there own installers. i think it would be damaging to centos if rdo was to skip upstream version of say nova. what might need to change is the packaging of ooo itself in rdo. tl;dr im not against the idea of ooo moving to independent model but i would hope that it will not affect RDO's packaging of non ooo projects and that ooo can still be used for ci of master and stable branches of for example nova. regards sean > > regards, marios > > > > > > To quote the release model doc: > > > > > > ‘Some projects opt to completely bypass the 6-month cycle and release > > > independently. For example, that is the case of projects that support the > > > development infrastructure. The “independent” model describes such > > > projects.’ > > > > > > The discussion here is to merely inform the greater community with > > > regards to the proposal and conversations regarding the release model. > > > This thread is NOT meant to discuss previous releases or their supported > > > status, merely changing the release model here [3] > > > > > > > > > [0] https://etherpad.opendev.org/p/tripleo-meeting-items > > > > > > [1] https://releases.openstack.org/reference/release_models.html > > > > > > [2] https://releases.openstack.org/teams/tripleo.html > > > > > > [3] https://opendev.org/openstack/releases/src/branch/master/ > > > deliverables/xena > > > > > > > From amoralej at redhat.com Wed Jun 9 13:17:37 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Wed, 9 Jun 2021 15:17:37 +0200 Subject: [tripleo] Changing TripleO's release model In-Reply-To: <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> Message-ID: On Wed, Jun 9, 2021 at 1:49 PM Sean Mooney wrote: > On Wed, 2021-06-09 at 12:06 +0300, Marios Andreou wrote: > > On Wednesday, June 9, 2021, Alfredo Moralejo Alonso > > > wrote: > > > > > > > > > > > On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon > wrote: > > > > > > > Thanks for making the announcement. Can you clarify how the > > > > feature-freeze dates will be communicated to the greater community of > > > > contributors? > > > > > > > > - Dan Sneddon > > > > > > > > On Jun 8, 2021, at 8:21 AM, Wesley Hayutin > wrote: > > > > > > > >  > > > > > > > > Greetings TripleO community! 
> > > > > > > > At the most recent TripleO community meetings we have discussed > formally > > > > changing the OpenStack release model for TripleO [1]. The previous > > > > released projects can be found here [2]. TripleO has previously > released > > > > with release-type[‘trailing’, ‘cycle-with-intermediary’]. > > > > > > > > To quote the release model doc: > > > > > > > > ‘Trailing deliverables trail the release, so they cannot, by > definition, > > > > be independent. They need to pick between cycle-with-rc > > > > < > https://releases.openstack.org/reference/release_models.html#cycle-with-rc > > > > > > or cycle-with-intermediary > > > > < > https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary > > > > > > models.’ > > > > > > > > We are proposing to update the release-model to ‘independent’. This > > > > would give the TripleO community more flexibility in when we choose > to cut > > > > a release. In turn this would mean less backporting, less upstream > and 3rd > > > > party resources used by potentially some future releases. > > > > > > > > > > > What does this change mean in terms of branches and compatibility for > > > OpenStack stable releases?. > > > > > > > > > > > > as i wrote to Dan just now the main thing is that we may delay or even > skip > > a particular branch. For compatibility I guess it means we would have to > > rely on git tags so perhaps making consistently frequent (eg monthly? or > > more?) releases for all the tripleo repos. You could then call a > particular > > range of tags as being compatible with stable/Y for example. Does it > sound > > sane/doable from an rdo package build perspective? > > > too me this feels like we are leaking downstream product lifecycle into > upstream. > even if redhat is overwhelmingly the majority contibutor of reviews and > commits to > ooo im not sure that changing the upstream lifestyle to align more closely > with our product life > cycle is the correct thing to do. > > at least while tripleo is still in the Openstack namespaces and not the x > namespaces. > Skipping upstream release is really quite a radical departure form the > project original goals. > i think it would also be counter productive to our downstream efforts to > move our testing close to upstream. > if ooo was to lose the ablity to test master for example we would not be > able to use ooo in our downstream ci to test > feature that we plan to release osp n+1 that are develop during an > upstream cycle that wont be productised. > > i do not work on ooo so at the end of the day this wont affect me much but > to me skipping releases seam counter intuitive > given the previous efforts to make ooo more usable for development and ci. > Moving to independent > to decouple the lifecycle seams more reasonable if the underlying goal is > not to skip releases. you can release when ready > rather then scrambling or wating for a deadline. personally i think moving > in the other direction so that ooo can release sooner > not later would make the project more appealing as the delay in support of > a release is often considered a detractor for tripleo vs > other openstack installers. > > i would hope that this change would not have any effect on the rdo > packaging of non ooo packages. > the rdo packages are used by other instalation methods (the puppet moduels > for example) including i belive some of the larger chineese providers that > have written there own installers. 
i think it would be damaging to centos > if rdo was to skip upstream version of say nova. what might need to change > is the packaging of ooo itself in rdo. > > tl;dr im not against the idea of ooo moving to independent model but i > would hope that it will not affect RDO's packaging of non ooo projects and > that > ooo can still be used for ci of master and stable branches of for example > nova. > > RDO has no plans on skipping releases or any other changes affecting non-tripleo packages. The impact of this change (unclear at this point) should only affect the packages for those repos. Note that RDO aims at being used and useful for other users and deployment tools as Puppet modules, Kolla, or others willing to work in CentOS and we'd like to maintain the collaboration with them as needed. Regards, Alfredo > regards > sean > > > > > regards, marios > > > > > > > > > > > To quote the release model doc: > > > > > > > > ‘Some projects opt to completely bypass the 6-month cycle and release > > > > independently. For example, that is the case of projects that > support the > > > > development infrastructure. The “independent” model describes such > > > > projects.’ > > > > > > > > The discussion here is to merely inform the greater community with > > > > regards to the proposal and conversations regarding the release > model. > > > > This thread is NOT meant to discuss previous releases or their > supported > > > > status, merely changing the release model here [3] > > > > > > > > > > > > [0] https://etherpad.opendev.org/p/tripleo-meeting-items > > > > > > > > [1] https://releases.openstack.org/reference/release_models.html > > > > > > > > [2] https://releases.openstack.org/teams/tripleo.html > > > > > > > > [3] https://opendev.org/openstack/releases/src/branch/master/ > > > > deliverables/xena > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 9 13:57:03 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 09 Jun 2021 14:57:03 +0100 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> Message-ID: <4450b0fa40e3decf06f365c10be5e370af923e3b.camel@redhat.com> On Wed, 2021-06-09 at 15:17 +0200, Alfredo Moralejo Alonso wrote: > On Wed, Jun 9, 2021 at 1:49 PM Sean Mooney wrote: > > > On Wed, 2021-06-09 at 12:06 +0300, Marios Andreou wrote: > > > On Wednesday, June 9, 2021, Alfredo Moralejo Alonso > > > > > wrote: > > > > > > > > > > > > > > > On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon > > wrote: > > > > > > > > > Thanks for making the announcement. Can you clarify how the > > > > > feature-freeze dates will be communicated to the greater community of > > > > > contributors? > > > > > > > > > > - Dan Sneddon > > > > > > > > > > On Jun 8, 2021, at 8:21 AM, Wesley Hayutin > > wrote: > > > > > > > > > >  > > > > > > > > > > Greetings TripleO community! > > > > > > > > > > At the most recent TripleO community meetings we have discussed > > formally > > > > > changing the OpenStack release model for TripleO [1]. The previous > > > > > released projects can be found here [2]. TripleO has previously > > released > > > > > with release-type[‘trailing’, ‘cycle-with-intermediary’]. > > > > > > > > > > To quote the release model doc: > > > > > > > > > > ‘Trailing deliverables trail the release, so they cannot, by > > definition, > > > > > be independent. 
They need to pick between cycle-with-rc > > > > > < > > https://releases.openstack.org/reference/release_models.html#cycle-with-rc > > > > > > > > or cycle-with-intermediary > > > > > < > > https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary > > > > > > > > models.’ > > > > > > > > > > We are proposing to update the release-model to ‘independent’. This > > > > > would give the TripleO community more flexibility in when we choose > > to cut > > > > > a release. In turn this would mean less backporting, less upstream > > and 3rd > > > > > party resources used by potentially some future releases. > > > > > > > > > > > > > > What does this change mean in terms of branches and compatibility for > > > > OpenStack stable releases?. > > > > > > > > > > > > > > > > > as i wrote to Dan just now the main thing is that we may delay or even > > skip > > > a particular branch. For compatibility I guess it means we would have to > > > rely on git tags so perhaps making consistently frequent (eg monthly? or > > > more?) releases for all the tripleo repos. You could then call a > > particular > > > range of tags as being compatible with stable/Y for example. Does it > > sound > > > sane/doable from an rdo package build perspective? > > > > > too me this feels like we are leaking downstream product lifecycle into > > upstream. > > even if redhat is overwhelmingly the majority contibutor of reviews and > > commits to > > ooo im not sure that changing the upstream lifestyle to align more closely > > with our product life > > cycle is the correct thing to do. > > > > at least while tripleo is still in the Openstack namespaces and not the x > > namespaces. > > Skipping upstream release is really quite a radical departure form the > > project original goals. > > i think it would also be counter productive to our downstream efforts to > > move our testing close to upstream. > > if ooo was to lose the ablity to test master for example we would not be > > able to use ooo in our downstream ci to test > > feature that we plan to release osp n+1 that are develop during an > > upstream cycle that wont be productised. > > > > i do not work on ooo so at the end of the day this wont affect me much but > > to me skipping releases seam counter intuitive > > given the previous efforts to make ooo more usable for development and ci. > > Moving to independent > > to decouple the lifecycle seams more reasonable if the underlying goal is > > not to skip releases. you can release when ready > > rather then scrambling or wating for a deadline. personally i think moving > > in the other direction so that ooo can release sooner > > not later would make the project more appealing as the delay in support of > > a release is often considered a detractor for tripleo vs > > other openstack installers. > > > > i would hope that this change would not have any effect on the rdo > > packaging of non ooo packages. > > the rdo packages are used by other instalation methods (the puppet moduels > > for example) including i belive some of the larger chineese providers that > > have written there own installers. i think it would be damaging to centos > > if rdo was to skip upstream version of say nova. what might need to change > > is the packaging of ooo itself in rdo. 
> > > > tl;dr im not against the idea of ooo moving to independent model but i > > would hope that it will not affect RDO's packaging of non ooo projects and > > that > > ooo can still be used for ci of master and stable branches of for example > > nova. > > > > > > RDO has no plans on skipping releases or any other changes affecting > non-tripleo packages. The impact of this change (unclear at this point) > should only affect the packages for those repos. ack > > Note that RDO aims at being used and useful for other users and deployment > tools as Puppet modules, Kolla, or others willing to work in CentOS and > we'd like to maintain the collaboration with them as needed. ya that is what i was expecting. thanks for confirming. provided the possible change in ooo direction does not negatively impact the other consumes of rdo i dont really have an objection to ooo changing how they work if peolel think it will make there lives and there customer live simpler in the long run. as i said i do not work on or use ooo frequently but i have consumed the output of rdo via kolla in the past and while i typeically prefer using the source install i know many do use the centos binary install variant using the rdo packages. > > Regards, > > Alfredo > > > > regards > > sean > > > > > > > > regards, marios > > > > > > > > > > > > > > > > To quote the release model doc: > > > > > > > > > > ‘Some projects opt to completely bypass the 6-month cycle and release > > > > > independently. For example, that is the case of projects that > > support the > > > > > development infrastructure. The “independent” model describes such > > > > > projects.’ > > > > > > > > > > The discussion here is to merely inform the greater community with > > > > > regards to the proposal and conversations regarding the release > > model. > > > > > This thread is NOT meant to discuss previous releases or their > > supported > > > > > status, merely changing the release model here [3] > > > > > > > > > > > > > > > [0] https://etherpad.opendev.org/p/tripleo-meeting-items > > > > > > > > > > [1] https://releases.openstack.org/reference/release_models.html > > > > > > > > > > [2] https://releases.openstack.org/teams/tripleo.html > > > > > > > > > > [3] https://opendev.org/openstack/releases/src/branch/master/ > > > > > deliverables/xena > > > > > > > > > > > > > > > > > > > From james.slagle at gmail.com Wed Jun 9 13:58:29 2021 From: james.slagle at gmail.com (James Slagle) Date: Wed, 9 Jun 2021 09:58:29 -0400 Subject: [tripleo] Changing TripleO's release model In-Reply-To: <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> Message-ID: On Wed, Jun 9, 2021 at 7:54 AM Sean Mooney wrote: > too me this feels like we are leaking downstream product lifecycle into > upstream. > even if redhat is overwhelmingly the majority contibutor of reviews and > commits to > ooo im not sure that changing the upstream lifestyle to align more closely > with our product life > cycle is the correct thing to do. > I wouldn't characterize it as "leaking". Instead, we are aiming to accurately reflect what we intend to support as a community (not as any company), based on who we have working on this project. Unfortunately the reality is that no one (community or otherwise) should use TripleO from rocky/stein/ussuri/victoria other than for dev/test in my opinion. 
There's no upgrade path from any of those releases, and the community isn't working on one. However, that is not clearly represented by the project. While we don't intend to retrofit this policy to past branches, part of proposing this change is to help clear that up going forward. > at least while tripleo is still in the Openstack namespaces and not the x > namespaces. > I don't think I understand. At least..."what"? How is the OpenStack namespace related to release models? How does the namespace (which is a construct of how git repositories are organized aiui), have a relation to what is included in an OpenStack release? > Skipping upstream release is really quite a radical departure form the > project original goals. > I disagree with how you remember the history, and I think this is an overstatement. > i think it would also be counter productive to our downstream efforts to > move our testing close to upstream. > if ooo was to lose the ablity to test master for example we would not be > able to use ooo in our downstream ci to test > feature that we plan to release osp n+1 that are develop during an > upstream cycle that wont be productised. > I don't follow the premise. How is it counterproductive to move our testing close to upstream? We'd always continue to test master. When it comes time for OpenStack to branch, such as to create stable/xena in all the service projects, TripleO may choose not to branch, and I think at that point, TripleO would no longer have CI jobs running on stable/xena of those service projects. > > i do not work on ooo so at the end of the day this wont affect me much but > to me skipping releases seam counter intuitive > given the previous efforts to make ooo more usable for development and ci. > Moving to independent > to decouple the lifecycle seams more reasonable if the underlying goal is > not to skip releases. you can release when ready > rather then scrambling or wating for a deadline. I think the "when ready" is part of the problem here. For example, one might look at when we released stable/victoria and claim TripleO was ready. However, when TripleO victoria was released, you could not upgrade from ussuri to victoria. Likewise, you can't upgrade from victoria to wallaby. Were we really ready to release? My take is that we shouldn't have released at all. I think it sends a false signal. An alternative to this entire proposal would be to double down on making TripleO more fully support each OpenStack stable branch. That would mean adding update/upgrade jobs for each stable branch, and doing the development work to actually implement that support (if it's not tested, it's broken), as well as likely adding other missing jobs instead of de-emphasizing testing on these branches. AIUI, we do not want to be adding additional CI jobs of that scale upstream, especially given the complaints about node usage from TripleO from earlier in this cycle. And, we do not have the humans to develop and maintain this additional work. > personally i think moving in the other direction so that ooo can release > sooner > not later would make the project more appealing as the delay in support of > a release is often considered a detractor for tripleo vs > other openstack installers. > I think moving to the independent model does enable us to consider releasing sooner. > > i would hope that this change would not have any effect on the rdo > packaging of non ooo packages. 
> the rdo packages are used by other instalation methods (the puppet moduels > for example) including i belive some of the larger chineese providers that > have written there own installers. i think it would be damaging to centos > if rdo was to skip upstream version of say nova. what might need to change > is the packaging of ooo itself in rdo. > > tl;dr im not against the idea of ooo moving to independent model but i > would hope that it will not affect RDO's packaging of non ooo projects and > that > ooo can still be used for ci of master and stable branches of for example > nova. > We'd continue to always CI master. Not all stable branches would remain covered by TripleO. For example, if TripleO didn't branch and release for xena, you wouldn't have TripleO jobs on nova stable/xena patches. Those jobs don't provide any meaningful feedback for TripleO. Perhaps they do for nova as you are backporting a change through every branch, and you're final destination is a branch where TripleO is expected to be working, such as wallaby. You would want to know if the change broke on xena for example, or if it were something on wallaby. I can see how that would be useful for nova. However, part of what we're saying is that TripleO is trying to simplify what we are supporting and testing, so we can get better at supporting the releases that are most important to our community. Yes, there is some downstream influence here, in the same way that TripleO doesn't support deploying with Ubuntu, because it is less important to our (TripleO) community. I think that's ok, and I see nothing wrong with it. If the service projects (such as nova) want to commit additional resources and the upstream CI node count can handle the increase that properly supporting each stable branch implies, then I think we can weigh that option as well. However, the current status quo of more or less just checking the boxes on each stable branch is wrong and sends a false message in my opinion. That's a big part of what we're trying to correct. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Jun 9 14:16:16 2021 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 9 Jun 2021 08:16:16 -0600 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> Message-ID: On Wed, Jun 9, 2021 at 8:06 AM James Slagle wrote: > > > On Wed, Jun 9, 2021 at 7:54 AM Sean Mooney wrote: > >> too me this feels like we are leaking downstream product lifecycle into >> upstream. >> even if redhat is overwhelmingly the majority contibutor of reviews and >> commits to >> ooo im not sure that changing the upstream lifestyle to align more >> closely with our product life >> cycle is the correct thing to do. >> > > I wouldn't characterize it as "leaking". Instead, we are aiming to > accurately reflect what we intend to support as a community (not as any > company), based on who we have working on this project. > > Unfortunately the reality is that no one (community or otherwise) should > use TripleO from rocky/stein/ussuri/victoria other than for dev/test in my > opinion. There's no upgrade path from any of those releases, and the > community isn't working on one. However, that is not clearly represented by > the project. 
While we don't intend to retrofit this policy to past > branches, part of proposing this change is to help clear that up going > forward. > > >> at least while tripleo is still in the Openstack namespaces and not the x >> namespaces. >> > > I don't think I understand. At least..."what"? How is the OpenStack > namespace related to release models? How does the namespace (which is a > construct of how git repositories are organized aiui), have a relation to > what is included in an OpenStack release? > > >> Skipping upstream release is really quite a radical departure form the >> project original goals. >> > > I disagree with how you remember the history, and I think this is an > overstatement. > > >> i think it would also be counter productive to our downstream efforts to >> move our testing close to upstream. >> > if ooo was to lose the ablity to test master for example we would not be >> able to use ooo in our downstream ci to test >> feature that we plan to release osp n+1 that are develop during an >> upstream cycle that wont be productised. >> > > I don't follow the premise. How is it counterproductive to move our > testing close to upstream? > We'd always continue to test master. When it comes time for OpenStack to > branch, such as to create stable/xena in all the service projects, TripleO > may choose not to branch, and I think at that point, TripleO would no > longer have CI jobs running on stable/xena of those service projects. > > >> >> i do not work on ooo so at the end of the day this wont affect me much >> but to me skipping releases seam counter intuitive >> given the previous efforts to make ooo more usable for development and >> ci. Moving to independent >> to decouple the lifecycle seams more reasonable if the underlying goal is >> not to skip releases. you can release when ready >> rather then scrambling or wating for a deadline. > > > I think the "when ready" is part of the problem here. For example, one > might look at when we released stable/victoria and claim TripleO was ready. > However, when TripleO victoria was released, you could not upgrade from > ussuri to victoria. Likewise, you can't upgrade from victoria to wallaby. > Were we really ready to release? My take is that we shouldn't have released > at all. I think it sends a false signal. > > An alternative to this entire proposal would be to double down on making > TripleO more fully support each OpenStack stable branch. That would mean > adding update/upgrade jobs for each stable branch, and doing the > development work to actually implement that support (if it's not tested, > it's broken), as well as likely adding other missing jobs instead of > de-emphasizing testing on these branches. > > AIUI, we do not want to be adding additional CI jobs of that scale > upstream, especially given the complaints about node usage from TripleO > from earlier in this cycle. And, we do not have the humans to develop and > maintain this additional work. > > >> personally i think moving in the other direction so that ooo can release >> sooner >> not later would make the project more appealing as the delay in support >> of a release is often considered a detractor for tripleo vs >> other openstack installers. >> > > I think moving to the independent model does enable us to consider > releasing sooner. > I think there's also something here that we should highlight in that it's desirable to be able to update the tripleo deployment process outside of openstack. 
By switching to independent and focusing on extracting the version specifics, we could allow for folks to leverage newer tripleo functionality (e.g. when we switched from mistral/zaqar to just ansible) without having to upgrade their openstack as well. > > > >> >> i would hope that this change would not have any effect on the rdo >> packaging of non ooo packages. >> the rdo packages are used by other instalation methods (the puppet >> moduels for example) including i belive some of the larger chineese >> providers that >> have written there own installers. i think it would be damaging to centos >> if rdo was to skip upstream version of say nova. what might need to change >> is the packaging of ooo itself in rdo. >> >> tl;dr im not against the idea of ooo moving to independent model but i >> would hope that it will not affect RDO's packaging of non ooo projects and >> that >> ooo can still be used for ci of master and stable branches of for example >> nova. >> > > We'd continue to always CI master. > > Not all stable branches would remain covered by TripleO. For example, if > TripleO didn't branch and release for xena, you wouldn't have TripleO jobs > on nova stable/xena patches. Those jobs don't provide any meaningful > feedback for TripleO. Perhaps they do for nova as you are backporting a > change through every branch, and you're final destination is a branch where > TripleO is expected to be working, such as wallaby. You would want to know > if the change broke on xena for example, or if it were something on > wallaby. I can see how that would be useful for nova. > > However, part of what we're saying is that TripleO is trying to simplify > what we are supporting and testing, so we can get better at supporting the > releases that are most important to our community. Yes, there is some > downstream influence here, in the same way that TripleO doesn't support > deploying with Ubuntu, because it is less important to our (TripleO) > community. I think that's ok, and I see nothing wrong with it. > > If the service projects (such as nova) want to commit additional resources > and the upstream CI node count can handle the increase that properly > supporting each stable branch implies, then I think we can weigh that > option as well. However, the current status quo of more or less just > checking the boxes on each stable branch is wrong and sends a false message > in my opinion. That's a big part of what we're trying to correct. > > -- > -- James Slagle > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 9 16:13:45 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 09 Jun 2021 17:13:45 +0100 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> Message-ID: <2a9826418077d18c20e7ad718d7a99f4bffa8515.camel@redhat.com> On Wed, 2021-06-09 at 09:58 -0400, James Slagle wrote: > On Wed, Jun 9, 2021 at 7:54 AM Sean Mooney wrote: > > > too me this feels like we are leaking downstream product lifecycle into > > upstream. > > even if redhat is overwhelmingly the majority contibutor of reviews and > > commits to > > ooo im not sure that changing the upstream lifestyle to align more closely > > with our product life > > cycle is the correct thing to do. > > > > I wouldn't characterize it as "leaking". 
Instead, we are aiming to > accurately reflect what we intend to support as a community (not as any > company), based on who we have working on this project. > > Unfortunately the reality is that no one (community or otherwise) should > use TripleO from rocky/stein/ussuri/victoria other than for dev/test in my > opinion. There's no upgrade path from any of those releases, and the > community isn't working on one. However, that is not clearly represented by > the project. While we don't intend to retrofit this policy to past > branches, part of proposing this change is to help clear that up going > forward. if that is the general consensus then yes, i think it would be good to update the docs to highlight that. i had assumed that the goal of ooo was to have all upstream releases be production ready, independent of whether they are productized downstream. > > > > at least while tripleo is still in the Openstack namespaces and not the x > > namespaces. > > > > I don't think I understand. At least..."what"? How is the OpenStack > namespace related to release models? How does the namespace (which is a > construct of how git repositories are organized aiui), have a relation to > what is included in an OpenStack release? i was more thinking that perhaps, if ooo is moving away from the coordinated releases, it might make more sense for it to move to a separate top level project like starlingx; so rather than the x/ namespace, maybe a top level tripleo namespace to better reflect that while it deploys openstack it does not follow the same release cadence. coupled with the fact that ooo already does not follow the normal stable backport policies that other openstack projects follow, and the recent discussion about opting out of the global requirements process, it just felt like ooo was moving away from being a part of openstack to being a related project like starlingx. > > > > Skipping upstream release is really quite a radical departure form the > > project original goals. > > > > I disagree with how you remember the history, and I think this is an > overstatement. > > > > i think it would also be counter productive to our downstream efforts to > > move our testing close to upstream. > > > if ooo was to lose the ablity to test master for example we would not be > > able to use ooo in our downstream ci to test > > feature that we plan to release osp n+1 that are develop during an > > upstream cycle that wont be productised. > > > > I don't follow the premise. How is it counterproductive to move our testing > close to upstream? > We'd always continue to test master. When it comes time for OpenStack to > branch, such as to create stable/xena in all the service projects, TripleO > may choose not to branch, and I think at that point, TripleO would no > longer have CI jobs running on stable/xena of those service projects. so for other projects i guess the impact of that would be removing the tripleo job from our ci pipelines for the stable branch. for nova we do not currently have any ooo based ci jobs, but we had briefly discussed having an ooo-standalone job to do some centos/ooo based testing at one point. devstack is generally the correct tool to test changes to nova, but if ooo was to skip stable releases i think it would mean it would not be a candidate for other projects to use in their ci as an alternative. that is a valid choice for the ooo team to make, but it effectively means we will never have ooo voting jobs in nova if ooo does start skipping upstream releases. 
> > > > > > i do not work on ooo so at the end of the day this wont affect me much but > > to me skipping releases seam counter intuitive > > given the previous efforts to make ooo more usable for development and ci. > > Moving to independent > > to decouple the lifecycle seams more reasonable if the underlying goal is > > not to skip releases. you can release when ready > > rather then scrambling or wating for a deadline. > > > I think the "when ready" is part of the problem here. For example, one > might look at when we released stable/victoria and claim TripleO was ready. > However, when TripleO victoria was released, you could not upgrade from > ussuri to victoria. Likewise, you can't upgrade from victoria to wallaby. > Were we really ready to release? My take is that we shouldn't have released > at all. I think it sends a false signal. well honestly, if i was using ooo as my deployment tool i would have expected upgrade support to be completed prior to the release, yes, but i think this is not an artifact of the release cycle but rather an artifact of upgrade support not being part of the DOD of enabling a new feature in ooo. e.g. if master was required to always be upgradable from the previous release we would not have this issue. that is a lot more work though. > > An alternative to this entire proposal would be to double down on making > TripleO more fully support each OpenStack stable branch. That would mean > adding update/upgrade jobs for each stable branch, and doing the > development work to actually implement that support (if it's not tested, > it's broken), as well as likely adding other missing jobs instead of > de-emphasizing testing on these branches. yes. personally i would prefer if we went in this direction; again, i know why from a downstream perspective we may not want to do this, but in my personal capacity, if i was evaluating a deployment tool, an n to n+1 in-place update of upstream releases would be part of my minimum viable product. > AIUI, we do not want to be adding additional CI jobs of that scale > upstream, especially given the complaints about node usage from TripleO > from earlier in this cycle. And, we do not have the humans to develop and > maintain this additional work. ack, i know we are both human and machine constrained to go down this path. actually, supporting multinode standalone and upgrades of standalone deployments would have significantly reduced the ci time and resources required, but we also don't have the resources to enable that :( we have been trying on and off to enable that for https://opendev.org/openstack/whitebox-tempest-plugin so that we can better test changes before they hit downstream ci, but i think that idea has more or less stalled out. > > > > personally i think moving in the other direction so that ooo can release > > sooner > > not later would make the project more appealing as the delay in support of > > a release is often considered a detractor for tripleo vs > > other openstack installers. > > > > I think moving to the independent model does enable us to consider > releasing sooner. > > > > > > i would hope that this change would not have any effect on the rdo > > packaging of non ooo packages. > > the rdo packages are used by other instalation methods (the puppet moduels > > for example) including i belive some of the larger chineese providers that > > have written there own installers. i think it would be damaging to centos > > if rdo was to skip upstream version of say nova. what might need to change > > is the packaging of ooo itself in rdo. 
> > > > tl;dr im not against the idea of ooo moving to independent model but i > > would hope that it will not affect RDO's packaging of non ooo projects and > > that > > ooo can still be used for ci of master and stable branches of for example > > nova. > > > > We'd continue to always CI master. > > Not all stable branches would remain covered by TripleO. For example, if > TripleO didn't branch and release for xena, you wouldn't have TripleO jobs > on nova stable/xena patches. Those jobs don't provide any meaningful > feedback for TripleO. Perhaps they do for nova as you are backporting a > change through every branch, and you're final destination is a branch where > TripleO is expected to be working, such as wallaby. You would want to know > if the change broke on xena for example, or if it were something on > wallaby. I can see how that would be useful for nova. yes, today we use devstack to validate that, which honestly is enough 99% of the time; where it can be challenging is if the issue we are fixing is centos/ooo specific. granted, we don't currently have ooo based ci on the project i work on, so it's not really a decrease in capability.  we had discussed possibly using upstream ooo to start early validation of new features after the completion of an upstream cycle, from a branch that would not be the basis of a downstream release. we might be able to get the same effect just using master, but the idea was to test earlier. > > However, part of what we're saying is that TripleO is trying to simplify > what we are supporting and testing, so we can get better at supporting the > releases that are most important to our community. Yes, there is some > downstream influence here, in the same way that TripleO doesn't support > deploying with Ubuntu, because it is less important to our (TripleO) > community. I think that's ok, and I see nothing wrong with it. ack, that is fair and i'm not really concerned by that. i think it's correct to serve the needs of the community that consumes the project. > > If the service projects (such as nova) want to commit additional resources > and the upstream CI node count can handle the increase that properly > supporting each stable branch implies, then I think we can weigh that > option as well. However, the current status quo of more or less just > checking the boxes on each stable branch is wrong and sends a false message > in my opinion. That's a big part of what we're trying to correct. > ack. honestly, without multinode standalone and upgrade support for the same, i don't actually think it would add much value above just having a devstack centos job when we are concerned about centos/rhel specific things. the reason i responded initially was mainly prompted by the implication that an ooo change in direction would necessitate rdo also changing its release cycle, but that is not what is being proposed or what will happen, so you can consider my question/comments more or less addressed. From erin at openstack.org Wed Jun 9 16:26:04 2021 From: erin at openstack.org (Erin Disney) Date: Wed, 9 Jun 2021 11:26:04 -0500 Subject: OpenInfra Live - June 10th at 9am CT (1400 UTC) Message-ID: <78AE71C3-FE37-40CF-8053-A6DF742AA2DD@openstack.org> Hi everyone, This week’s OpenInfra Live episode is brought to you by the OpenStack Community. Keeping up with new OpenStack releases can be a challenge. 
In this continuation of the May 20th OpenInfra Live episode, a panel of large scale OpenStack infrastructure operators from Blizzard Entertainment, OVHcloud, Workday, Vexxhost and CERN join us again to further discuss upgrades. Episode: Upgrades in Large Scale OpenStack Infrastructure: The Discussion Date and time: Thursday, June 10th at 9am CT (1400 UTC) You can watch us live on: YouTube: https://www.youtube.com/watch?v=C2fSy005lDs LinkedIn: https://www.linkedin.com/feed/update/urn:li:ugcPost:6806241782301626368/ Facebook: https://www.facebook.com/openinfradev/videos/474846136944392 WeChat: recording will be posted on OpenStack WeChat after the live stream Speakers: Belmiro Moreira (CERN), Arnaud Morin (OVH). Mohammed Naser (Vexxhost), Imtiaz Chowdhury (Workday), Joshua Slater (Blizzard) First Upgrades OpenInfra Live Episode: https://www.youtube.com/watch?v=yf5iFiCg_Tw Thanks, Erin Erin Disney Event Marketing Open Infrastructure Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Wed Jun 9 16:28:25 2021 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Wed, 9 Jun 2021 17:28:25 +0100 Subject: [glance] How to limit access to particular store In-Reply-To: <49F175A2-A993-424B-97BF-F4EFB8129321@poczta.onet.pl> References: <49F175A2-A993-424B-97BF-F4EFB8129321@poczta.onet.pl> Message-ID: On Fri, Jun 4, 2021 at 1:56 PM at wrote: > Hi, > I have Glance with multi-store config and I want one store (not default) > to be read-only for everyone except cloud Admin. How can I do it? Is there > any way to limit store names visibility (which are visible i.e. in > properties section of "openstack image show IMAGE_NAME" output)? > Best regards > Adam Tomas > > Hi Adam, Such limitations are not possible at the moment. The only way to really do this if needed is to expose that "admin only" storage as a local web server and use the http store with locations api exposed to said users only. - jokke -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Wed Jun 9 19:14:17 2021 From: adivya1.singh at gmail.com (Adivya Singh) Date: Thu, 10 Jun 2021 00:44:17 +0530 Subject: Regarding Floating IP not reachbale Message-ID: Hello Team, I need a hint , where to check, as often my floating IP are not reachable in Openstack , it uses OVS based networking , and most of the time if i changed the l3 agent router it start working, I can ping the gateway from the qrouter namespace but can not ping the actual floating IP Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Jun 9 20:01:47 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 9 Jun 2021 17:01:47 -0300 Subject: [cinder] Bug deputy report for week of 2021-06-09 Message-ID: Hello, Sorry for the late report. This is a bug report from 2021-25-05 to 2021-09-06. You're welcome to join the next Cinder Bug Meeting next week. Weekly on Wednesday at 1500 UTC on #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Critical:- High: - https://bugs.launchpad.net/cinder/+bug/1929678 'Attachment_create api leaks reserved attachments'. Assigned to Felix Huettner. Medium:- Low: - https://bugs.launchpad.net/cinder/+bug/1931003 'Add support for Pacific to RBD driver'. Assigned to Jon Bernard. 
- https://bugs.launchpad.net/cinder/+bug/1930526 ' Block Storage API V3 (CURRENT) in cinder - wrong URL for backup-detail'. Unassigned. - https://bugs.launchpad.net/cinder/+bug/1928947 ' Block Storage API V2 (DEPRECATED) - Still see v2 endpoints after disabling per documentation'. Unassigned. - https://bugs.launchpad.net/cinder/+bug/1930773 ' Block Storage API V3 (CURRENT) in cinder - remove Optional flag for key_size and cipher'. Assigned to Sofia Enriquez - https://bugs.launchpad.net/os-brick/+bug/1928331 ' Some operating systems use / lib / udev / SCSI_ ID to get the SCI_ WWN error'. Unassigned. Low:- Incomplete: - https://bugs.launchpad.net/cinder/+bug/1929810 'SVC: unable to create data volume using Compressed template'. Unassigned. Cheers, Sofia -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Wed Jun 9 20:07:11 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Wed, 9 Jun 2021 22:07:11 +0200 Subject: [all][stable][release] Ocata - End of Life In-Reply-To: References: Message-ID: Hi, As I wrote in my previous mails, Ocata is not really maintained and gates are borken. So now the Ocata-EOL transitioning patches [1] have been generated for the rest of the projects that were not transitioned yet. Those teams who don't want to / cannot maintain their stable/ocata branch anymore: * please review the transition patch and +1 it and * clean up the unnecessary zuul jobs If a patch is approved, the last patch on stable/ocata will be tagged with *ocata-eol* tag. This can be checked out after stable/ocata is deleted. [1] https://review.opendev.org/q/topic:ocata-eol Thanks, Előd On 2021. 04. 20. 21:31, Előd Illés wrote: > Hi, > > Sorry, this will be long :) as there are 3 topics around old stable > branches and 'End of Life'. > > 1. Deletion of ocata-eol tagged branches > > With the introduction of Extended Maintenance process [1][2] some cycles > ago, the 'End of Life' (EOL) process also changed: > * branches were no longer EOL tagged and "mass-deleted" at the end of >   maintenance phase > * EOL'ing became a project decision > * if a project decides to cease maintenance of a branch that is in >   Extended Maintenance, then they can tag their branch with $series-eol > > However, the EOL-tagging process was not automated or redefined > process-wise, so that meant the branches that were tagged as EOL were > not deleted. Now (after some changing in tooling) Release Management > team finally will start to delete EOL-tagged branches. > > In this mail I'm sending a *WARNING* to consumers of old stable > branches, especially *ocata*, as we will start deleting the > *ocata-eol* tagged branches in a *week*. (And also newer *-eol branches > later on) > > > 2. Ocata branch > > Beyond the 1st topic we must clarify the future of Ocata stable branch > in general: tempest jobs became broken about ~ a year ago. That means > that projects had two ways forward: > > a. drop tempest testing to unblock gate > b. simply don't support ocata branch anymore > > As far as I see the latter one happened and stable/ocata became > unmaintained probably for every projects. > > So my questions are regarding this: > * Is any project still using/maintaining their stable/ocata branch? > * If not: can Release Team initiate a mass-EOL-tagging of stable/ocata? > > > 3. 
The 'next' old stable branches > > Some projects still support their Pike, Queens and Rocky branches. > These branches use Xenial and py2.7 and both are out of support. This > results broken gates time to time. Especially nowadays. These issues > suggest that these branches are closer and closer to being unmaintained. > So I call the attention of interested parties, who are for example > still consuming these stable branches and using them downstream to put > effort on maintaining the branches and their CI/gates. > > It is a good practice for stable maintainers to check if there are > failures in their projects' periodic-stable jobs [3], as those are > good indicators of the health of their stable branches. And if there > are, then try to fix it as soon as possible. > > > [1] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > [3] > http://lists.openstack.org/pipermail/openstack-stable-maint/2021-April/date.html > > > Thanks, > > Előd > > > From skaplons at redhat.com Wed Jun 9 20:51:33 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 09 Jun 2021 22:51:33 +0200 Subject: Regarding Floating IP not reachbale In-Reply-To: References: Message-ID: <4503925.7tzfGOeFMO@p1> Hi, Dnia środa, 9 czerwca 2021 21:14:17 CEST Adivya Singh pisze: > Hello Team, > > I need a hint , where to check, as often my floating IP are not reachable > in Openstack , it uses OVS based networking , and most of the time if i > changed the > l3 agent router it start working, I can ping the gateway from the qrouter > namespace but can not ping the actual floating IP > > Regards > Adivya Singh First of all You need to know what type of router are You using: DVR, DVR-HA, HA or Legacy. Depending on that You can look in different places why FIP is not working. For HA or Legacy type, please start pinging FIP from outside and try to check, e.g. with tcpdump if packets are visible in qrouter namespace on qg- and qr- interfaces. If that is correct, try to check if You can ping Your fixed IP address from that qrouter namespace. With that test You should be able to figure out if problem is somewhere in the external network (the one from which FIP is) or the tenant network. For DVR routers, test should be similar but there is also fip- namespace on the compute node and You should start checking there. Details about different scenarios are also described in the https://docs.openstack.org/ neutron/latest/admin/deploy-ovs.html[1] - maybe that will be useful for You. -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
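A minimal sketch of the checks described in the reply above, assuming a legacy or HA (non-DVR) router on the network node; the router UUID, the qg-/qr- interface IDs and the addresses are placeholders and will differ in each deployment:

    # find the namespace of the router that hosts the floating IP
    ip netns list | grep qrouter
    # while pinging the floating IP from outside, watch the external (qg-) interface
    sudo ip netns exec qrouter-<router-uuid> tcpdump -n -e -i qg-<id> icmp
    # repeat on the internal (qr-) interface to see if traffic is forwarded to the tenant network
    sudo ip netns exec qrouter-<router-uuid> tcpdump -n -e -i qr-<id> icmp
    # confirm a DNAT rule maps the floating IP to the instance's fixed IP
    sudo ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep <floating-ip>
    # check that the instance's fixed IP answers from inside the namespace
    sudo ip netns exec qrouter-<router-uuid> ping <fixed-ip-of-instance>

If the echo requests are visible on qg- but never leave qr-, the problem is likely in the router/NAT setup; if they never reach qg-, look at the external network instead.
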
URL: From gmann at ghanshyammann.com Thu Jun 10 00:18:18 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 09 Jun 2021 19:18:18 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 10th at 1500 UTC In-Reply-To: <179e8e52bd5.e35e022f316095.4772252122737314526@ghanshyammann.com> References: <179e8e52bd5.e35e022f316095.4772252122737314526@ghanshyammann.com> Message-ID: <179f348b4c6.f26451f4454213.7818415117551889676@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on June 10th at 1500 UTC in #openstack-tc IRC OFTC channel. -https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Xena tracker ** https://etherpad.opendev.org/p/tc-xena-tracker * Migration from 'Freenode' to 'OFTC' (gmann) ** https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc * Recommendation on moving the meeting channel to project channel ** https://review.opendev.org/c/openstack/project-team-guide/+/794839 * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Mon, 07 Jun 2021 18:53:23 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > NOTE: TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE) > > Technical Committee's next weekly meeting is scheduled for June 10th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, June 9th , at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > > -gmann > > From melwittt at gmail.com Thu Jun 10 03:27:08 2021 From: melwittt at gmail.com (melanie witt) Date: Wed, 9 Jun 2021 20:27:08 -0700 Subject: [nova][gate] openstack-tox-pep8 job broken Message-ID: Hi all, The openstack-tox-pep8 job is currently failing with the following error: > nova/crypto.py:39:1: error: Library stubs not installed for "paramiko" (or incompatible with Python 3.8) > nova/crypto.py:39:1: note: Hint: "python3 -m pip install types-paramiko" > nova/crypto.py:39:1: note: (or run "mypy --install-types" to install all missing stub packages) > nova/crypto.py:39:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports > Found 1 error in 1 file (checked 23 source files) > ERROR: InvocationError for command /usr/bin/bash tools/mypywrap.sh (exited with code 1) Please hold your rechecks until the fix merges: https://review.opendev.org/c/openstack/nova/+/795533 Cheers, -melanie From skaplons at redhat.com Thu Jun 10 06:44:27 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 10 Jun 2021 08:44:27 +0200 Subject: [neutron] Drivers meeting - agenda for 11.06.2021 Message-ID: <24134040.SoyZhvRMfH@p1> Hi, Agenda for our tomorrow's meeting is at [1]. We have to discuss some details about existing spec [2] and one new RFE [3]. [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda[1] [2] https://review.opendev.org/c/openstack/neutron-specs/+/783791[2] [3] https://bugs.launchpad.net/neutron/+bug/1931100[3] -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda [2] https://review.opendev.org/c/openstack/neutron-specs/+/783791 [3] https://bugs.launchpad.net/neutron/+bug/1931100 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From bkslash at poczta.onet.pl Thu Jun 10 07:23:41 2021 From: bkslash at poczta.onet.pl (bkslash) Date: Thu, 10 Jun 2021 09:23:41 +0200 Subject: [glance] How to limit access to particular store In-Reply-To: References: Message-ID: Hi Erno, thank you for your answer. In the mean time I've figured out 2 other "workarounds": 1. I'll make a local file store (based on LVM) with LVM volume of the size that I need for all my "public" (and protected) images, so there will be no more space to put any customer images. If I'll need additional space I'll extend the volume/filesystem to fit my new images. Customer images will go to other, default store. 2. Ofcourse modifying filesystem permissions to RO on store folder would also do the trick, but it should be changed back to RW each time I have to modify my images. I think it would be useful to have the ability to block (via oslo.policy) reading some informations (i.e. list stores etc.) and make stores read-only... Best regards Adam Tomaś > On 9 Jun 2021, at 18:28, Erno Kuvaja wrote: > >  >> On Fri, Jun 4, 2021 at 1:56 PM at wrote: >> Hi, >> I have Glance with multi-store config and I want one store (not default) to be read-only for everyone except cloud Admin. How can I do it? Is there any way to limit store names visibility (which are visible i.e. in properties section of "openstack image show IMAGE_NAME" output)? >> Best regards >> Adam Tomas >> > Hi Adam, > > Such limitations are not possible at the moment. The only way to really do this if needed is to expose that "admin only" storage as a local web server and use the http store with locations api exposed to said users only. > > - jokke -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Thu Jun 10 09:16:28 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 10 Jun 2021 11:16:28 +0200 Subject: [tripleo] Changing TripleO's release model In-Reply-To: <4450b0fa40e3decf06f365c10be5e370af923e3b.camel@redhat.com> References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> <4450b0fa40e3decf06f365c10be5e370af923e3b.camel@redhat.com> Message-ID: On 6/9/21 3:57 PM, Sean Mooney wrote: > On Wed, 2021-06-09 at 15:17 +0200, Alfredo Moralejo Alonso wrote: >> On Wed, Jun 9, 2021 at 1:49 PM Sean Mooney wrote: >> >>> On Wed, 2021-06-09 at 12:06 +0300, Marios Andreou wrote: >>>> On Wednesday, June 9, 2021, Alfredo Moralejo Alonso >>> >>>> wrote: >>>> >>>>> >>>>> >>>>> On Wed, Jun 9, 2021 at 2:48 AM Dan Sneddon >>> wrote: >>>>> >>>>>> Thanks for making the announcement. Can you clarify how the >>>>>> feature-freeze dates will be communicated to the greater community of >>>>>> contributors? >>>>>> >>>>>> - Dan Sneddon >>>>>> >>>>>> On Jun 8, 2021, at 8:21 AM, Wesley Hayutin >>> wrote: >>>>>> >>>>>>  >>>>>> >>>>>> Greetings TripleO community! >>>>>> >>>>>> At the most recent TripleO community meetings we have discussed >>> formally >>>>>> changing the OpenStack release model for TripleO [1]. The previous >>>>>> released projects can be found here [2]. TripleO has previously >>> released >>>>>> with release-type[‘trailing’, ‘cycle-with-intermediary’]. 
>>>>>> >>>>>> To quote the release model doc: >>>>>> >>>>>> ‘Trailing deliverables trail the release, so they cannot, by >>> definition, >>>>>> be independent. They need to pick between cycle-with-rc >>>>>> < >>> https://releases.openstack.org/reference/release_models.html#cycle-with-rc >>>> >>>>>> or cycle-with-intermediary >>>>>> < >>> https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary >>>> >>>>>> models.’ >>>>>> >>>>>> We are proposing to update the release-model to ‘independent’. This >>>>>> would give the TripleO community more flexibility in when we choose >>> to cut >>>>>> a release. In turn this would mean less backporting, less upstream >>> and 3rd >>>>>> party resources used by potentially some future releases. >>>>>> >>>>>> >>>>> What does this change mean in terms of branches and compatibility for >>>>> OpenStack stable releases?. >>>>> >>>>> >>>> >>>> >>>> as i wrote to Dan just now the main thing is that we may delay or even >>> skip >>>> a particular branch. For compatibility I guess it means we would have to >>>> rely on git tags so perhaps making consistently frequent (eg monthly? or >>>> more?) releases for all the tripleo repos. You could then call a >>> particular >>>> range of tags as being compatible with stable/Y for example. Does it >>> sound >>>> sane/doable from an rdo package build perspective? >>>> >>> too me this feels like we are leaking downstream product lifecycle into >>> upstream. >>> even if redhat is overwhelmingly the majority contibutor of reviews and >>> commits to >>> ooo im not sure that changing the upstream lifestyle to align more closely >>> with our product life >>> cycle is the correct thing to do. >>> >>> at least while tripleo is still in the Openstack namespaces and not the x >>> namespaces. >>> Skipping upstream release is really quite a radical departure form the >>> project original goals. >>> i think it would also be counter productive to our downstream efforts to >>> move our testing close to upstream. >>> if ooo was to lose the ablity to test master for example we would not be >>> able to use ooo in our downstream ci to test >>> feature that we plan to release osp n+1 that are develop during an >>> upstream cycle that wont be productised. >>> >>> i do not work on ooo so at the end of the day this wont affect me much but >>> to me skipping releases seam counter intuitive >>> given the previous efforts to make ooo more usable for development and ci. >>> Moving to independent >>> to decouple the lifecycle seams more reasonable if the underlying goal is >>> not to skip releases. you can release when ready >>> rather then scrambling or wating for a deadline. personally i think moving >>> in the other direction so that ooo can release sooner >>> not later would make the project more appealing as the delay in support of >>> a release is often considered a detractor for tripleo vs >>> other openstack installers. >>> >>> i would hope that this change would not have any effect on the rdo >>> packaging of non ooo packages. >>> the rdo packages are used by other instalation methods (the puppet moduels >>> for example) including i belive some of the larger chineese providers that >>> have written there own installers. i think it would be damaging to centos >>> if rdo was to skip upstream version of say nova. what might need to change >>> is the packaging of ooo itself in rdo. 
>>> >>> tl;dr im not against the idea of ooo moving to independent model but i >>> would hope that it will not affect RDO's packaging of non ooo projects and >>> that >>> ooo can still be used for ci of master and stable branches of for example >>> nova. >>> >>> >> >> RDO has no plans on skipping releases or any other changes affecting >> non-tripleo packages. The impact of this change (unclear at this point) >> should only affect the packages for those repos. > ack >> >> Note that RDO aims at being used and useful for other users and deployment >> tools as Puppet modules, Kolla, or others willing to work in CentOS and >> we'd like to maintain the collaboration with them as needed. > ya that is what i was expecting. thanks for confirming. > provided the possible change in ooo direction does not negatively impact the other > consumes of rdo i dont really have an objection to ooo changing how they work if peolel think it will > make there lives and there customer live simpler in the long run. I'm sceptical about if that makes lives simpler. As we learned earlier in the topic, "stable" tags would still require maintenance branches to be managed manually (with only a 3rd side CI available for that?). And manual solving of drifting dependencies collisions (since no more appropriate requirements-checks automation for independently released tripleo?). Finally, openstack puppet modules and puppet-tripleo look too much specific to OpenStack configuration options, that may drift a lot from release to a release, to be independently released. > > as i said i do not work on or use ooo frequently but i have consumed the output of rdo > via kolla in the past and while i typeically prefer using the source install i know many > do use the centos binary install variant using the rdo packages. > >> >> Regards, >> >> Alfredo >> >> >>> regards >>> sean >>> >>>> >>>> regards, marios >>>> >>>> >>>> >>>> >>>>> To quote the release model doc: >>>>>> >>>>>> ‘Some projects opt to completely bypass the 6-month cycle and release >>>>>> independently. For example, that is the case of projects that >>> support the >>>>>> development infrastructure. The “independent” model describes such >>>>>> projects.’ >>>>>> >>>>>> The discussion here is to merely inform the greater community with >>>>>> regards to the proposal and conversations regarding the release >>> model. >>>>>> This thread is NOT meant to discuss previous releases or their >>> supported >>>>>> status, merely changing the release model here [3] >>>>>> >>>>>> >>>>>> [0] https://etherpad.opendev.org/p/tripleo-meeting-items >>>>>> >>>>>> [1] https://releases.openstack.org/reference/release_models.html >>>>>> >>>>>> [2] https://releases.openstack.org/teams/tripleo.html >>>>>> >>>>>> [3] https://opendev.org/openstack/releases/src/branch/master/ >>>>>> deliverables/xena >>>>>> >>>>>> >>>> >>> >>> >>> > > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Thu Jun 10 09:22:05 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 10 Jun 2021 11:22:05 +0200 Subject: [tripleo] Changing TripleO's release model In-Reply-To: References: <6BC6096D-0A3D-440E-9972-16E9B63F70A4@redhat.com> <074fbe62b53733959016d2769def5cf12449c202.camel@redhat.com> Message-ID: <5f01946e-5576-0097-62ee-822967ba0a27@redhat.com> On 6/9/21 3:58 PM, James Slagle wrote: > > > On Wed, Jun 9, 2021 at 7:54 AM Sean Mooney > wrote: > > too me this feels like we are leaking downstream product lifecycle > into upstream. 
> even if redhat is overwhelmingly the majority contibutor of reviews > and commits to > ooo im not sure that changing the upstream lifestyle to align more > closely with our product life > cycle is the correct thing to do. > > > I wouldn't characterize it as "leaking". Instead, we are aiming to > accurately reflect what we intend to support as a community (not as any > company), based on who we have working on this project. > > Unfortunately the reality is that no one (community or otherwise) should > use TripleO from rocky/stein/ussuri/victoria other than for dev/test in > my opinion. There's no upgrade path from any of those releases, and the > community isn't working on one. However, that is not clearly represented > by the project. While we don't intend to retrofit this policy to past > branches, part of proposing this change is to help clear that up going > forward. >   > > at least while tripleo is still in the Openstack namespaces and not > the x namespaces. > > > I don't think I understand. At least..."what"? How is the OpenStack > namespace related to release models? How does the namespace (which is a > construct of how git repositories are organized aiui), have a relation > to what is included in an OpenStack release? >   > > Skipping upstream release is really quite a radical departure form > the project original goals. > > > I disagree with how you remember the history, and I think this is an > overstatement. >   > > i think it would also be counter productive to our downstream > efforts to move our testing close to upstream. > > if ooo was to lose the ablity to test master for example we would > not be able to use ooo in our downstream ci to test > feature that we plan to release osp n+1 that are develop during an > upstream cycle that wont be productised. > > > I don't follow the premise. How is it counterproductive to move our > testing close to upstream? > We'd always continue to test master. When it comes time for OpenStack to > branch, such as to create stable/xena in all the service projects, > TripleO may choose not to branch, and I think at that point, TripleO > would no longer have CI jobs running on stable/xena of those service > projects. Since TripleO does not follow the stable branch policy, isn't the same possible as well today without switching to the independent release model? >   > > > i do not work on ooo so at the end of the day this wont affect me > much but to me skipping releases seam counter intuitive > given the previous efforts to make ooo more usable for development > and ci. Moving to independent > to decouple the lifecycle seams more reasonable if the underlying > goal is not to skip releases. you can release when ready > rather then scrambling or wating for a deadline. > > > I think the "when ready" is part of the problem here. For example, one > might look at when we released stable/victoria and claim TripleO was > ready. However, when TripleO victoria was released, you could not > upgrade from ussuri to victoria. Likewise, you can't upgrade from > victoria to wallaby. Were we really ready to release? My take is that we > shouldn't have released at all. I think it sends a false signal. > > An alternative to this entire proposal would be to double down on making > TripleO more fully support each OpenStack stable branch. 
That would mean > adding update/upgrade jobs for each stable branch, and doing the > development work to actually implement that support (if it's not tested, > it's broken), as well as likely adding other missing jobs instead of > de-emphasizing testing on these branches. > > AIUI, we do not want to be adding additional CI jobs of that scale > upstream, especially given the complaints about node usage from TripleO > from earlier in this cycle. And, we do not have the humans to develop > and maintain this additional work. >   > > personally i think moving in the other direction so that ooo can > release sooner > not later would make the project more appealing as the delay in > support of a release is often considered a detractor for tripleo vs > other openstack installers. > > > I think moving to the independent model does enable us to consider > releasing sooner. >   > > > i would hope that this change would not have any effect on the rdo > packaging of non ooo packages. > the rdo packages are used by other instalation methods (the puppet > moduels for example) including i belive some of the larger chineese > providers that > have written there own installers. i think it would be damaging to > centos if rdo was to skip upstream version of say nova. what might > need to change > is the packaging of ooo itself in rdo. > > tl;dr im not against the idea of ooo moving to independent model but > i would hope that it will not affect RDO's packaging of non ooo > projects and that > ooo can still be used for ci of master and stable branches of for > example nova. > > > We'd continue to always CI master. > > Not all stable branches would remain covered by TripleO. For example, if > TripleO didn't branch and release for xena, you wouldn't have TripleO > jobs on nova stable/xena patches. Those jobs don't provide any > meaningful feedback for TripleO. Perhaps they do for nova as you are > backporting a change through every branch, and you're final destination > is a branch where TripleO is expected to be working, such as wallaby. > You would want to know if the change broke on xena for example, or if it > were something on wallaby. I can see how that would be useful for nova. > > However, part of what we're saying is that TripleO is trying to simplify > what we are supporting and testing, so we can get better at supporting > the releases that are most important to our community. Yes, there is > some downstream influence here, in the same way that TripleO doesn't > support deploying with Ubuntu, because it is less important to our > (TripleO) community. I think that's ok, and I see nothing wrong with it. > > If the service projects (such as nova) want to commit additional > resources and the upstream CI node count can handle the increase that > properly supporting each stable branch implies, then I think we can > weigh that option as well. However, the current status quo of more or > less just checking the boxes on each stable branch is wrong and sends a > false message in my opinion. That's a big part of what we're trying to > correct. > > -- > -- James Slagle > -- -- Best regards, Bogdan Dobrelya, Irc #bogdando From dpeacock at redhat.com Thu Jun 10 12:35:15 2021 From: dpeacock at redhat.com (David Peacock) Date: Thu, 10 Jun 2021 08:35:15 -0400 Subject: [TripleO] Proposing ysandeep for tripleo-ci core In-Reply-To: References: <20210604081837.uurzifkb2h6wyewu@gchamoul-mac> Message-ID: +1 from me but I have no power here; symbolic approval of Sandeep! 
On Wed, Jun 9, 2021 at 5:20 AM Marios Andreou wrote: > thanks all for voting ... yes said I would add him in yesterday's irc > meeting but weshay beat me to it ;) > > I just checked and see ysandeep is now in the core reviewers group > https://review.opendev.org/admin/groups/0319cee8020840a3016f46359b076fa6b6ea831a,members > > ysandeep go +2 all the CI things ! > > regards, marios > > On Wednesday, June 9, 2021, Wesley Hayutin wrote: > >> Seeing no objections.... >> >> Congrats Sandeep :) >> >> On Fri, Jun 4, 2021 at 2:31 AM Gaël Chamoulaud >> wrote: >> >>> Of course, a big +1! >>> >>> On 02/Jun/2021 14:17, Marios Andreou wrote: >>> > Hello all >>> > >>> > Having discussed this with some members of the tripleo ci team >>> > (weshay, sshnaidm), we would like to propose Sandeep Yadav (irc: >>> > ysandeep) for core on the tripleo-ci repos (tripleo-ci, >>> > tripleo-quickstart and tripleo-quickstart-extras). >>> > >>> > Sandeep joined the team about 1.5 years ago and has from the start >>> > demonstrated his eagerness to learn and an excellent work ethic, >>> > having made many useful code submissions [1] and code reviews [2] to >>> > the CI repos and beyond. Thanks Sandeep and keep up the good work! >>> > >>> > Please reply to this mail with a +1 or -1 for objections in the usual >>> > manner. If there are no objections we can declare it official in a few >>> > days >>> > >>> > regards, marios >>> > >>> > [1] https://review.opendev.org/q/owner:sandeepyadav93 >>> > [2] >>> https://www.stackalytics.io/report/contribution?module=tripleo-group&project_type=openstack&days=180 >>> > >>> > >>> >>> Best Regards, >>> Gaël >>> >>> -- >>> Gaël Chamoulaud - (He/Him/His) >>> >> > > -- > _sent from my mobile - sorry for spacing spelling etc_ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Thu Jun 10 14:04:27 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Thu, 10 Jun 2021 23:04:27 +0900 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? Message-ID: Hi All, I've been working on bug 1926693[1], and am lost about the reasonable solutions we expect. Ideally I'd need to bring this topic in the team meeting but because of the timezone gap and complicated background, I'd like to gather some feedback in ml first. [1] https://bugs.launchpad.net/neutron/+bug/1926693 TL;DR Which one(or ones) would be reasonable solutions for this issue ? (1) https://review.opendev.org/c/openstack/neutron/+/763563 (2) https://review.opendev.org/c/openstack/neutron/+/788893 (3) Implement something different The issue I reported in the bug is that there is an inconsistency between nova and neutron about the way to determine a hypervisor name. Currently neutron uses socket.gethostname() (which always returns shortname) to determine a hypervisor name to search the corresponding resource provider. On the other hand, nova uses libvirt's getHostname function (if libvirt driver is used) which returns a canonical name. Canonical name can be shortname or FQDN (*1) and if FQDN is used then neutron and nova never agree. (*1) IMO this is likely to happen in real deployments. For example, TripelO uses FQDN for canonical names. Neutron already provides the resource_provider_defauly_hypervisors option to override a hypervisor name used. However because this option accepts a map between interface and hypervisor, setting this parameter requires very redundant description especially when a compute node has multiple interfaces/bridges. 
The following example shows how redundant the current requirement is.
~~~
[OVS]
resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\
br-data3:1024:1024,br-data4:1024:1024
resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\
compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain
~~~

I've submitted a change to propose a new single parameter to override
the base hypervisor name but this is currently -2ed, mainly because
I lacked analysis about the root cause of the mismatch when I proposed this.
(1) https://review.opendev.org/c/openstack/neutron/+/763563

On the other hand, I submitted a different change to neutron which implements
the logic to get a hypervisor name which is fully compatible with libvirt.
While this would save users from even overriding hypervisor names, I'm aware
that this might break the other virt drivers which depend on a different logic
to generate a hypervisor name. IMO the patch is still useful considering
the libvirt driver would be the most popular option now, but I'm not fully
aware of the impact on the other drivers, especially because I don't know
which virt drivers would support the minimum QoS feature now.
(2) https://review.opendev.org/c/openstack/neutron/+/788893/

In the review of (2), Sean mentioned implementing a logic to determine
an appropriate resource provider(3) even if there is a mismatch about
host name format, but I'm not sure how I would implement that, tbh.

My current thought is to merge (1) as a quick solution first, and discuss
whether we should merge (2), but I'd like to ask for some feedback about
this plan (like we should NOT merge (2)).

I'd appreciate your thoughts about this $topic.

Thank you,
Takashi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hberaud at redhat.com Thu Jun 10 14:35:21 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Thu, 10 Jun 2021 16:35:21 +0200
Subject: PTO - Friday 11
Message-ID: 

Hello,

I'm on PTO tomorrow (June 11). See you on Monday

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ricolin at ricolky.com Thu Jun 10 17:35:09 2021
From: ricolin at ricolky.com (Rico Lin)
Date: Fri, 11 Jun 2021 01:35:09 +0800
Subject: [tc][all] Test support for TLS default
Message-ID: 

Dear all

In short,
can you help to enable tls-proxy for your test jobs and fix/report the
issue in [4]? Or it makes no sense for you?
Here's all repositories contains jobs with tls-proxy disabled: - neutron - neutron-tempest-plugin - cinder-tempest-plugin - cyborg-tempest-plugin - ec2api-tempest-plugin - freezer-tempest-plugin - grenade - heat - js-openstack-lib - keystone - kuryr-kubernetes - masakari - murano - networking-odl - networking-sfc - python-brick-cinderclient-ext - python-neutronclient - python-zaqarclient - sahara - sahara-dashboard - sahara-tests - solum - tacker - telemetry-tempest-plugin - trove - trove-tempest-plugin - vitrage-tempest-plugin - watcher As I'm looking for y-cycle potential goals, I found the tls-proxy support is not actually ready OpenStack wide (you can find some discussion in [3]). We have multiple projects that disable tls-proxy in test jobs [1] (and stay that way for a long time). For security concerns, I'm currently collecting the missing part for this. And try to figure out if there is any infra issue for current jobs. After I attempt to enable tls-proxy for some projects to check the status. And from the test result shows ([2]), We might have bugs/test infra issues in projects. So I invite projects who still have not switched to TLS default. Please do, and help to fix/report the issue you're facing. As we definitely need some more help on figuring out the actual situation on each project. So I created an etherpad [4] to track actions or related information. Meanwhile, I will attempt to enable tls-proxy on more test jobs (and you will be able to find it in [2]). Which gives us a good chance to review the logs and see how we might get chances to fix it and enable TLS by default. [1] https://codesearch.opendev.org/?q=tls-proxy%3A%20false&i=nope&files=&excludeFiles=&repos= [2] https://review.opendev.org/q/topic:%22exame-tls-proxy%22+(status:open%20OR%20status:merged) [3] https://etherpad.opendev.org/p/community-goals [4] https://etherpad.opendev.org/p/support-tls-default *Rico Lin* OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From forums at mossakowski.ch Wed Jun 9 11:08:53 2021 From: forums at mossakowski.ch (forums at mossakowski.ch) Date: Wed, 09 Jun 2021 11:08:53 +0000 Subject: [Neutron] sriov network setup for victoria - clarification needed In-Reply-To: References: Message-ID: <5AGF_ceXEUd_hR-Qhz8SHZzx6QrTTYSwcDdd47nuU8rG8W2cX4W9WmwELNTIv4qluwtOd0vKVjav88gQwMbltmcPaV_eE8WPLHiw3iKJN4s=@mossakowski.ch> Thanks for the support! I've patched pyroute and now I'm able to attach sriov ports to a running VMs. Cheers! Piotr Mossakowski Sent from ProtonMail mobile \-------- Original Message -------- On 3 Jun 2021, 15:05, Lajos Katona < katonalala at gmail.com> wrote: > > > > Hi, > > 0.6.3 has another increase for the DEFAULT\_RCVBUF: > > https://github.com/svinota/pyroute2/issues/813 > > > > > > Regards > > Lajos Katona (lajoskatona) > > > > > Rodolfo Alonso Hernandez <[ralonsoh at redhat.com][ralonsoh_redhat.com]> ezt írta (időpont: 2021. jún. 3., Cs, 9:16): > > > > Hi Piotr: > > > > > > > > > > I think you are hitting \[1\]. As you said, each PF has 63 VFs configured. Your error looks very similar to this one reported. > > > > > > > > > > > > Try updating pyroute2 to version 0.6.2. That should contain the fix for this error. > > > > > > > > > > Regards. 
> > > > > > > > > > > > \[1\]https://github.com/svinota/pyroute2/issues/751 > > > > > > > > > > On Thu, Jun 3, 2021 at 12:06 AM <[forums at mossakowski.ch][forums_mossakowski.ch]> wrote: > > > > > > > Muchas gracias Alonso para tu ayuda! > > > > > > > > > > > > > > > > > > > > > > > > I've commented out the decorator line, new exception popped out, I've updated my gist: > > > > > > > > > > > > > > > https://gist.github.com/8e6272cbe7748b2c5210fab291360e0b > > > > > > > > > > > > > > > > > > > > > > > > BR, > > > > > > > > > > > > > > > Piotr Mossakowski > > > > > > > > > > > > > > > Sent from ProtonMail mobile > > > > > > > > > > > > > > > > > > \-------- Original Message -------- > > > On 31 May 2021, 18:08, Rodolfo Alonso Hernandez < [ralonsoh at redhat.com][ralonsoh_redhat.com]> wrote: > > > > > > > > > > > > > > > > > > > Hello Piotr: > > > > > > > > > > > > > > > > > > > > Maybe you should update the pyroute2 library, but this is a blind shot. > > > > > > > > > > > > > > > > > > > > What I recommend you do is to find the error you have when retrieving the interface VFs. In the same compute node, use this method \[1\] but remove the decorator \[2\]. Then, in a root shell, run python again: > > > > > > > > >>> from neutron.privileged.agent.linux import ip\_lib > > > > >>> ip\_lib.get\_link\_vfs('ens2f0', '') > > > > > > > > > > > > > > > > > > > > That will execute the pyroute2 code without the privsep decorator. You'll see what error is returning the method. > > > > > > > > > > > > > > > > > > > > Regards. > > > > > > > > > > > > > > > > > > > > > > > > \[1\][https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip\_lib.py\#L396-L410][https_github.com_openstack_neutron_blob_5d4f5d42d0a8c7ee157912cb29cae0e4deff984b_neutron_privileged_agent_linux_ip_lib.py_L396-L410] > > > > > > > > \[2\][https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip\_lib.py\#L395][https_github.com_openstack_neutron_blob_5d4f5d42d0a8c7ee157912cb29cae0e4deff984b_neutron_privileged_agent_linux_ip_lib.py_L395] > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Mon, May 31, 2021 at 5:50 PM <[forums at mossakowski.ch][forums_mossakowski.ch]> wrote: > > > > > > > > > > > > > Hello, > > > > > > > > > > > > > > > I have two victoria environments: > > > > > > > > > > > > > > > 1) a working one, standard setup with separate dedicated interface for sriov (pt0 and pt1) > > > > > > > > > > > > > > > 2) a broken one, where I'm trying to reuse one of already used interfaces (ens2f0 or ens2f1) for sriov. ens2f0 is used for several VLANs (mgmt and storage) and ens2f1 is a neutron external interface which I bridged for VLAN tenant networks. On both I have enabled 63 VFs, it's a standard intetl 10Gb x540 adapter. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On broken environment, when I'm trying to boot a VM with sriov port that I created before, I see this error shown on below gist: > > > > > > > > > > > > > > > https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I'm investigating this for couple days now but I'm out of ideas so I'd like to ask for your support. Is this possible to achieve what I'm trying to do on 2nd environment? To use PF as normal interface and use its VFs for sriov-agent at the same time? 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > > > > > > > > > > > > Piotr Mossakowski > > > > > [ralonsoh_redhat.com]: mailto:ralonsoh at redhat.com [forums_mossakowski.ch]: mailto:forums at mossakowski.ch [https_github.com_openstack_neutron_blob_5d4f5d42d0a8c7ee157912cb29cae0e4deff984b_neutron_privileged_agent_linux_ip_lib.py_L396-L410]: https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L396-L410 [https_github.com_openstack_neutron_blob_5d4f5d42d0a8c7ee157912cb29cae0e4deff984b_neutron_privileged_agent_linux_ip_lib.py_L395]: https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L395 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: publickey - EmailAddress(s=forums at mossakowski.ch) - 0xDC035524.asc Type: application/pgp-keys Size: 648 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 294 bytes Desc: OpenPGP digital signature URL: From levonmelikbekjan at yahoo.de Thu Jun 10 15:21:53 2021 From: levonmelikbekjan at yahoo.de (levonmelikbekjan at yahoo.de) Date: Thu, 10 Jun 2021 17:21:53 +0200 Subject: AW: AW: Customization of nova-scheduler In-Reply-To: <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> Message-ID: <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> Hi Stephen, I'm trying to customize my nova scheduler. However, if I change the nova.conf as it is written here https://docs.openstack.org/operations-guide/de/ops-customize-compute.html, then my python file cannot be found. How can I configure it correctly? Do you have any idea? My controller node is running with CENTOS 7. I couldn't install devstack because it is only supported for CENTOS 8 version. Best regards Levon -----Ursprüngliche Nachricht----- Von: Stephen Finucane Gesendet: Montag, 31. Mai 2021 18:21 An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org Betreff: Re: AW: Customization of nova-scheduler On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > Hello Stephen, > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. > > Two cases should be solved! > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. 
When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. What you're describing is commonly referred to as "preemptible" or "spot" instances. This topic has a long, complicated history in nova and has yet to be implemented. Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova. > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art. As noted above, there is lots of information on the internet about this. With this research done, you'll need to decide whether this is something you want to approach within nova itself, via out-of-tree extensions or via a third party project. If you're opting for integration with nova, then you'll need to think long and hard about how you would design such a system and start working on a spec (a design document) outlining your proposed solution. Details on how to write a spec are discussed at [1]. The only extension points nova offers today are scheduler filters and weighers so your options for an out-of-tree extension approach will be limited. A third party project will arguably be the easiest approach but you will be restricted to talking to nova's REST APIs which may limit the design somewhat. This Blazar spec [2] could give you some ideas on this approach (assuming it was never actually implemented, though it may well have been). > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? This could potentially work, but I suspect there will be serious performance implications with this, particularly at scale. 
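For reference, a minimal custom filter plus the nova.conf wiring it needs
could look like the sketch below. The module path "myfilters.priority_filter"
and the "priority" flavor extra spec are assumptions invented for this
example; the one real requirement is that the module is installed where the
nova-scheduler interpreter can import it, since a non-importable module is
the usual cause of "python file cannot be found" style errors when following
the customization guide.

~~~
# myfilters/priority_filter.py -- hypothetical module, installed (e.g. via
# pip) into the same environment that runs nova-scheduler.
from nova.scheduler import filters


class PriorityFilter(filters.BaseHostFilter):
    """Toy filter: only accept hosts with free RAM for 'high priority' flavors.

    The 'priority' extra spec is made up for this sketch; a real deployment
    would pick its own heuristic.
    """

    def host_passes(self, host_state, spec_obj):
        flavor = spec_obj.flavor
        if 'extra_specs' not in flavor or not flavor.extra_specs:
            # No extra specs at all: nothing for this filter to decide.
            return True
        if flavor.extra_specs.get('priority') != 'high':
            return True
        # High-priority request: only pass hosts that still have RAM headroom.
        return host_state.free_ram_mb > 0

# nova.conf on the scheduler node would then reference the class by its
# dotted path, for example:
#   [filter_scheduler]
#   available_filters = nova.scheduler.filters.all_filters
#   available_filters = myfilters.priority_filter.PriorityFilter
#   enabled_filters = AvailabilityZoneFilter,ComputeFilter,PriorityFilter
~~~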
Scheduler filters are historically used for simple things like "find me a group of hosts that have this metadata attribute I set on my image". Making API calls sounds like something that would take significant time and therefore slow down the schedule process. You'd also have to decide what your heuristic for deciding which VM(s) to delete would be, since there's nothing obvious in nova that you could use. You could use something as simple as filter extra specs or something as complicated as an external service. This should be lots to get you started. Once again, do make sure you're aware of what you're getting yourself into before you start. This could get complicated very quickly :) Cheers, Stephen > I'm very happy to have found you!!! > > Thank you really much for your time! [1] https://specs.openstack.org/openstack/nova-specs/readme.html [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > Best regards > Levon > > -----Ursprüngliche Nachricht----- > Von: Stephen Finucane > Gesendet: Montag, 31. Mai 2021 12:34 > An: Levon Melikbekjan ; > openstack at lists.openstack.org > Betreff: Re: Customization of nova-scheduler > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > Hello Openstack team, > > > > is it possible to customize the nova-scheduler via Python? If yes, how? > > Yes, you can provide your own filters and weighers. This is documented at [1]. > > Hope this helps, > Stephen > > [1] > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > our-own-filter > > > > > Best regards > > Levon > > > > From peter.matulis at canonical.com Thu Jun 10 19:51:55 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Thu, 10 Jun 2021 15:51:55 -0400 Subject: [docs] Double headings on every page In-Reply-To: References: Message-ID: Hi Stephen. Did you ever get to circle back to this? On Fri, May 14, 2021 at 7:34 AM Stephen Finucane wrote: > On Tue, 2021-05-11 at 11:14 -0400, Peter Matulis wrote: > > Hi, I'm hitting an oddity in one of my projects where the titles of all > pages show up twice. > > Example: > > > https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/wallaby/app-nova-cells.html > > Source file is here: > > > https://opendev.org/openstack/charm-deployment-guide/src/branch/master/deploy-guide/source/app-nova-cells.rst > > Does anyone see what can be causing this? It appears to happen only for > the current stable release ('wallaby') and 'latest'. > > Thanks, > Peter > > > I suspect you're bumping into issues introduced by a new version of Sphinx > or docutils (new versions of both were released recently). > > Comparing the current nova docs [1] to what you have, I see the duplicate >

<h1> element is present but hidden by the following CSS rule:
>
>     .docs-body .section h1 {
>         display: none;
>     }
>
> That works because we have the following HTML in the nova docs:
>
>     <div class="section">
>       <h1>Extra Specs</h1>
>       ...
>     </div>
>
> while the docs you linked are using the HTML5 semantic '<section>' tag:
>
>     <section>
>       <h1>Nova Cells</h1>
>       ...
>     </section>
>
> > > So to fix this, we'll have to update the openstackdocstheme to handle > these changes. I can try to take a look at this next week but I really > wouldn't mind if someone beat me to it. > > Stephen > > [1] https://docs.openstack.org/nova/latest/configuration/extra-specs.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Jun 11 06:49:32 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 11 Jun 2021 08:49:32 +0200 Subject: [tc][all] Test support for TLS default In-Reply-To: References: Message-ID: <2745966.TcCWilPskB@p1> Hi, Dnia czwartek, 10 czerwca 2021 19:35:09 CEST Rico Lin pisze: > Dear all > > In short, > can you help to enable tls-proxy for your test jobs and fix/report the > issue in [4]? Or it makes no sense for you? > Here's all repositories contains jobs with tls-proxy disabled: > > - neutron > - neutron-tempest-plugin > - cinder-tempest-plugin > - cyborg-tempest-plugin > - ec2api-tempest-plugin > - freezer-tempest-plugin > - grenade > - heat > - js-openstack-lib > - keystone > - kuryr-kubernetes > - masakari > - murano > - networking-odl > - networking-sfc > - python-brick-cinderclient-ext > - python-neutronclient > - python-zaqarclient > - sahara > - sahara-dashboard > - sahara-tests > - solum > - tacker > - telemetry-tempest-plugin > - trove > - trove-tempest-plugin > - vitrage-tempest-plugin > - watcher > > As I'm looking for y-cycle potential goals, I found the tls-proxy support > is not actually ready OpenStack wide (you can find some discussion in [3]). > We have multiple projects that disable tls-proxy in test jobs [1] (and stay > that way for a long time). > For security concerns, I'm currently collecting the missing part for this. > And try to figure out if there is any infra issue for current jobs. > After I attempt to enable tls-proxy for some projects to check the status. > And from the test result shows ([2]), We might have bugs/test infra issues > in projects. > So I invite projects who still have not switched to TLS default. Please do, > and help to fix/report the issue you're facing. > As we definitely need some more help on figuring out the actual situation > on each project. > So I created an etherpad [4] to track actions or related information. > > Meanwhile, I will attempt to enable tls-proxy on more test jobs (and you > will be able to find it in [2]). Which gives us a good chance to review the > logs and see how we might get chances to fix it and enable TLS by default. > > > [1] > https://codesearch.opendev.org/?q=tls-proxy%3A%20false&i=nope&files=&excludeFiles=&repos= > [2] > https://review.opendev.org/q/topic:%22exame-tls-proxy%22+ (status:open%20OR%20status:merged) > [3] https://etherpad.opendev.org/p/community-goals > [4] https://etherpad.opendev.org/p/support-tls-default > > *Rico Lin* > OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, > Senior Software Engineer at EasyStack Thx Rico for that. I just sent patch for neutron-tempest-plugin and will check how it works for neutron jobs. Good thing is that in many jobs we already have it enabled for long time so I hope there will be no many issues there :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 484 bytes Desc: This is a digitally signed message part. 
URL: From ralonsoh at redhat.com Fri Jun 11 07:57:27 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 11 Jun 2021 09:57:27 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: Hello Takashi and Neutrinos: First of all, thank you for working on this. Currently users have the ability to override the host name using "resource_provider_hypervisors". That means this parameter is always configurable; IMO we are safe on this. The problem we have is how we should retrieve this host name if "resource_provider_hypervisors" is not provided. I think the solution could be a combination of: - A first patch providing the ability to select the hypervisor type. The default one could be "libvirt". Each driver can have a particular host name retrieval implementation. The default one will be the implemented right now: "socket.gethostname()" - https://review.opendev.org/c/openstack/neutron/+/788893, providing full compatibility for libvirt. Those are my two cents. Regards. On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami wrote: > Hi All, > > > I've been working on bug 1926693[1], and am lost about the reasonable > solutions we expect. Ideally I'd need to bring this topic in the team > meeting > but because of the timezone gap and complicated background, I'd like to > gather some feedback in ml first. > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > TL;DR > Which one(or ones) would be reasonable solutions for this issue ? > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > (3) Implement something different > > The issue I reported in the bug is that there is an inconsistency between > nova and neutron about the way to determine a hypervisor name. > Currently neutron uses socket.gethostname() (which always returns > shortname) > to determine a hypervisor name to search the corresponding resource > provider. > On the other hand, nova uses libvirt's getHostname function (if libvirt > driver is used) > which returns a canonical name. Canonical name can be shortname or FQDN > (*1) > and if FQDN is used then neutron and nova never agree. > > (*1) > IMO this is likely to happen in real deployments. For example, TripelO uses > FQDN for canonical names. > > Neutron already provides the resource_provider_defauly_hypervisors option > to override a hypervisor name used. However because this option accepts > a map between interface and hypervisor, setting this parameter requires > very redundant description especially when a compute node has multiple > interfaces/bridges. The following example shows how redundant the current > requirement is. > ~~~ > [OVS] > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ > br-data3:1024,1024,br-data4,1024:1024 > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > ~~~ > > I've submitted a change to propose a new single parameter to override > the base hypervisor name but this is currently -2ed, mainly because > I lacked analysis about the root cause of mismatch when I proposed this. > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > On the other hand, I submitted a different change to neutron which > implements > the logic to get a hypervisor name which is fully compatible with libvirt. 
> While this would save users from even overriding hypervisor names, I'm > aware > that this might break the other virt driver which depends on a different > logic > to generate a hypervisor name. IMO the patch is still useful considering > the libvirt driver would be the most popular option now, but I'm not fully > aware of the impact on the other drivers, especially because I don't know > which virt driver would support the minimum QoS feature now. > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > In the review of (2), Sean mentioned implementing a logic to determine > an appropriate resource provider(3) even if there is a mismatch about > host name format, but I'm not sure how I would implement that, tbh. > > > My current thought is to merge (1) as a quick solution first, and discuss > whether > we should merge (2), but I'd like to ask for some feedback about this plan > (like we should NOT merge (2)). > > I'd appreciate your thoughts about this $topic. > > Thank you, > Takashi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Jun 11 08:34:33 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 11 Jun 2021 10:34:33 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: <2993434.SUg3sCx5Oz@p1> Hi, Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez pisze: > Hello Takashi and Neutrinos: > > First of all, thank you for working on this. > > Currently users have the ability to override the host name using > "resource_provider_hypervisors". That means this parameter is always > configurable; IMO we are safe on this. > > The problem we have is how we should retrieve this host name if > "resource_provider_hypervisors" is not provided. I think the solution could > be a combination of: > > - A first patch providing the ability to select the hypervisor type. The > default one could be "libvirt". Each driver can have a particular host name > retrieval implementation. The default one will be the implemented right > now: "socket.gethostname()" > - https://review.opendev.org/c/openstack/neutron/+/788893, providing > full compatibility for libvirt. > > Those are my two cents. We can move on with the patch https://review.opendev.org/c/openstack/neutron/+/ 763563[1] to provide new config option as it's now and additionally implement https:// review.opendev.org/c/openstack/neutron/+/788893[2] so users who are using libvirt will not need to change anything, but if someone is using other hypervisor, this will allow adjustments. Wdyt? > > Regards. > > > > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami > > wrote: > > Hi All, > > > > > > I've been working on bug 1926693[1], and am lost about the reasonable > > solutions we expect. Ideally I'd need to bring this topic in the team > > meeting > > but because of the timezone gap and complicated background, I'd like to > > gather some feedback in ml first. > > > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > > > TL;DR > > > > Which one(or ones) would be reasonable solutions for this issue ? > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > > (3) Implement something different > > > > The issue I reported in the bug is that there is an inconsistency between > > nova and neutron about the way to determine a hypervisor name. 
> > Currently neutron uses socket.gethostname() (which always returns > > shortname) > > to determine a hypervisor name to search the corresponding resource > > provider. > > On the other hand, nova uses libvirt's getHostname function (if libvirt > > driver is used) > > which returns a canonical name. Canonical name can be shortname or FQDN > > (*1) > > and if FQDN is used then neutron and nova never agree. > > > > (*1) > > IMO this is likely to happen in real deployments. For example, TripelO uses > > FQDN for canonical names. > > > > Neutron already provides the resource_provider_defauly_hypervisors option > > to override a hypervisor name used. However because this option accepts > > a map between interface and hypervisor, setting this parameter requires > > very redundant description especially when a compute node has multiple > > interfaces/bridges. The following example shows how redundant the current > > requirement is. > > ~~~ > > [OVS] > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ > > br-data3:1024,1024,br-data4,1024:1024 > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > > compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > > ~~~ > > > > I've submitted a change to propose a new single parameter to override > > the base hypervisor name but this is currently -2ed, mainly because > > I lacked analysis about the root cause of mismatch when I proposed this. > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > On the other hand, I submitted a different change to neutron which > > implements > > the logic to get a hypervisor name which is fully compatible with libvirt. > > While this would save users from even overriding hypervisor names, I'm > > aware > > that this might break the other virt driver which depends on a different > > logic > > to generate a hypervisor name. IMO the patch is still useful considering > > the libvirt driver would be the most popular option now, but I'm not fully > > aware of the impact on the other drivers, especially because I don't know > > which virt driver would support the minimum QoS feature now. > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > In the review of (2), Sean mentioned implementing a logic to determine > > an appropriate resource provider(3) even if there is a mismatch about > > host name format, but I'm not sure how I would implement that, tbh. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ralonsoh at redhat.com Fri Jun 11 08:46:44 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 11 Jun 2021 10:46:44 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: <2993434.SUg3sCx5Oz@p1> References: <2993434.SUg3sCx5Oz@p1> Message-ID: I agree with this idea but what https://review.opendev.org/c/openstack/neutron/+/763563 is proposing differs from what I'm saying: instead of providing the hostname (that is something we can do "resource_provider_hypervisors"), we should provide the hypervisor name (default: libvirt). 
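As a side note on where the disagreement comes from, the two name sources can
be approximated as follows; this is an illustrative sketch only, not the exact
libvirt or neutron code, and "compute0.mydomain" is just an example value.

~~~
import socket


def neutron_resource_provider_host():
    # What the neutron agent uses today when looking up its resource
    # provider: the bare kernel hostname, e.g. "compute0".
    return socket.gethostname()


def libvirt_style_canonical_host():
    # Rough approximation of libvirt's getHostname(), which is what nova's
    # libvirt driver reports: keep the kernel hostname if it already contains
    # a dot, otherwise ask the resolver for the canonical name, which may be
    # an FQDN such as "compute0.mydomain".
    name = socket.gethostname()
    if '.' in name:
        return name
    try:
        addrinfo = socket.getaddrinfo(name, None, socket.AF_UNSPEC,
                                      socket.SOCK_STREAM, 0,
                                      socket.AI_CANONNAME)
        return addrinfo[0][3] or name
    except socket.gaierror:
        return name

# When the resolver returns an FQDN the two functions disagree, and the
# resource provider registered by nova is never matched by neutron unless the
# name is overridden in the agent configuration.
~~~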
On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski wrote: > Hi, > > Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez pisze: > > > Hello Takashi and Neutrinos: > > > > > > First of all, thank you for working on this. > > > > > > Currently users have the ability to override the host name using > > > "resource_provider_hypervisors". That means this parameter is always > > > configurable; IMO we are safe on this. > > > > > > The problem we have is how we should retrieve this host name if > > > "resource_provider_hypervisors" is not provided. I think the solution > could > > > be a combination of: > > > > > > - A first patch providing the ability to select the hypervisor type. > The > > > default one could be "libvirt". Each driver can have a particular > host name > > > retrieval implementation. The default one will be the implemented > right > > > now: "socket.gethostname()" > > > - https://review.opendev.org/c/openstack/neutron/+/788893, providing > > > full compatibility for libvirt. > > > > > > Those are my two cents. > > We can move on with the patch > https://review.opendev.org/c/openstack/neutron/+/763563 to provide new > config option as it's now and additionally implement > https://review.opendev.org/c/openstack/neutron/+/788893 so users who are > using libvirt will not need to change anything, but if someone is using > other hypervisor, this will allow adjustments. Wdyt? > > > > > > Regards. > > > > > > > > > > > > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami > > > > > > wrote: > > > > Hi All, > > > > > > > > > > > > I've been working on bug 1926693[1], and am lost about the reasonable > > > > solutions we expect. Ideally I'd need to bring this topic in the team > > > > meeting > > > > but because of the timezone gap and complicated background, I'd like to > > > > gather some feedback in ml first. > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > > > > > > > TL;DR > > > > > > > > Which one(or ones) would be reasonable solutions for this issue ? > > > > > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > > > > (3) Implement something different > > > > > > > > The issue I reported in the bug is that there is an inconsistency > between > > > > nova and neutron about the way to determine a hypervisor name. > > > > Currently neutron uses socket.gethostname() (which always returns > > > > shortname) > > > > to determine a hypervisor name to search the corresponding resource > > > > provider. > > > > On the other hand, nova uses libvirt's getHostname function (if libvirt > > > > driver is used) > > > > which returns a canonical name. Canonical name can be shortname or FQDN > > > > (*1) > > > > and if FQDN is used then neutron and nova never agree. > > > > > > > > (*1) > > > > IMO this is likely to happen in real deployments. For example, TripelO > uses > > > > FQDN for canonical names. > > > > > > > > Neutron already provides the resource_provider_defauly_hypervisors > option > > > > to override a hypervisor name used. However because this option accepts > > > > a map between interface and hypervisor, setting this parameter requires > > > > very redundant description especially when a compute node has multiple > > > > interfaces/bridges. The following example shows how redundant the > current > > > > requirement is. 
> > > > ~~~ > > > > [OVS] > > > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ > > > > br-data3:1024,1024,br-data4,1024:1024 > > > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > > > > compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > > > > ~~~ > > > > > > > > I've submitted a change to propose a new single parameter to override > > > > the base hypervisor name but this is currently -2ed, mainly because > > > > I lacked analysis about the root cause of mismatch when I proposed > this. > > > > > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > > > > > On the other hand, I submitted a different change to neutron which > > > > implements > > > > the logic to get a hypervisor name which is fully compatible with > libvirt. > > > > While this would save users from even overriding hypervisor names, I'm > > > > aware > > > > that this might break the other virt driver which depends on a > different > > > > logic > > > > to generate a hypervisor name. IMO the patch is still useful > considering > > > > the libvirt driver would be the most popular option now, but I'm not > fully > > > > aware of the impact on the other drivers, especially because I don't > know > > > > which virt driver would support the minimum QoS feature now. > > > > > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > > > > > In the review of (2), Sean mentioned implementing a logic to determine > > > > an appropriate resource provider(3) even if there is a mismatch about > > > > host name format, but I'm not sure how I would implement that, tbh. > > > > > > > > > > > > My current thought is to merge (1) as a quick solution first, and > discuss > > > > whether > > > > we should merge (2), but I'd like to ask for some feedback about this > plan > > > > (like we should NOT merge (2)). > > > > > > > > I'd appreciate your thoughts about this $topic. > > > > > > > > Thank you, > > > > Takashi > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekokeeffe85 at yahoo.ie Fri Jun 11 08:57:53 2021 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Fri, 11 Jun 2021 08:57:53 +0000 (UTC) Subject: Domain tab References: <1268317224.5416429.1623401873029.ref@mail.yahoo.com> Message-ID: <1268317224.5416429.1623401873029@mail.yahoo.com> Hi all, I have two domains in my setup (default & ldap) If I log in as default admin I cannot see a domain tab in the identity dropdown (all cli works fine). I'm sure I had it there before and think it could be a setting in /etc/opoenstack-dashboard/local_settings.py Any pointers as to what I might be missing? Thanks in advance. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Fri Jun 11 09:19:16 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Fri, 11 Jun 2021 17:19:16 +0800 Subject: Domain tab In-Reply-To: <1268317224.5416429.1623401873029@mail.yahoo.com> References: <1268317224.5416429.1623401873029.ref@mail.yahoo.com> <1268317224.5416429.1623401873029@mail.yahoo.com> Message-ID: Hi Derek, I think that you need to give admin privilege to view the domain tab. 
# see domain tab in admin user openstack user list | grep admin openstack role add --domain default --user [ID from previous command] admin So in my case I have previously used: openstack user list | grep admin openstack role add --domain default --user beae2211cad94afc83173d730cce0c85 admin kind regards, On Fri, 11 Jun 2021 at 16:59, Derek O keeffe wrote: > Hi all, > > I have two domains in my setup (default & ldap) > > If I log in as default admin I cannot see a domain tab in the identity > dropdown (all cli works fine). I'm sure I had it there before and think it > could be a setting in /etc/opoenstack-dashboard/local_settings.py > > Any pointers as to what I might be missing? Thanks in advance. > > Regards, > Derek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekokeeffe85 at yahoo.ie Fri Jun 11 09:32:04 2021 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Fri, 11 Jun 2021 09:32:04 +0000 (UTC) Subject: Domain tab In-Reply-To: References: <1268317224.5416429.1623401873029.ref@mail.yahoo.com> <1268317224.5416429.1623401873029@mail.yahoo.com> Message-ID: <28347774.8726009.1623403924216@mail.yahoo.com> Hi Tony, Thanks for that, worked first time!! Do you know if it's possible to allow the default domain admin edit the ldap members that are pulled into a domain? I can't seem to be able to see the users in that domain to change their roles through horizon. All works fine over cli. Thanks in advance. Regards,Derek On Friday 11 June 2021, 10:24:37 IST, Tony Pearce wrote: Hi Derek,  I think that you need to give admin privilege to view the domain tab.  # see domain tab in admin useropenstack user list | grep adminopenstack role add --domain default --user [ID from previous command] admin So in my case I have previously used: openstack user list | grep adminopenstack role add --domain default --user beae2211cad94afc83173d730cce0c85 admin kind regards, On Fri, 11 Jun 2021 at 16:59, Derek O keeffe wrote: Hi all, I have two domains in my setup (default & ldap) If I log in as default admin I cannot see a domain tab in the identity dropdown (all cli works fine). I'm sure I had it there before and think it could be a setting in /etc/opoenstack-dashboard/local_settings.py Any pointers as to what I might be missing? Thanks in advance. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Fri Jun 11 09:46:40 2021 From: bkslash at poczta.onet.pl (at) Date: Fri, 11 Jun 2021 11:46:40 +0200 Subject: [masakari] Compute service with name XXXXX not found. Message-ID: <6241D8B0-5DF1-4A46-9089-F2A9A7C978E5@poczta.onet.pl> Hi, I have some problem with masakari. I can create segment (from CLI and Horizon), but can't create host (the same result from Horizon and CLI). openstack segment host create XXXXX COMPUTE SSH segment_id returns BadRequest: Compute service with name XXXXX could not be found. XXXXX is the name which Horizon suggest, and it's a name of compute host. openstack compute service list returns proper list with state up/enabled on compute hosts (zone nova) Maybe I misunderstood some parameters of host create? As "type" I use COMPUTE, what value should it be? From "Binary" column of openstack compute service list? What is "control_attributes" field, because documentation lacks preceise information what value should be there and what is it use for. Tried to found some info on this error but I haven't found anything... Thanks in advance for any help. 
Best regards Adam Tomas From tonyppe at gmail.com Fri Jun 11 09:50:51 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Fri, 11 Jun 2021 17:50:51 +0800 Subject: Domain tab In-Reply-To: <28347774.8726009.1623403924216@mail.yahoo.com> References: <1268317224.5416429.1623401873029.ref@mail.yahoo.com> <1268317224.5416429.1623401873029@mail.yahoo.com> <28347774.8726009.1623403924216@mail.yahoo.com> Message-ID: Hi Derek, I think I understand - yes you can do that in horizon but you first need to "switch domain context" because default admin user is in domain "default". Now you can see the domains tab, you should see in there a button for setting the ldap context and then you can manage users like add to projects roles. Kind regards, Tony Pearce On Fri, 11 Jun 2021 at 17:32, Derek O keeffe wrote: > Hi Tony, > > Thanks for that, worked first time!! > > Do you know if it's possible to allow the default domain admin edit the > ldap members that are pulled into a domain? I can't seem to be able to see > the users in that domain to change their roles through horizon. All works > fine over cli. Thanks in advance. > > Regards, > Derek > > On Friday 11 June 2021, 10:24:37 IST, Tony Pearce > wrote: > > > Hi Derek, > > I think that you need to give admin privilege to view the domain tab. > > # see domain tab in admin user > openstack user list | grep admin > openstack role add --domain default --user [ID from previous command] admin > > So in my case I have previously used: > openstack user list | grep admin > openstack role add --domain default --user > beae2211cad94afc83173d730cce0c85 admin > > kind regards, > > > On Fri, 11 Jun 2021 at 16:59, Derek O keeffe > wrote: > > Hi all, > > I have two domains in my setup (default & ldap) > > If I log in as default admin I cannot see a domain tab in the identity > dropdown (all cli works fine). I'm sure I had it there before and think it > could be a setting in /etc/opoenstack-dashboard/local_settings.py > > Any pointers as to what I might be missing? Thanks in advance. > > Regards, > Derek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Fri Jun 11 10:20:25 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 11 Jun 2021 19:20:25 +0900 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: <2993434.SUg3sCx5Oz@p1> Message-ID: Hi Slawek and Radolfo, Thank you for your feedback. On Fri, Jun 11, 2021 at 5:47 PM Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > I agree with this idea but what > https://review.opendev.org/c/openstack/neutron/+/763563 is proposing > differs from what I'm saying: instead of providing the hostname (that is > something we can do "resource_provider_hypervisors"), we should provide the > hypervisor name (default: libvirt). > The main problem is that the logic to determine "hypervisor name" is different in each virt driver. For example libvirt driver uses canonical name while power driver uses [DEFAULT] host in nova.conf . So if we fix compatibility with one virt driver then it would break compatibility with the other driver. Because neutron is not aware of the virt driver used, it's impossible to avoid that inconsistency completely. Thank you, Takashi > > On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski > wrote: > >> Hi, >> >> Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez pisze: >> >> > Hello Takashi and Neutrinos: >> >> > >> >> > First of all, thank you for working on this. 
>> >> > >> >> > Currently users have the ability to override the host name using >> >> > "resource_provider_hypervisors". That means this parameter is always >> >> > configurable; IMO we are safe on this. >> >> > >> >> > The problem we have is how we should retrieve this host name if >> >> > "resource_provider_hypervisors" is not provided. I think the solution >> could >> >> > be a combination of: >> >> > >> >> > - A first patch providing the ability to select the hypervisor type. >> The >> >> > default one could be "libvirt". Each driver can have a particular >> host name >> >> > retrieval implementation. The default one will be the implemented >> right >> >> > now: "socket.gethostname()" >> >> > - https://review.opendev.org/c/openstack/neutron/+/788893, providing >> >> > full compatibility for libvirt. >> >> > >> >> > Those are my two cents. >> >> We can move on with the patch >> https://review.opendev.org/c/openstack/neutron/+/763563 to provide new >> config option as it's now and additionally implement >> https://review.opendev.org/c/openstack/neutron/+/788893 so users who are >> using libvirt will not need to change anything, but if someone is using >> other hypervisor, this will allow adjustments. Wdyt? >> >> > >> >> > Regards. >> >> > >> >> > >> >> > >> >> > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami >> >> > >> >> > wrote: >> >> > > Hi All, >> >> > > >> >> > > >> >> > > I've been working on bug 1926693[1], and am lost about the reasonable >> >> > > solutions we expect. Ideally I'd need to bring this topic in the team >> >> > > meeting >> >> > > but because of the timezone gap and complicated background, I'd like >> to >> >> > > gather some feedback in ml first. >> >> > > >> >> > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 >> >> > > >> >> > > TL;DR >> >> > > >> >> > > Which one(or ones) would be reasonable solutions for this issue ? >> >> > > >> >> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >> >> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 >> >> > > (3) Implement something different >> >> > > >> >> > > The issue I reported in the bug is that there is an inconsistency >> between >> >> > > nova and neutron about the way to determine a hypervisor name. >> >> > > Currently neutron uses socket.gethostname() (which always returns >> >> > > shortname) >> >> > > to determine a hypervisor name to search the corresponding resource >> >> > > provider. >> >> > > On the other hand, nova uses libvirt's getHostname function (if >> libvirt >> >> > > driver is used) >> >> > > which returns a canonical name. Canonical name can be shortname or >> FQDN >> >> > > (*1) >> >> > > and if FQDN is used then neutron and nova never agree. >> >> > > >> >> > > (*1) >> >> > > IMO this is likely to happen in real deployments. For example, >> TripelO uses >> >> > > FQDN for canonical names. >> >> > > >> >> > > Neutron already provides the resource_provider_defauly_hypervisors >> option >> >> > > to override a hypervisor name used. However because this option >> accepts >> >> > > a map between interface and hypervisor, setting this parameter >> requires >> >> > > very redundant description especially when a compute node has multiple >> >> > > interfaces/bridges. The following example shows how redundant the >> current >> >> > > requirement is. 
>> >> > > ~~~ >> >> > > [OVS] >> >> > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ >> >> > > br-data3:1024,1024,br-data4,1024:1024 >> >> > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ >> >> > > >> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain >> >> > > ~~~ >> >> > > >> >> > > I've submitted a change to propose a new single parameter to override >> >> > > the base hypervisor name but this is currently -2ed, mainly because >> >> > > I lacked analysis about the root cause of mismatch when I proposed >> this. >> >> > > >> >> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >> >> > > >> >> > > On the other hand, I submitted a different change to neutron which >> >> > > implements >> >> > > the logic to get a hypervisor name which is fully compatible with >> libvirt. >> >> > > While this would save users from even overriding hypervisor names, I'm >> >> > > aware >> >> > > that this might break the other virt driver which depends on a >> different >> >> > > logic >> >> > > to generate a hypervisor name. IMO the patch is still useful >> considering >> >> > > the libvirt driver would be the most popular option now, but I'm not >> fully >> >> > > aware of the impact on the other drivers, especially because I don't >> know >> >> > > which virt driver would support the minimum QoS feature now. >> >> > > >> >> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ >> >> > > >> >> > > In the review of (2), Sean mentioned implementing a logic to determine >> >> > > an appropriate resource provider(3) even if there is a mismatch about >> >> > > host name format, but I'm not sure how I would implement that, tbh. >> >> > > >> >> > > >> >> > > My current thought is to merge (1) as a quick solution first, and >> discuss >> >> > > whether >> >> > > we should merge (2), but I'd like to ask for some feedback about this >> plan >> >> > > (like we should NOT merge (2)). >> >> > > >> >> > > I'd appreciate your thoughts about this $topic. >> >> > > >> >> > > Thank you, >> >> > > Takashi >> >> >> -- >> >> Slawek Kaplonski >> >> Principal Software Engineer >> >> Red Hat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Jun 11 10:39:06 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 11 Jun 2021 12:39:06 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: <2993434.SUg3sCx5Oz@p1> Message-ID: Hello: I think I'm not explaining myself correctly. This is what I'm proposing: to provide a "hypervisor_type" variable in Neutron and implement, for each supported hypervisor, a hostname method retrieval. If we don't support the hypervisor used, the user can always provide the hostname via "resource_provider_hypervisors". Regards. On Fri, Jun 11, 2021 at 12:20 PM Takashi Kajinami wrote: > Hi Slawek and Radolfo, > > Thank you for your feedback. > > On Fri, Jun 11, 2021 at 5:47 PM Rodolfo Alonso Hernandez < > ralonsoh at redhat.com> wrote: > >> I agree with this idea but what >> https://review.opendev.org/c/openstack/neutron/+/763563 is proposing >> differs from what I'm saying: instead of providing the hostname (that is >> something we can do "resource_provider_hypervisors"), we should provide the >> hypervisor name (default: libvirt). >> > > The main problem is that the logic to determine "hypervisor name" is > different in each virt driver. 
> For example libvirt driver uses canonical name while power driver uses > [DEFAULT] host in nova.conf . > So if we fix compatibility with one virt driver then it would break > compatibility with the other driver. > Because neutron is not aware of the virt driver used, it's impossible to > avoid that inconsistency completely. > > > Thank you, > Takashi > > > > >> >> On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski >> wrote: >> >>> Hi, >>> >>> Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez >>> pisze: >>> >>> > Hello Takashi and Neutrinos: >>> >>> > >>> >>> > First of all, thank you for working on this. >>> >>> > >>> >>> > Currently users have the ability to override the host name using >>> >>> > "resource_provider_hypervisors". That means this parameter is always >>> >>> > configurable; IMO we are safe on this. >>> >>> > >>> >>> > The problem we have is how we should retrieve this host name if >>> >>> > "resource_provider_hypervisors" is not provided. I think the solution >>> could >>> >>> > be a combination of: >>> >>> > >>> >>> > - A first patch providing the ability to select the hypervisor >>> type. The >>> >>> > default one could be "libvirt". Each driver can have a particular >>> host name >>> >>> > retrieval implementation. The default one will be the implemented >>> right >>> >>> > now: "socket.gethostname()" >>> >>> > - https://review.opendev.org/c/openstack/neutron/+/788893, >>> providing >>> >>> > full compatibility for libvirt. >>> >>> > >>> >>> > Those are my two cents. >>> >>> We can move on with the patch >>> https://review.opendev.org/c/openstack/neutron/+/763563 to provide new >>> config option as it's now and additionally implement >>> https://review.opendev.org/c/openstack/neutron/+/788893 so users who >>> are using libvirt will not need to change anything, but if someone is using >>> other hypervisor, this will allow adjustments. Wdyt? >>> >>> > >>> >>> > Regards. >>> >>> > >>> >>> > >>> >>> > >>> >>> > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami >>> >>> > >>> >>> > wrote: >>> >>> > > Hi All, >>> >>> > > >>> >>> > > >>> >>> > > I've been working on bug 1926693[1], and am lost about the reasonable >>> >>> > > solutions we expect. Ideally I'd need to bring this topic in the team >>> >>> > > meeting >>> >>> > > but because of the timezone gap and complicated background, I'd like >>> to >>> >>> > > gather some feedback in ml first. >>> >>> > > >>> >>> > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 >>> >>> > > >>> >>> > > TL;DR >>> >>> > > >>> >>> > > Which one(or ones) would be reasonable solutions for this issue ? >>> >>> > > >>> >>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>> >>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 >>> >>> > > (3) Implement something different >>> >>> > > >>> >>> > > The issue I reported in the bug is that there is an inconsistency >>> between >>> >>> > > nova and neutron about the way to determine a hypervisor name. >>> >>> > > Currently neutron uses socket.gethostname() (which always returns >>> >>> > > shortname) >>> >>> > > to determine a hypervisor name to search the corresponding resource >>> >>> > > provider. >>> >>> > > On the other hand, nova uses libvirt's getHostname function (if >>> libvirt >>> >>> > > driver is used) >>> >>> > > which returns a canonical name. Canonical name can be shortname or >>> FQDN >>> >>> > > (*1) >>> >>> > > and if FQDN is used then neutron and nova never agree. 
>>> >>> > > >>> >>> > > (*1) >>> >>> > > IMO this is likely to happen in real deployments. For example, >>> TripelO uses >>> >>> > > FQDN for canonical names. >>> >>> > > >>> >>> > > Neutron already provides the resource_provider_defauly_hypervisors >>> option >>> >>> > > to override a hypervisor name used. However because this option >>> accepts >>> >>> > > a map between interface and hypervisor, setting this parameter >>> requires >>> >>> > > very redundant description especially when a compute node has >>> multiple >>> >>> > > interfaces/bridges. The following example shows how redundant the >>> current >>> >>> > > requirement is. >>> >>> > > ~~~ >>> >>> > > [OVS] >>> >>> > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ >>> >>> > > br-data3:1024,1024,br-data4,1024:1024 >>> >>> > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ >>> >>> > > >>> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain >>> >>> > > ~~~ >>> >>> > > >>> >>> > > I've submitted a change to propose a new single parameter to override >>> >>> > > the base hypervisor name but this is currently -2ed, mainly because >>> >>> > > I lacked analysis about the root cause of mismatch when I proposed >>> this. >>> >>> > > >>> >>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>> >>> > > >>> >>> > > On the other hand, I submitted a different change to neutron which >>> >>> > > implements >>> >>> > > the logic to get a hypervisor name which is fully compatible with >>> libvirt. >>> >>> > > While this would save users from even overriding hypervisor names, >>> I'm >>> >>> > > aware >>> >>> > > that this might break the other virt driver which depends on a >>> different >>> >>> > > logic >>> >>> > > to generate a hypervisor name. IMO the patch is still useful >>> considering >>> >>> > > the libvirt driver would be the most popular option now, but I'm not >>> fully >>> >>> > > aware of the impact on the other drivers, especially because I don't >>> know >>> >>> > > which virt driver would support the minimum QoS feature now. >>> >>> > > >>> >>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ >>> >>> > > >>> >>> > > In the review of (2), Sean mentioned implementing a logic to >>> determine >>> >>> > > an appropriate resource provider(3) even if there is a mismatch about >>> >>> > > host name format, but I'm not sure how I would implement that, tbh. >>> >>> > > >>> >>> > > >>> >>> > > My current thought is to merge (1) as a quick solution first, and >>> discuss >>> >>> > > whether >>> >>> > > we should merge (2), but I'd like to ask for some feedback about >>> this plan >>> >>> > > (like we should NOT merge (2)). >>> >>> > > >>> >>> > > I'd appreciate your thoughts about this $topic. >>> >>> > > >>> >>> > > Thank you, >>> >>> > > Takashi >>> >>> >>> -- >>> >>> Slawek Kaplonski >>> >>> Principal Software Engineer >>> >>> Red Hat >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Fri Jun 11 11:14:03 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 11 Jun 2021 20:14:03 +0900 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: <2993434.SUg3sCx5Oz@p1> Message-ID: Hi Radolfo, Thank you for your clarification and sorry I misread what you wrote. 
My concern with that approach is that adding the hypervisor_type parameter would mean neutron will implement a logic for the other virt drivers, which is currently maintained in nova or hypervisor like libvirt in the future and it would expand the scope of neutron too much. IIUC current Neutron doesn't care about virt drivers used, and I agree with Slawek that it's better to keep that current design here. Thank you, Takashi On Fri, Jun 11, 2021 at 7:39 PM Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > Hello: > > I think I'm not explaining myself correctly. This is what I'm proposing: > to provide a "hypervisor_type" variable in Neutron and implement, for each > supported hypervisor, a hostname method retrieval. > > If we don't support the hypervisor used, the user can always provide the > hostname via "resource_provider_hypervisors". > > Regards. > > On Fri, Jun 11, 2021 at 12:20 PM Takashi Kajinami > wrote: > >> Hi Slawek and Radolfo, >> >> Thank you for your feedback. >> >> On Fri, Jun 11, 2021 at 5:47 PM Rodolfo Alonso Hernandez < >> ralonsoh at redhat.com> wrote: >> >>> I agree with this idea but what >>> https://review.opendev.org/c/openstack/neutron/+/763563 is proposing >>> differs from what I'm saying: instead of providing the hostname (that is >>> something we can do "resource_provider_hypervisors"), we should provide the >>> hypervisor name (default: libvirt). >>> >> >> The main problem is that the logic to determine "hypervisor name" is >> different in each virt driver. >> For example libvirt driver uses canonical name while power driver uses >> [DEFAULT] host in nova.conf . >> So if we fix compatibility with one virt driver then it would break >> compatibility with the other driver. >> Because neutron is not aware of the virt driver used, it's impossible to >> avoid that inconsistency completely. >> >> >> Thank you, >> Takashi >> >> >> >> >>> >>> On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski >>> wrote: >>> >>>> Hi, >>>> >>>> Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez >>>> pisze: >>>> >>>> > Hello Takashi and Neutrinos: >>>> >>>> > >>>> >>>> > First of all, thank you for working on this. >>>> >>>> > >>>> >>>> > Currently users have the ability to override the host name using >>>> >>>> > "resource_provider_hypervisors". That means this parameter is always >>>> >>>> > configurable; IMO we are safe on this. >>>> >>>> > >>>> >>>> > The problem we have is how we should retrieve this host name if >>>> >>>> > "resource_provider_hypervisors" is not provided. I think the solution >>>> could >>>> >>>> > be a combination of: >>>> >>>> > >>>> >>>> > - A first patch providing the ability to select the hypervisor >>>> type. The >>>> >>>> > default one could be "libvirt". Each driver can have a particular >>>> host name >>>> >>>> > retrieval implementation. The default one will be the implemented >>>> right >>>> >>>> > now: "socket.gethostname()" >>>> >>>> > - https://review.opendev.org/c/openstack/neutron/+/788893, >>>> providing >>>> >>>> > full compatibility for libvirt. >>>> >>>> > >>>> >>>> > Those are my two cents. >>>> >>>> We can move on with the patch >>>> https://review.opendev.org/c/openstack/neutron/+/763563 to provide new >>>> config option as it's now and additionally implement >>>> https://review.opendev.org/c/openstack/neutron/+/788893 so users who >>>> are using libvirt will not need to change anything, but if someone is using >>>> other hypervisor, this will allow adjustments. Wdyt? >>>> >>>> > >>>> >>>> > Regards. 
>>>> >>>> > >>>> >>>> > >>>> >>>> > >>>> >>>> > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami >>> > >>>> >>>> > >>>> >>>> > wrote: >>>> >>>> > > Hi All, >>>> >>>> > > >>>> >>>> > > >>>> >>>> > > I've been working on bug 1926693[1], and am lost about the >>>> reasonable >>>> >>>> > > solutions we expect. Ideally I'd need to bring this topic in the >>>> team >>>> >>>> > > meeting >>>> >>>> > > but because of the timezone gap and complicated background, I'd >>>> like to >>>> >>>> > > gather some feedback in ml first. >>>> >>>> > > >>>> >>>> > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 >>>> >>>> > > >>>> >>>> > > TL;DR >>>> >>>> > > >>>> >>>> > > Which one(or ones) would be reasonable solutions for this issue ? >>>> >>>> > > >>>> >>>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>>> >>>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 >>>> >>>> > > (3) Implement something different >>>> >>>> > > >>>> >>>> > > The issue I reported in the bug is that there is an inconsistency >>>> between >>>> >>>> > > nova and neutron about the way to determine a hypervisor name. >>>> >>>> > > Currently neutron uses socket.gethostname() (which always returns >>>> >>>> > > shortname) >>>> >>>> > > to determine a hypervisor name to search the corresponding resource >>>> >>>> > > provider. >>>> >>>> > > On the other hand, nova uses libvirt's getHostname function (if >>>> libvirt >>>> >>>> > > driver is used) >>>> >>>> > > which returns a canonical name. Canonical name can be shortname or >>>> FQDN >>>> >>>> > > (*1) >>>> >>>> > > and if FQDN is used then neutron and nova never agree. >>>> >>>> > > >>>> >>>> > > (*1) >>>> >>>> > > IMO this is likely to happen in real deployments. For example, >>>> TripelO uses >>>> >>>> > > FQDN for canonical names. >>>> >>>> > > >>>> >>>> > > Neutron already provides the resource_provider_defauly_hypervisors >>>> option >>>> >>>> > > to override a hypervisor name used. However because this option >>>> accepts >>>> >>>> > > a map between interface and hypervisor, setting this parameter >>>> requires >>>> >>>> > > very redundant description especially when a compute node has >>>> multiple >>>> >>>> > > interfaces/bridges. The following example shows how redundant the >>>> current >>>> >>>> > > requirement is. >>>> >>>> > > ~~~ >>>> >>>> > > [OVS] >>>> >>>> > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ >>>> >>>> > > br-data3:1024,1024,br-data4,1024:1024 >>>> >>>> > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ >>>> >>>> > > >>>> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain >>>> >>>> > > ~~~ >>>> >>>> > > >>>> >>>> > > I've submitted a change to propose a new single parameter to >>>> override >>>> >>>> > > the base hypervisor name but this is currently -2ed, mainly because >>>> >>>> > > I lacked analysis about the root cause of mismatch when I proposed >>>> this. >>>> >>>> > > >>>> >>>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>>> >>>> > > >>>> >>>> > > On the other hand, I submitted a different change to neutron which >>>> >>>> > > implements >>>> >>>> > > the logic to get a hypervisor name which is fully compatible with >>>> libvirt. >>>> >>>> > > While this would save users from even overriding hypervisor names, >>>> I'm >>>> >>>> > > aware >>>> >>>> > > that this might break the other virt driver which depends on a >>>> different >>>> >>>> > > logic >>>> >>>> > > to generate a hypervisor name. 
IMO the patch is still useful >>>> considering >>>> >>>> > > the libvirt driver would be the most popular option now, but I'm >>>> not fully >>>> >>>> > > aware of the impact on the other drivers, especially because I >>>> don't know >>>> >>>> > > which virt driver would support the minimum QoS feature now. >>>> >>>> > > >>>> >>>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ >>>> >>>> > > >>>> >>>> > > In the review of (2), Sean mentioned implementing a logic to >>>> determine >>>> >>>> > > an appropriate resource provider(3) even if there is a mismatch >>>> about >>>> >>>> > > host name format, but I'm not sure how I would implement that, tbh. >>>> >>>> > > >>>> >>>> > > >>>> >>>> > > My current thought is to merge (1) as a quick solution first, and >>>> discuss >>>> >>>> > > whether >>>> >>>> > > we should merge (2), but I'd like to ask for some feedback about >>>> this plan >>>> >>>> > > (like we should NOT merge (2)). >>>> >>>> > > >>>> >>>> > > I'd appreciate your thoughts about this $topic. >>>> >>>> > > >>>> >>>> > > Thank you, >>>> >>>> > > Takashi >>>> >>>> >>>> -- >>>> >>>> Slawek Kaplonski >>>> >>>> Principal Software Engineer >>>> >>>> Red Hat >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Fri Jun 11 11:19:34 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Fri, 11 Jun 2021 19:19:34 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host Message-ID: I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is not found during the host configure part. PLAY [Verify that the Kayobe Ansible user account is accessible] TASK [Verify that a command can be executed] fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} Python3 is installed on the host. When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: $ grep -rni -e "platform-python" venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc): os_distribution: ubuntu os_release: focal Regards, Tony Pearce -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Jun 11 11:26:13 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 11 Jun 2021 13:26:13 +0200 Subject: [masakari] Compute service with name XXXXX not found. In-Reply-To: <6241D8B0-5DF1-4A46-9089-F2A9A7C978E5@poczta.onet.pl> References: <6241D8B0-5DF1-4A46-9089-F2A9A7C978E5@poczta.onet.pl> Message-ID: On Fri, Jun 11, 2021 at 11:49 AM at wrote: > > Hi, Hello, > I have some problem with masakari. I can create segment (from CLI and Horizon), but can't create host (the same result from Horizon and CLI). > > openstack segment host create XXXXX COMPUTE SSH segment_id > > returns BadRequest: Compute service with name XXXXX could not be found. 
> XXXXX is the name which Horizon suggest, and it's a name of compute host. > > openstack compute service list > returns proper list with state up/enabled on compute hosts (zone nova) > Maybe I misunderstood some parameters of host create? As "type" I use COMPUTE, what value should it be? From "Binary" column of openstack compute service list? What is "control_attributes" field, because documentation lacks preceise information what value should be there and what is it use for. Tried to found some info on this error but I haven't found anything... The name should be as it is listed by nova. Masakari is querying nova for that compute host. The exact query can be run using: openstack compute service list --service nova-compute --host $HOSTNAME where $HOSTNAME is the desired hostname. The type should be "COMPUTE" and folks often use "SSH" for control_attributes (but it has no meaning). -yoctozepto From owalsh at redhat.com Fri Jun 11 11:47:52 2021 From: owalsh at redhat.com (Oliver Walsh) Date: Fri, 11 Jun 2021 12:47:52 +0100 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: Hi Takashi, On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami wrote: > Hi All, > > > I've been working on bug 1926693[1], and am lost about the reasonable > solutions we expect. Ideally I'd need to bring this topic in the team > meeting > but because of the timezone gap and complicated background, I'd like to > gather some feedback in ml first. > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > TL;DR > Which one(or ones) would be reasonable solutions for this issue ? > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > (3) Implement something different > > The issue I reported in the bug is that there is an inconsistency between > nova and neutron about the way to determine a hypervisor name. > Currently neutron uses socket.gethostname() (which always returns > shortname) > socket.gethostname() can return fqdn or shortname - https://docs.python.org/3/library/socket.html#socket.gethostname. I've seen cases where it switched from short to fqdn but I'm not sure of the root cause - DHCP lease setting a hostname/domainname perhaps. Thanks, Ollie to determine a hypervisor name to search the corresponding resource > provider. > On the other hand, nova uses libvirt's getHostname function (if libvirt > driver is used) > which returns a canonical name. Canonical name can be shortname or FQDN > (*1) > and if FQDN is used then neutron and nova never agree. > > (*1) > IMO this is likely to happen in real deployments. For example, TripelO uses > FQDN for canonical names. > > Neutron already provides the resource_provider_defauly_hypervisors option > to override a hypervisor name used. However because this option accepts > a map between interface and hypervisor, setting this parameter requires > very redundant description especially when a compute node has multiple > interfaces/bridges. The following example shows how redundant the current > requirement is. 
> ~~~ > [OVS] > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ > br-data3:1024,1024,br-data4,1024:1024 > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > ~~~ > > I've submitted a change to propose a new single parameter to override > the base hypervisor name but this is currently -2ed, mainly because > I lacked analysis about the root cause of mismatch when I proposed this. > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > On the other hand, I submitted a different change to neutron which > implements > the logic to get a hypervisor name which is fully compatible with libvirt. > While this would save users from even overriding hypervisor names, I'm > aware > that this might break the other virt driver which depends on a different > logic > to generate a hypervisor name. IMO the patch is still useful considering > the libvirt driver would be the most popular option now, but I'm not fully > aware of the impact on the other drivers, especially because I don't know > which virt driver would support the minimum QoS feature now. > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > In the review of (2), Sean mentioned implementing a logic to determine > an appropriate resource provider(3) even if there is a mismatch about > host name format, but I'm not sure how I would implement that, tbh. > > > My current thought is to merge (1) as a quick solution first, and discuss > whether > we should merge (2), but I'd like to ask for some feedback about this plan > (like we should NOT merge (2)). > > I'd appreciate your thoughts about this $topic. > > Thank you, > Takashi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Fri Jun 11 12:47:03 2021 From: bkslash at poczta.onet.pl (at) Date: Fri, 11 Jun 2021 14:47:03 +0200 Subject: [masakari] Compute service with name XXXXX not found. In-Reply-To: References: Message-ID: <15C953C2-6BC2-4FC8-A6BD-9AB177911ADC@poczta.onet.pl> Hi, thx for the answer. > openstack compute service list --service nova-compute --host $HOSTNAME so in openstack segment host create I should use name which is displayed in "Host" column, right? So that's what I do :( openstack compute service list --service nova-compute ID Binary Host Zone Status State 20 nova-compute XXXXX nova enabled up openstack segment host create XXXXX COMPUTE SSH 00dd5bxxxxxx and still "Compute service with name XXXXX could not be found"..... How masakari discovers hosts? Best regards Adam Tomas > Wiadomość napisana przez Radosław Piliszek w dniu 11.06.2021, o godz. 13:26: > > On Fri, Jun 11, 2021 at 11:49 AM at wrote: >> >> Hi, > > Hello, > >> I have some problem with masakari. I can create segment (from CLI and Horizon), but can't create host (the same result from Horizon and CLI). >> >> openstack segment host create XXXXX COMPUTE SSH segment_id >> >> returns BadRequest: Compute service with name XXXXX could not be found. >> XXXXX is the name which Horizon suggest, and it's a name of compute host. >> >> openstack compute service list >> returns proper list with state up/enabled on compute hosts (zone nova) >> Maybe I misunderstood some parameters of host create? As "type" I use COMPUTE, what value should it be? From "Binary" column of openstack compute service list? What is "control_attributes" field, because documentation lacks preceise information what value should be there and what is it use for. 
Tried to found some info on this error but I haven't found anything... > > The name should be as it is listed by nova. Masakari is querying nova > for that compute host. The exact query can be run using: > openstack compute service list --service nova-compute --host $HOSTNAME > where $HOSTNAME is the desired hostname. > The type should be "COMPUTE" and folks often use "SSH" for > control_attributes (but it has no meaning). > > -yoctozepto From radoslaw.piliszek at gmail.com Fri Jun 11 12:56:06 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 11 Jun 2021 14:56:06 +0200 Subject: [masakari] Compute service with name XXXXX not found. In-Reply-To: <15C953C2-6BC2-4FC8-A6BD-9AB177911ADC@poczta.onet.pl> References: <15C953C2-6BC2-4FC8-A6BD-9AB177911ADC@poczta.onet.pl> Message-ID: On Fri, Jun 11, 2021 at 2:47 PM at wrote: > > Hi, thx for the answer. > > openstack compute service list --service nova-compute --host $HOSTNAME > so in > > openstack segment host create > > I should use name which is displayed in "Host" column, right? So that's what I do :( Yes. > openstack compute service list --service nova-compute > > ID Binary Host Zone Status State > 20 nova-compute XXXXX nova enabled up > > openstack segment host create XXXXX COMPUTE SSH 00dd5bxxxxxx > > and still "Compute service with name XXXXX could not be found"..... > > How masakari discovers hosts? I wrote this already: openstack compute service list --service nova-compute --host $HOSTNAME did you try including the same hostname in this command? If it works and Masakari does not, I would make sure you set up Masakari to speak to the right Nova API. Finally, if all else fails, please paste (e.g. https://paste.ubuntu.com/ ) masakari api logs for those rejected host creations. Though do that with debug=True in the config [DEFAULT] section. -yoctozepto From pierre at stackhpc.com Fri Jun 11 13:04:26 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 11 Jun 2021 15:04:26 +0200 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Tony, Kayobe doesn't use platform-python anymore, on both stable/wallaby and stable/victoria: https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 Can you double-check what version you are using, and share how you installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. Best wishes, Pierre On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: > > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is not found during the host configure part. > > PLAY [Verify that the Kayobe Ansible user account is accessible] > TASK [Verify that a command can be executed] > > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} > > Python3 is installed on the host. 
When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: > > $ grep -rni -e "platform-python" > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python > > I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. > > Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc): > > os_distribution: ubuntu > os_release: focal > > Regards, > > Tony Pearce > From skaplons at redhat.com Fri Jun 11 13:13:52 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 11 Jun 2021 15:13:52 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: <1856963.qdIGSAVlHa@p1> Hi, Dnia piątek, 11 czerwca 2021 13:14:03 CEST Takashi Kajinami pisze: > Hi Radolfo, > > Thank you for your clarification and sorry I misread what you wrote. > > My concern with that approach is that adding the hypervisor_type parameter > would mean > neutron will implement a logic for the other virt drivers, which is > currently maintained in > nova or hypervisor like libvirt in the future and it would expand the scope > of neutron too much. > > IIUC current Neutron doesn't care about virt drivers used, and I agree with > Slawek that > it's better to keep that current design here. > > Thank you, > Takashi > > > On Fri, Jun 11, 2021 at 7:39 PM Rodolfo Alonso Hernandez < > > ralonsoh at redhat.com> wrote: > > Hello: > > > > I think I'm not explaining myself correctly. This is what I'm proposing: > > to provide a "hypervisor_type" variable in Neutron and implement, for each > > supported hypervisor, a hostname method retrieval. > > > > If we don't support the hypervisor used, the user can always provide the > > hostname via "resource_provider_hypervisors". I'm not sure if adding "hypervisor drivers" to neutron is good idea. Solution proposed by Takashi is simpler IMHO. If user just want's to override hostname for all resources, this new option can be used. But in some case, where it's needed to do it "per bridge", that's also possible. I know it's maybe not perfect but IMO still better than nothing. > > > > Regards. > > > > On Fri, Jun 11, 2021 at 12:20 PM Takashi Kajinami > > > > wrote: > >> Hi Slawek and Radolfo, > >> > >> Thank you for your feedback. > >> > >> On Fri, Jun 11, 2021 at 5:47 PM Rodolfo Alonso Hernandez < > >> > >> ralonsoh at redhat.com> wrote: > >>> I agree with this idea but what > >>> https://review.opendev.org/c/openstack/neutron/+/763563 is proposing > >>> differs from what I'm saying: instead of providing the hostname (that is > >>> something we can do "resource_provider_hypervisors"), we should provide the > >>> hypervisor name (default: libvirt). > >> > >> The main problem is that the logic to determine "hypervisor name" is > >> different in each virt driver. > >> For example libvirt driver uses canonical name while power driver uses > >> [DEFAULT] host in nova.conf . > >> So if we fix compatibility with one virt driver then it would break > >> compatibility with the other driver. > >> Because neutron is not aware of the virt driver used, it's impossible to > >> avoid that inconsistency completely. 
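(For illustration only, a minimal sketch of the two name formats being compared above, run on a hypothetical compute node; mapping these commands onto the exact neutron/libvirt lookups is an assumption, not something taken from the patches under review:

$ hostname
compute0
$ hostname -f
compute0.mydomain

If the two differ, the resource provider neutron searches for will not match the hypervisor name nova reports.)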
> >> > >> > >> Thank you, > >> Takashi > >> > >>> On Fri, Jun 11, 2021 at 10:36 AM Slawek Kaplonski > >>> > >>> wrote: > >>>> Hi, > >>>> > >>>> Dnia piątek, 11 czerwca 2021 09:57:27 CEST Rodolfo Alonso Hernandez > >>>> > >>>> pisze: > >>>> > Hello Takashi and Neutrinos: > >>>> > > >>>> > > >>>> > > >>>> > First of all, thank you for working on this. > >>>> > > >>>> > > >>>> > > >>>> > Currently users have the ability to override the host name using > >>>> > > >>>> > "resource_provider_hypervisors". That means this parameter is always > >>>> > > >>>> > configurable; IMO we are safe on this. > >>>> > > >>>> > > >>>> > > >>>> > The problem we have is how we should retrieve this host name if > >>>> > > >>>> > "resource_provider_hypervisors" is not provided. I think the solution > >>>> > >>>> could > >>>> > >>>> > be a combination of: > >>>> > - A first patch providing the ability to select the hypervisor > >>>> > >>>> type. The > >>>> > >>>> > default one could be "libvirt". Each driver can have a particular > >>>> > >>>> host name > >>>> > >>>> > retrieval implementation. The default one will be the implemented > >>>> > >>>> right > >>>> > >>>> > now: "socket.gethostname()" > >>>> > > >>>> > - https://review.opendev.org/c/openstack/neutron/+/788893, > >>>> > >>>> providing > >>>> > >>>> > full compatibility for libvirt. > >>>> > > >>>> > Those are my two cents. > >>>> > >>>> We can move on with the patch > >>>> https://review.opendev.org/c/openstack/neutron/+/763563 to provide new > >>>> config option as it's now and additionally implement > >>>> https://review.opendev.org/c/openstack/neutron/+/788893 so users who > >>>> are using libvirt will not need to change anything, but if someone is using > >>>> other hypervisor, this will allow adjustments. Wdyt? > >>>> > >>>> > Regards. > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > On Thu, Jun 10, 2021 at 4:12 PM Takashi Kajinami >>>> > > >>>> > wrote: > >>>> > > Hi All, > >>>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> > > I've been working on bug 1926693[1], and am lost about the > >>>> > >>>> reasonable > >>>> > >>>> > > solutions we expect. Ideally I'd need to bring this topic in the > >>>> > >>>> team > >>>> > >>>> > > meeting > >>>> > > > >>>> > > but because of the timezone gap and complicated background, I'd > >>>> > >>>> like to > >>>> > >>>> > > gather some feedback in ml first. > >>>> > > > >>>> > > > >>>> > > > >>>> > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > >>>> > > > >>>> > > > >>>> > > > >>>> > > TL;DR > >>>> > > > >>>> > > Which one(or ones) would be reasonable solutions for this issue ? > >>>> > > > >>>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > >>>> > > > >>>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > >>>> > > > >>>> > > (3) Implement something different > >>>> > > > >>>> > > The issue I reported in the bug is that there is an inconsistency > >>>> > >>>> between > >>>> > >>>> > > nova and neutron about the way to determine a hypervisor name. > >>>> > > > >>>> > > Currently neutron uses socket.gethostname() (which always returns > >>>> > > > >>>> > > shortname) > >>>> > > > >>>> > > to determine a hypervisor name to search the corresponding resource > >>>> > > > >>>> > > provider. > >>>> > > > >>>> > > On the other hand, nova uses libvirt's getHostname function (if > >>>> > >>>> libvirt > >>>> > >>>> > > driver is used) > >>>> > > > >>>> > > which returns a canonical name. 
Canonical name can be shortname or > >>>> > >>>> FQDN > >>>> > >>>> > > (*1) > >>>> > > > >>>> > > and if FQDN is used then neutron and nova never agree. > >>>> > > > >>>> > > > >>>> > > > >>>> > > (*1) > >>>> > > > >>>> > > IMO this is likely to happen in real deployments. For example, > >>>> > >>>> TripelO uses > >>>> > >>>> > > FQDN for canonical names. > >>>> > > > >>>> > > > >>>> > > > >>>> > > Neutron already provides the resource_provider_defauly_hypervisors > >>>> > >>>> option > >>>> > >>>> > > to override a hypervisor name used. However because this option > >>>> > >>>> accepts > >>>> > >>>> > > a map between interface and hypervisor, setting this parameter > >>>> > >>>> requires > >>>> > >>>> > > very redundant description especially when a compute node has > >>>> > >>>> multiple > >>>> > >>>> > > interfaces/bridges. The following example shows how redundant the > >>>> > >>>> current > >>>> > >>>> > > requirement is. > >>>> > > > >>>> > > ~~~ > >>>> > > > >>>> > > [OVS] > >>>> > > > >>>> > > resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024, \ > >>>> > > > >>>> > > br-data3:1024,1024,br-data4,1024:1024 > >>>> > > > >>>> > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > >>>> > >>>> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain > >>>> > >>>> > > ~~~ > >>>> > > > >>>> > > > >>>> > > > >>>> > > I've submitted a change to propose a new single parameter to > >>>> > >>>> override > >>>> > >>>> > > the base hypervisor name but this is currently -2ed, mainly because > >>>> > > > >>>> > > I lacked analysis about the root cause of mismatch when I proposed > >>>> > >>>> this. > >>>> > >>>> > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > >>>> > > > >>>> > > On the other hand, I submitted a different change to neutron which > >>>> > > > >>>> > > implements > >>>> > > > >>>> > > the logic to get a hypervisor name which is fully compatible with > >>>> > >>>> libvirt. > >>>> > >>>> > > While this would save users from even overriding hypervisor names, > >>>> > >>>> I'm > >>>> > >>>> > > aware > >>>> > > > >>>> > > that this might break the other virt driver which depends on a > >>>> > >>>> different > >>>> > >>>> > > logic > >>>> > > > >>>> > > to generate a hypervisor name. IMO the patch is still useful > >>>> > >>>> considering > >>>> > >>>> > > the libvirt driver would be the most popular option now, but I'm > >>>> > >>>> not fully > >>>> > >>>> > > aware of the impact on the other drivers, especially because I > >>>> > >>>> don't know > >>>> > >>>> > > which virt driver would support the minimum QoS feature now. > >>>> > > > >>>> > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > >>>> > > > >>>> > > In the review of (2), Sean mentioned implementing a logic to > >>>> > >>>> determine > >>>> > >>>> > > an appropriate resource provider(3) even if there is a mismatch > >>>> > >>>> about > >>>> > >>>> > > host name format, but I'm not sure how I would implement that, tbh. > >>>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> > > > >>>> > > My current thought is to merge (1) as a quick solution first, and > >>>> > >>>> discuss > >>>> > >>>> > > whether > >>>> > > > >>>> > > we should merge (2), but I'd like to ask for some feedback about > >>>> > >>>> this plan > >>>> > >>>> > > (like we should NOT merge (2)). > >>>> > > > >>>> > > > >>>> > > > >>>> > > I'd appreciate your thoughts about this $topic. 
> >>>> > > > >>>> > > > >>>> > > > >>>> > > Thank you, > >>>> > > > >>>> > > Takashi > >>>> > >>>> -- > >>>> > >>>> Slawek Kaplonski > >>>> > >>>> Principal Software Engineer > >>>> > >>>> Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From balazs.gibizer at est.tech Fri Jun 11 14:02:26 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 11 Jun 2021 16:02:26 +0200 Subject: [nova][gate] openstack-tox-pep8 job broken In-Reply-To: References: Message-ID: <20JJUQ.TK9GSFVBOA1F1@est.tech> Hi, We merged https://review.opendev.org/c/openstack/nova/+/795744 instead to disable the failing job as the requirement patch bounced multiple times from the gate. The gate is unblocked now, but please be aware that we are tracking multiple nova CI instabilities that still causing the need of excessive rechecks. cheers, gibi On Wed, Jun 9, 2021 at 20:27, melanie witt wrote: > Hi all, > > The openstack-tox-pep8 job is currently failing with the following > error: > >> nova/crypto.py:39:1: error: Library stubs not installed for >> "paramiko" (or incompatible with Python 3.8) >> nova/crypto.py:39:1: note: Hint: "python3 -m pip install >> types-paramiko" >> nova/crypto.py:39:1: note: (or run "mypy --install-types" to install >> all missing stub packages) >> nova/crypto.py:39:1: note: See >> https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports >> Found 1 error in 1 file (checked 23 source files) >> ERROR: InvocationError for command /usr/bin/bash tools/mypywrap.sh >> (exited with code 1) > > Please hold your rechecks until the fix merges: > > https://review.opendev.org/c/openstack/nova/+/795533 > > Cheers, > -melanie > From vikash.kumarprasad at siemens.com Fri Jun 11 07:49:28 2021 From: vikash.kumarprasad at siemens.com (Kumar Prasad, Vikash) Date: Fri, 11 Jun 2021 07:49:28 +0000 Subject: How to flush UE context from USRP B210 Message-ID: Dear All, I am using USRP B210 for my eNB. USRP B210 is storing the context of previous connected UEs, I want to flush the UE context from USRP, could anyone suggest me how I can flush this previously connected UEs contexts? Thanks Vikash kumar prasad -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Fri Jun 11 08:35:46 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Fri, 11 Jun 2021 16:35:46 +0800 Subject: kayobe deploying Openstack - Train or Victoria Message-ID: I am running into issues with deploying Openstack using Kayobe. Is this list group the best place to raise? If not, my apologies - please could you share where I need to go? 1. kayobe hangs during deployment, does not time out, does not error out when has previously been successful and without configuration changes. 2. deployment fails due to breaking the network relating to the bridge. Also changes login password which locks out of console. Details: 1. Environment deployment: Train all-in-one host: centos7 Ansible control host: Ubuntu 18 With the first issue, I've seen this multiple times but have not been able to find the root cause. I searched online and came across other ansible users that state their playbooks were hanging and were able to resolve by clearing cache. I've tried clearing out ~/.ansible/* and /tmp/* on the control host. Also tried doing the same on the all-in-one host without success. 
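For reference, the cache clearing was along these lines (a rough sketch from memory rather than an exact transcript, assuming a bash shell on the control host):

$ rm -rf ~/.ansible/*
$ sudo rm -rf /tmp/*    # sudo only needed for files not owned by the deploy user

The equivalent was then repeated on the all-in-one host before redeploying.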
This issue came about after doing a full destroy of the environment and then redeploying, making a minor config change and then redeploying. 2. Environment deployment: Victoria all-in-one host: centos8 Ansible control host: Ubuntu 20 Because I couldnt resolve the issue above and centos 8.4 is available, I decided to try and go to centos 8 and deploy Victoria. I hit 2 issues: 1. I am unable to login with "kayobe_ansible_user" using after "kayobe overcloud host configure" with "wrong password" message. Resetting the password resolves but the password seems changed again with a host configure. 2. deployment fails when "TASK [openvswitch : Ensuring OVS bridge is properly setup]" Looking at the ovs container, it's unable to add the physical interface to the bridge bond0 interface, complaining that the device is busy or is already up. I saw some log messages relating to ipv6 so I tried disabling ipv6 and redeploying but the same issue. I then rebuilt the host again and opted not to use a bond0 interface however the same loss of network occurs. If I log into the openvswitch_db container then there are CLI commands where I can delete the bridge to restore the network. So at this point, after turning off ipv6 again and running without bond0 bond, I tried another deploy wile tailing `/var/log/kolla/openvswitch/ovs-vswitchd.log` and now I do not see any errors but the network is lost to the host and the script fails to finish deployment. I've attached the logs that appear at the point the network dies: https://pasteboard.co/K65qwzR.png Are these known issues and does anyone have any information as to how I can work through or around them? Regards, Tony Pearce -------------- next part -------------- An HTML attachment was scrubbed... URL: From surabhi.kumari at siemens.com Fri Jun 11 13:04:10 2021 From: surabhi.kumari at siemens.com (Kumari, Surabhi) Date: Fri, 11 Jun 2021 13:04:10 +0000 Subject: How to flush UE context from USRP B210 In-Reply-To: References: Message-ID: In addition to the query asked by Vikash, When we connect our eNB(USRP210) to core, we get continue log for LTE_RRCConnectionReestablishmentRequest and uplink failure timer timeout. Can anyone suggest what's the reason behind these errors? 
RRC] [FRAME 00507][eNB][MOD 00][RNTI 4bbc] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE
[RRC] [FRAME 00507][eNB][MOD 00][RNTI 4bbc] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1)
[MAC] Removing UE 0 from Primary CC_id 0 (rnti 4bbc)
[RRC] [FRAME 00557][eNB][MOD 00][RNTI 411c] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e1a8c1)
[RRC] [FRAME 00557][eNB][MOD 00][RNTI 411c] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure
[RRC] [FRAME 00557][eNB][MOD 00][RNTI 411c] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE
[RRC] [FRAME 00557][eNB][MOD 00][RNTI 411c] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1)
[MAC] Removing UE 0 from Primary CC_id 0 (rnti 411c)
[RRC] [FRAME 00559][eNB][MOD 00][RNTI c84e] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e156c1)
[RRC] [FRAME 00559][eNB][MOD 00][RNTI c84e] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure
[RRC] [FRAME 00559][eNB][MOD 00][RNTI c84e] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE
[RRC] [FRAME 00559][eNB][MOD 00][RNTI c84e] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1)
[MAC] Removing UE 0 from Primary CC_id 0 (rnti c84e)
[RRC] Removing UE 8413 instance, because of uplink failure timer timeout
[RRC] [eNB 0] Removing UE RNTI 8413
[RRC] Put UE 8413 into freeList
[MAC] rrc_mac_remove_ue: UE 8413 not found
[RRC] [FRAME 00000][eNB][MOD 00][RNTI 8413] Removed UE context
[RRC] [release_UE_in_freeList] remove UE 8413 from freeList
[RRC] Removing UE 5ce6 instance, because of uplink failure timer timeout
[RRC] [eNB 0] Removing UE RNTI 5ce6
[RRC] Put UE 5ce6 into freeList
[MAC] rrc_mac_remove_ue: UE 5ce6 not found
[RRC] [FRAME 00000][eNB][MOD 00][RNTI 5ce6] Removed UE context
[RRC] [release_UE_in_freeList] remove UE 5ce6 from freeList
[RRC] [FRAME 00611][eNB][MOD 00][RNTI baef] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e0b1e1)
[RRC] [FRAME 00611][eNB][MOD 00][RNTI baef] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure
[RRC] [FRAME 00611][eNB][MOD 00][RNTI baef] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE
[RRC] [FRAME 00611][eNB][MOD 00][RNTI baef] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1)
[MAC] Removing UE 0 from Primary CC_id 0 (rnti baef)
[RRC] [FRAME 00629][eNB][MOD 00][RNTI 4750] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e1a8c1)
[RRC] [FRAME 00629][eNB][MOD 00][RNTI 4750] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure
[RRC] [FRAME 00629][eNB][MOD 00][RNTI 4750] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE
[RRC] [FRAME 00629][eNB][MOD 00][RNTI 4750] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1)
[MAC] Removing UE 0 from Primary CC_id 0 (rnti 4750)
[RRC] Removing UE 81f0 instance, because of uplink failure timer timeout
[RRC] [eNB 0] Removing UE RNTI 81f0
[RRC] Put UE 81f0 into freeList
[MAC] rrc_mac_remove_ue: UE 81f0 not found
[RRC] [FRAME 00000][eNB][MOD 00][RNTI 81f0] Removed UE context
[RRC] [release_UE_in_freeList] remove UE 81f0 from freeList
[RRC] [FRAME 00711][eNB][MOD 00][RNTI d37d] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e156c1)
[RRC] [FRAME 00711][eNB][MOD 00][RNTI d37d] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure
[RRC] [FRAME 00711][eNB][MOD 00][RNTI d37d] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE
[RRC] [FRAME 00711][eNB][MOD 00][RNTI d37d] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1)
[MAC] Removing UE 0 from Primary CC_id 0 (rnti d37d)
[RRC] Removing UE 6f7e instance, because of uplink failure timer timeout
[RRC] [eNB 0] Removing UE RNTI 6f7e
[RRC] Put UE 6f7e into freeList
[MAC] rrc_mac_remove_ue: UE 6f7e not found
[RRC] [FRAME 00000][eNB][MOD 00][RNTI 6f7e] Removed UE context
[RRC] [release_UE_in_freeList] remove UE 6f7e from freeList
[RRC] [FRAME 00739][eNB][MOD 00][RNTI 2ac7] Decoding UL CCCH 0.0.0.0.0.0 (0x564f70e0b1e1)
[RRC] [FRAME 00739][eNB][MOD 00][RNTI 2ac7] LTE_RRCConnectionReestablishmentRequest cause reconfigurationFailure
[RRC] [FRAME 00739][eNB][MOD 00][RNTI 2ac7] LTE_RRCConnectionReestablishmentRequest without UE context, let's reject the UE
[RRC] [FRAME 00739][eNB][MOD 00][RNTI 2ac7] [RAPROC] Logical Channel DL-CCCH, Generating LTE_RRCConnectionReestablishmentReject (bytes 1)
[MAC] Removing UE 0 from Primary CC_id 0 (rnti 2ac7)
[RRC] Removing UE c48f instance, because of uplink failure timer timeout

Regards,
Surabhi

From: openair5g-user-request at lists.eurecom.fr On Behalf Of Kumar Prasad, Vikash
Sent: Friday, June 11, 2021 1:19 PM
To: openair5g-user at lists.eurecom.fr; openstack-discuss at lists.openstack.org
Subject: How to flush UE context from USRP B210

Dear All,
I am using USRP B210 for my eNB. USRP B210 is storing the context of previous connected UEs, I want to flush the UE context from USRP, could anyone suggest me how I can flush this previously connected UEs contexts?

Thanks
Vikash kumar prasad
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From bkslash at poczta.onet.pl Fri Jun 11 15:10:25 2021
From: bkslash at poczta.onet.pl (bkslash)
Date: Fri, 11 Jun 2021 17:10:25 +0200
Subject: [masakari] Compute service with name XXXXX not found.
In-Reply-To: 
References: 
Message-ID: <212EC217-274F-44B4-829B-D4C0D2F949FF@poczta.onet.pl>

> openstack compute service list --service nova-compute --host $HOSTNAME
> did you try including the same hostname in this command?
yes, and it returns the same as "openstack compute service list" but of course only for host XXXXX

> If it works and Masakari does not, I would make sure you set up
> Masakari to speak to the right Nova API.
I'm using kolla-ansible, all masakari configuration was generated based on globals.yaml and inventory file while deployment, so it should work almost "out of the box". Does masakari speak to nova via RabbitMQ? How else can I check which port/IP masakari speaks to? In logs I can only see requests TO masakari API, not where masakari tries to check hypervisor...

> Though do that with debug=True in the config [DEFAULT] section.
not much in logs, even with debug enabled....

2021-06-11 14:45:49.111 959 DEBUG masakari.compute.nova [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] Creating a Nova client using "nova" user novaclient /var/lib/kolla/venv/lib/python3.8/site-packages/masakari/compute/nova.py:102
2021-06-11 14:45:49.232 959 INFO masakari.compute.nova [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] Call compute service find command to get list of matching hypervisor name 'XXXXX'
2021-06-11 14:45:49.829 959 INFO masakari.api.openstack.wsgi [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] HTTP exception thrown: Compute service with name XXXXX could not be found.
2021-06-11 14:45:49.831 959 DEBUG masakari.api.openstack.wsgi [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] Returning 400 to user: Compute service with name XXXXX could not be found. __call__ /var/lib/kolla/venv/lib/python3.8/site-packages/masakari/api/openstack/wsgi.py:1038 > On 11 Jun 2021, at 14:56, Radosław Piliszek wrote: > > On Fri, Jun 11, 2021 at 2:47 PM at wrote: >> >> Hi, thx for the answer. >>> openstack compute service list --service nova-compute --host $HOSTNAME >> so in >> >> openstack segment host create >> >> I should use name which is displayed in "Host" column, right? So that's what I do :( > > Yes. > >> openstack compute service list --service nova-compute >> >> ID Binary Host Zone Status State >> 20 nova-compute XXXXX nova enabled up >> >> openstack segment host create XXXXX COMPUTE SSH 00dd5bxxxxxx >> >> and still "Compute service with name XXXXX could not be found"..... >> >> How masakari discovers hosts? > > I wrote this already: > openstack compute service list --service nova-compute --host $HOSTNAME > did you try including the same hostname in this command? > If it works and Masakari does not, I would make sure you set up > Masakari to speak to the right Nova API. > > Finally, if all else fails, please paste (e.g. > https://paste.ubuntu.com/ ) masakari api logs for those rejected host > creations. > Though do that with debug=True in the config [DEFAULT] section. > > -yoctozepto From pierre at stackhpc.com Fri Jun 11 15:12:21 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 11 Jun 2021 17:12:21 +0200 Subject: kayobe deploying Openstack - Train or Victoria In-Reply-To: References: Message-ID: Hi Tony, This isn't a bad place to ask questions if you like email discussion. Just make sure to prefix your email subject lines with [kolla]. We are also on IRC (#openstack-kolla on the OFTC network). I will reply to your questions inline. On Fri, 11 Jun 2021 at 16:49, Tony Pearce wrote: > > I am running into issues with deploying Openstack using Kayobe. Is this list group the best place to raise? If not, my apologies - please could you share where I need to go? > > 1. kayobe hangs during deployment, does not time out, does not error out when has previously been successful and without configuration changes. > > 2. deployment fails due to breaking the network relating to the bridge. Also changes login password which locks out of console. > > Details: > 1. > Environment deployment: Train > all-in-one host: centos7 > Ansible control host: Ubuntu 18 > > With the first issue, I've seen this multiple times but have not been able to find the root cause. I searched online and came across other ansible users that state their playbooks were hanging and were able to resolve by clearing cache. I've tried clearing out ~/.ansible/* and /tmp/* on the control host. Also tried doing the same on the all-in-one host without success. This issue came about after doing a full destroy of the environment and then redeploying, making a minor config change and then redeploying. We would need more information to help you. What does your configuration look like, in particular the network and this bridge you mention? What command did you run and at which step does Kayobe hang? Are you able to SSH to your hosts successfully? > 2. > Environment deployment: Victoria > all-in-one host: centos8 > Ansible control host: Ubuntu 20 > > Because I couldnt resolve the issue above and centos 8.4 is available, I decided to try and go to centos 8 and deploy Victoria. I hit 2 issues: > > 1. 
I am unable to login with "kayobe_ansible_user" using after "kayobe overcloud host configure" with "wrong password" message. Resetting the password resolves but the password seems changed again with a host configure. Kayobe doesn't set a password for the "stack" user, you should use SSH keys to connect. > 2. deployment fails when "TASK [openvswitch : Ensuring OVS bridge is properly setup]" > Looking at the ovs container, it's unable to add the physical interface to the bridge bond0 interface, complaining that the device is busy or is already up. I saw some log messages relating to ipv6 so I tried disabling ipv6 and redeploying but the same issue. > I then rebuilt the host again and opted not to use a bond0 interface however the same loss of network occurs. If I log into the openvswitch_db container then there are CLI commands where I can delete the bridge to restore the network. > > So at this point, after turning off ipv6 again and running without bond0 bond, I tried another deploy wile tailing `/var/log/kolla/openvswitch/ovs-vswitchd.log` and now I do not see any errors but the network is lost to the host and the script fails to finish deployment. I've attached the logs that appear at the point the network dies: https://pasteboard.co/K65qwzR.png Again it would be good to see your network configuration. Do you have a single interface without any VLAN tagging that you are trying to use with Kayobe? From tkajinam at redhat.com Fri Jun 11 15:46:04 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Sat, 12 Jun 2021 00:46:04 +0900 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: On Fri, Jun 11, 2021 at 8:48 PM Oliver Walsh wrote: > Hi Takashi, > > On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami > wrote: > >> Hi All, >> >> >> I've been working on bug 1926693[1], and am lost about the reasonable >> solutions we expect. Ideally I'd need to bring this topic in the team >> meeting >> but because of the timezone gap and complicated background, I'd like to >> gather some feedback in ml first. >> >> [1] https://bugs.launchpad.net/neutron/+bug/1926693 >> >> TL;DR >> Which one(or ones) would be reasonable solutions for this issue ? >> (1) https://review.opendev.org/c/openstack/neutron/+/763563 >> (2) https://review.opendev.org/c/openstack/neutron/+/788893 >> (3) Implement something different >> >> The issue I reported in the bug is that there is an inconsistency between >> nova and neutron about the way to determine a hypervisor name. >> Currently neutron uses socket.gethostname() (which always returns >> shortname) >> > > socket.gethostname() can return fqdn or shortname - > https://docs.python.org/3/library/socket.html#socket.gethostname. > You are correct and my statement was not accurate. So socket.gethostname() returns what is returned by gethostname system call, and gethostname/sethostname accept both FQDN and short name, socket.gethostname() can return one of FQDN or short name. However the root problem is that this logic is not completely same as the ones used in each virt driver. Of cause we can require people the "correct" format usage for canonical name as well as "hostname", but fixthing this problem in neutron would be much more helpful considering the effect caused by enforcing users to "fix" hostname/canonical name formatting at this point. > I've seen cases where it switched from short to fqdn but I'm not sure of > the root cause - DHCP lease setting a hostname/domainname perhaps. 
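For anyone who wants to see the mismatch on their own nodes, a quick comparison on a compute host makes it visible. This is only a sketch: it assumes the libvirt driver, that virsh is usable on the host, and that the osc-placement plugin is installed for the last command.

~~~
# short name and FQDN as Python resolves them (neutron's default comes from gethostname)
$ python3 -c 'import socket; print(socket.gethostname()); print(socket.getfqdn())'
# canonical name reported by libvirt (what the nova libvirt driver registers)
$ virsh hostname
# resource provider names actually registered in placement, for comparison
$ openstack resource provider list -c name -f value
~~~

If the second and third outputs show the FQDN while the first shows the short name, the lookup described above cannot match, which is exactly the inconsistency being discussed.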
> > Thanks, > Ollie > > to determine a hypervisor name to search the corresponding resource >> provider. >> On the other hand, nova uses libvirt's getHostname function (if libvirt >> driver is used) >> which returns a canonical name. Canonical name can be shortname or FQDN >> (*1) >> and if FQDN is used then neutron and nova never agree. >> >> (*1) >> IMO this is likely to happen in real deployments. For example, TripelO >> uses >> FQDN for canonical names. >> > >> Neutron already provides the resource_provider_defauly_hypervisors option >> to override a hypervisor name used. However because this option accepts >> a map between interface and hypervisor, setting this parameter requires >> very redundant description especially when a compute node has multiple >> interfaces/bridges. The following example shows how redundant the current >> requirement is. >> ~~~ >> [OVS] >> resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ >> br-data3:1024,1024,br-data4,1024:1024 >> resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ >> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain >> ~~~ >> >> I've submitted a change to propose a new single parameter to override >> the base hypervisor name but this is currently -2ed, mainly because >> I lacked analysis about the root cause of mismatch when I proposed this. >> (1) https://review.opendev.org/c/openstack/neutron/+/763563 >> >> >> On the other hand, I submitted a different change to neutron which >> implements >> the logic to get a hypervisor name which is fully compatible with libvirt. >> While this would save users from even overriding hypervisor names, I'm >> aware >> that this might break the other virt driver which depends on a different >> logic >> to generate a hypervisor name. IMO the patch is still useful considering >> the libvirt driver would be the most popular option now, but I'm not fully >> aware of the impact on the other drivers, especially because I don't know >> which virt driver would support the minimum QoS feature now. >> (2) https://review.opendev.org/c/openstack/neutron/+/788893/ >> >> >> In the review of (2), Sean mentioned implementing a logic to determine >> an appropriate resource provider(3) even if there is a mismatch about >> host name format, but I'm not sure how I would implement that, tbh. >> >> >> My current thought is to merge (1) as a quick solution first, and discuss >> whether >> we should merge (2), but I'd like to ask for some feedback about this plan >> (like we should NOT merge (2)). >> >> I'd appreciate your thoughts about this $topic. >> >> Thank you, >> Takashi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Fri Jun 11 16:23:43 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 11 Jun 2021 16:23:43 +0000 Subject: [ops] Automatically recover guests from down host Message-ID: <0670B960225633449A24709C291A5252511B1193@COM01.performair.local> All; What is the most effective means of having the OpenStack cluster restart guests when a hypervisor host fails? We're running OpenStack Victoria, installed manually through packages. My apologies, but my Google foo fails me on this issue; I don't know how to ask it the question. I recognize that OpenStack covers a great many different deployment scenarios, and in many of these this isn't feasible. 
In our case, images, volumes, and ephemeral storage are all on our Ceph cluster, so all storage is always available to all hypervisor hosts. I also recognize that resource restrictions mean that even in an environment such as mine, not all failed guests may be able to be restarted on new hosts. I'm ok with a dumb best effort, at least for now. Is there something already present in OpenStack which would allow this? Thank you, Dominic L. Hilsbos, MBA Vice President - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From syedammad83 at gmail.com Fri Jun 11 18:10:04 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Fri, 11 Jun 2021 23:10:04 +0500 Subject: [ops] Automatically recover guests from down host In-Reply-To: <0670B960225633449A24709C291A5252511B1193@COM01.performair.local> References: <0670B960225633449A24709C291A5252511B1193@COM01.performair.local> Message-ID: Hi, There is an option in nova to evacuate host. Triggering this will rebuild all the vms running on failed host to be scheduled on other host or reserved host. You can also try Openstack Masakri that is the instance HA service for openstack. Ammad On Fri, Jun 11, 2021 at 9:28 PM wrote: > All; > > What is the most effective means of having the OpenStack cluster restart > guests when a hypervisor host fails? We're running OpenStack Victoria, > installed manually through packages. > > My apologies, but my Google foo fails me on this issue; I don't know how > to ask it the question. > > I recognize that OpenStack covers a great many different deployment > scenarios, and in many of these this isn't feasible. In our case, images, > volumes, and ephemeral storage are all on our Ceph cluster, so all storage > is always available to all hypervisor hosts. > > I also recognize that resource restrictions mean that even in an > environment such as mine, not all failed guests may be able to be restarted > on new hosts. I'm ok with a dumb best effort, at least for now. > > Is there something already present in OpenStack which would allow this? > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Jun 11 22:44:48 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 11 Jun 2021 17:44:48 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 11th June, 21: Reading: 5 min Message-ID: <179fd3fcf2b.c8bcaf5d574870.5064027474273690788@ghanshyammann.com> Hello Everyone, Here is last week's summary of the Technical Committee activities. 1. What we completed this week: ========================= * Added 'vulnerability:managed' tag for os-brick[1]. * Replaced the ATC (Active Technical Contributors) terminology with AC (Active Contributors). ** TC resolution is also merged [2]. 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - https://meetings.opendev.org/meetings/tc/2021/tc.2021-06-10-15.00.log.html * We will have next week's meeting on June 17th, Thursday 15:00 UTC[3]. 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ TC is using the etherpad[4] for Xena cycle working item. We will be checking and updating the status biweekly in the same etherpad. 
Open Reviews ----------------- * One open review for ongoing activities[5]. Migration from Freenode to OFTC ----------------------------------------- * All the required work for this migration is tracked in this etherpad[6] * Today we are changing the Freenode channels Topic about this migration. * We are in 'Communicate with community' work where all projects need to update all contributor doc etc. Please finish this in your project and mark the progress in etherpad[6]. * This migration has been published in Open Infra newsletter OpenStack's news also[8]. 'Y' release naming process ------------------------------- * Y release naming nomination is closed now. I have started the CIVS poll with TC members as the electorate. Retiring devstack-gate -------------------------- * As communicated over email, we are finally retiring the devstack-gate. It will keep supporting the stable branch until stable/wallaby goes to EOL[9]. The governance patch for officially retirement is also up[10] Updating project-team-guide for the meeting channel preference ---------------------------------------------------------------------------- * As communicated over email during migration to OFTC, we are adding the meeting channel preference to project own channel in project team guide[11] Test support for TLS default: ---------------------------------- Rico has started a separate email thread over testing with tls-proxy enabled[12], we encourage projects to participate in that testing and help to enable the tls-proxy in gate testing. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[13]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [14] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [15] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://review.opendev.org/c/openstack/governance/+/794680 [2] https://review.opendev.org/c/openstack/governance/+/794366 [3] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [4] https://etherpad.opendev.org/p/tc-xena-tracker [5] https://review.opendev.org/q/project:openstack/governance+status:open [6] https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc [7] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022780.html [8] https://superuser.openstack.org/articles/inside-open-infrastructure-the-latest-from-the-openinfra-foundation-4/ [9] https://review.opendev.org/q/topic:%22deprecate-devstack-gate%22+(status:open%20OR%20status:merged) [10] https://review.opendev.org/c/openstack/governance/+/795385 [11] https://review.opendev.org/c/openstack/project-team-guide/+/794839 [12] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html [13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [15] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From whayutin at redhat.com Sun Jun 13 23:24:52 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Sun, 13 Jun 2021 17:24:52 -0600 Subject: master jobs down Message-ID: Greetings, Having some issues w/ infra... https://bugs.launchpad.net/tripleo/+bug/1931821 Details are in the bug.. this will block upstream master jobs. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Sun Jun 13 23:29:05 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Sun, 13 Jun 2021 17:29:05 -0600 Subject: master jobs down In-Reply-To: References: Message-ID: sorry.. this was meant for [tripleo] and "infra" is not upstream openstack infra.. this is rdo infra. On Sun, Jun 13, 2021 at 5:24 PM Wesley Hayutin wrote: > Greetings, > > Having some issues w/ infra... > https://bugs.launchpad.net/tripleo/+bug/1931821 > > Details are in the bug.. this will block upstream master jobs. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Sun Jun 13 23:29:33 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Sun, 13 Jun 2021 17:29:33 -0600 Subject: [tripleo] master jobs down Message-ID: Having some issues w/ infra... https://bugs.launchpad.net/tripleo/+bug/1931821 Details are in the bug.. this will block upstream master jobs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Jun 13 23:30:20 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 13 Jun 2021 23:30:20 +0000 Subject: [tripleo] master jobs down In-Reply-To: References: Message-ID: <20210613233019.3axtibwd34iz4jk7@yuggoth.org> On 2021-06-13 17:24:52 -0600 (-0600), Wesley Hayutin wrote: > Greetings, > > Having some issues w/ infra... > https://bugs.launchpad.net/tripleo/+bug/1931821 > > Details are in the bug.. this will block upstream master jobs. I've added a tripleo subject tag in my reply, since it caught my attention and I thought you were saying there was a new problem I hadn't heard about yet in the OpenDev CI infrastructure. Or was this intended for a different mailing list? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Sun Jun 13 23:34:02 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 13 Jun 2021 23:34:02 +0000 Subject: [tripleo] master jobs down In-Reply-To: <20210613233019.3axtibwd34iz4jk7@yuggoth.org> References: <20210613233019.3axtibwd34iz4jk7@yuggoth.org> Message-ID: <20210613233402.j42lbj2kj4kfp75a@yuggoth.org> On 2021-06-13 23:30:20 +0000 (+0000), Jeremy Stanley wrote: > On 2021-06-13 17:24:52 -0600 (-0600), Wesley Hayutin wrote: > > Greetings, > > > > Having some issues w/ infra... > > https://bugs.launchpad.net/tripleo/+bug/1931821 > > > > Details are in the bug.. this will block upstream master jobs. > > I've added a tripleo subject tag in my reply, since it caught my > attention and I thought you were saying there was a new problem I > hadn't heard about yet in the OpenDev CI infrastructure. > > Or was this intended for a different mailing list? Nevermind, I see you sent some follow-up clarification while my question was crossing the electron seas. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tonyppe at gmail.com Mon Jun 14 06:20:09 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 14 Jun 2021 14:20:09 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Pierre, thanks for replying to my message. 
To install kayobe I followed the documentation which summarise: installing a few system packages and setting up the kayobe virtual environment and then pulling the correct kayobe git version for the openstack to be installed. After configuring the yaml files I have run these commands: - kayobe control host bootstrap - kayobe overcloud host configure -> this one is failing with /usr/libexec/platform-python: not found After reading your message on the weekend I concluded that maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git and manually transferred the configuration over to the new directory structure on the ansible host and set up again as per the guide but the same issue is seen. What I ended up doing to try and resolve was finding where this "platform-python" is coming from. It is coming from the virtual environment which is being set up during the kayobe ansible host bootstrap. Initially, I found the base.yml and it looks like it tries to match what the host is. I noticed that there is no ubuntu 20 listed there so I created it however it did not resolve the issue. So then I tried systematically replacing this reference in the other files found in the same location "venvs\kayobe\share\kayobe\ansible". The file I changed which allowed it to progress is "kayobe-target-venv.yml" But unfortunately it fails a bit further on, failing to find an selinux package [1] Seeing as the error is mentioning selinux (a RedHat security feature not installed on ubuntu) could the root cause issue be that kayobe is not matching the host as ubuntu? I did already set in kayobe that I am using ubuntu OS distribution within globals.yml [2]. Are there any extra steps that I need to complete that maybe are not listed in the documentation / guide? [1] TASK [MichaelRigart.interfaces : Debian | install current/latest network package - Pastebin.com [2] ---# Kayobe global configuration.######################################### - Pastebin.com Regards, Tony Pearce On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote: > Hi Tony, > > Kayobe doesn't use platform-python anymore, on both stable/wallaby and > stable/victoria: > https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 > > Can you double-check what version you are using, and share how you > installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. > > Best wishes, > Pierre > > On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: > > > > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 > machine to deploy Wallaby. I'm getting an error that python is not found > during the host configure part. > > > > PLAY [Verify that the Kayobe Ansible user account is accessible] > > TASK [Verify that a command can be executed] > > > > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": > "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": > "", "msg": "The module failed to execute correctly, you probably need to > set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} > > > > Python3 is installed on the host. 
When searching where this > platform-python is coming from it returns the kolla-ansible virtual envs: > > > > $ grep -rni -e "platform-python" > > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: > '8': /usr/libexec/platform-python > > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: > - /usr/libexec/platform-python > > > > I had a look through the deployment guide for Kayobe Wallaby and didnt > see a note about changing this. > > > > Do I need to do further steps to support the ubuntu overcloud host? I > have already set (as per the doc): > > > > os_distribution: ubuntu > > os_release: focal > > > > Regards, > > > > Tony Pearce > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon Jun 14 06:59:45 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 14 Jun 2021 08:59:45 +0200 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: Hello: I'll approve [1] although I see no need for it. Having "resource_provider_hypervisors", there is no need for a second configuration parameter to provide the same information, regardless of the comfort of providing one single string and not a list of tuples. Regards. [1]https://review.opendev.org/c/openstack/neutron/+/763563 On Fri, Jun 11, 2021 at 5:51 PM Takashi Kajinami wrote: > On Fri, Jun 11, 2021 at 8:48 PM Oliver Walsh wrote: > >> Hi Takashi, >> >> On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami >> wrote: >> >>> Hi All, >>> >>> >>> I've been working on bug 1926693[1], and am lost about the reasonable >>> solutions we expect. Ideally I'd need to bring this topic in the team >>> meeting >>> but because of the timezone gap and complicated background, I'd like to >>> gather some feedback in ml first. >>> >>> [1] https://bugs.launchpad.net/neutron/+bug/1926693 >>> >>> TL;DR >>> Which one(or ones) would be reasonable solutions for this issue ? >>> (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>> (2) https://review.opendev.org/c/openstack/neutron/+/788893 >>> (3) Implement something different >>> >>> The issue I reported in the bug is that there is an inconsistency between >>> nova and neutron about the way to determine a hypervisor name. >>> Currently neutron uses socket.gethostname() (which always returns >>> shortname) >>> >> >> socket.gethostname() can return fqdn or shortname - >> https://docs.python.org/3/library/socket.html#socket.gethostname. >> > You are correct and my statement was not accurate. > So socket.gethostname() returns what is returned by gethostname system > call, > and gethostname/sethostname accept both FQDN and short name, > socket.gethostname() > can return one of FQDN or short name. > > However the root problem is that this logic is not completely same as the > ones used > in each virt driver. Of cause we can require people the "correct" format > usage for > canonical name as well as "hostname", but fixthing this problem in neutron > would > be much more helpful considering the effect caused by enforcing users to > "fix" > hostname/canonical name formatting at this point. > > >> I've seen cases where it switched from short to fqdn but I'm not sure of >> the root cause - DHCP lease setting a hostname/domainname perhaps. >> >> Thanks, >> Ollie >> >> to determine a hypervisor name to search the corresponding resource >>> provider. 
>>> On the other hand, nova uses libvirt's getHostname function (if libvirt >>> driver is used) >>> which returns a canonical name. Canonical name can be shortname or FQDN >>> (*1) >>> and if FQDN is used then neutron and nova never agree. >>> >>> (*1) >>> IMO this is likely to happen in real deployments. For example, TripelO >>> uses >>> FQDN for canonical names. >>> >> >>> Neutron already provides the resource_provider_defauly_hypervisors option >>> to override a hypervisor name used. However because this option accepts >>> a map between interface and hypervisor, setting this parameter requires >>> very redundant description especially when a compute node has multiple >>> interfaces/bridges. The following example shows how redundant the current >>> requirement is. >>> ~~~ >>> [OVS] >>> resource_provider_bandwidths=br-data1:1024:1024,br-data2:1024:1024,\ >>> br-data3:1024,1024,br-data4,1024:1024 >>> resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ >>> compute0.mydomain,br-data3:compute0.mydomain,br-data4:compute0.mydomain >>> ~~~ >>> >>> I've submitted a change to propose a new single parameter to override >>> the base hypervisor name but this is currently -2ed, mainly because >>> I lacked analysis about the root cause of mismatch when I proposed this. >>> (1) https://review.opendev.org/c/openstack/neutron/+/763563 >>> >>> >>> On the other hand, I submitted a different change to neutron which >>> implements >>> the logic to get a hypervisor name which is fully compatible with >>> libvirt. >>> While this would save users from even overriding hypervisor names, I'm >>> aware >>> that this might break the other virt driver which depends on a different >>> logic >>> to generate a hypervisor name. IMO the patch is still useful considering >>> the libvirt driver would be the most popular option now, but I'm not >>> fully >>> aware of the impact on the other drivers, especially because I don't know >>> which virt driver would support the minimum QoS feature now. >>> (2) https://review.opendev.org/c/openstack/neutron/+/788893/ >>> >>> >>> In the review of (2), Sean mentioned implementing a logic to determine >>> an appropriate resource provider(3) even if there is a mismatch about >>> host name format, but I'm not sure how I would implement that, tbh. >>> >>> >>> My current thought is to merge (1) as a quick solution first, and >>> discuss whether >>> we should merge (2), but I'd like to ask for some feedback about this >>> plan >>> (like we should NOT merge (2)). >>> >>> I'd appreciate your thoughts about this $topic. >>> >>> Thank you, >>> Takashi >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Jun 14 07:24:42 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 14 Jun 2021 09:24:42 +0200 Subject: [ops] Automatically recover guests from down host In-Reply-To: References: <0670B960225633449A24709C291A5252511B1193@COM01.performair.local> Message-ID: On Fri, Jun 11, 2021 at 8:13 PM Ammad Syed wrote: > > Hi, > > There is an option in nova to evacuate host. Triggering this will rebuild all the vms running on failed host to be scheduled on other host or reserved host. > > You can also try Openstack Masakri that is the instance HA service for openstack. 
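For reference, the manual flavour of that suggestion is driven per failed host. A minimal sketch, assuming admin credentials, the legacy novaclient CLI, and that the hypervisor (named failed-host here purely for illustration) has already been confirmed down or fenced:

~~~
# keep the scheduler from placing anything new on the dead hypervisor
$ openstack compute service set --disable failed-host nova-compute
# rebuild its instances on other hosts; with shared Ceph storage the existing disks are reused
$ nova host-evacuate failed-host
~~~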
Just following up on this: the project is called Masakari and has docs in: https://docs.openstack.org/masakari/latest/ The team can be reached on OFTC ( https://www.oftc.net/ ) at #openstack-masakari or via this mailing list with [masakari] tag in subject. > Ammad > > On Fri, Jun 11, 2021 at 9:28 PM wrote: >> >> All; >> >> What is the most effective means of having the OpenStack cluster restart guests when a hypervisor host fails? We're running OpenStack Victoria, installed manually through packages. >> >> My apologies, but my Google foo fails me on this issue; I don't know how to ask it the question. >> >> I recognize that OpenStack covers a great many different deployment scenarios, and in many of these this isn't feasible. In our case, images, volumes, and ephemeral storage are all on our Ceph cluster, so all storage is always available to all hypervisor hosts. >> >> I also recognize that resource restrictions mean that even in an environment such as mine, not all failed guests may be able to be restarted on new hosts. I'm ok with a dumb best effort, at least for now. >> >> Is there something already present in OpenStack which would allow this? One of the goals Masakari has is to introduce a system of recovery prioritisation to go beyond the "dumb best effort" mentioned. For now it's pretty simple but matches your requirements. -yoctozepto From radoslaw.piliszek at gmail.com Mon Jun 14 07:35:07 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 14 Jun 2021 09:35:07 +0200 Subject: [masakari] Compute service with name XXXXX not found. In-Reply-To: <212EC217-274F-44B4-829B-D4C0D2F949FF@poczta.onet.pl> References: <212EC217-274F-44B4-829B-D4C0D2F949FF@poczta.onet.pl> Message-ID: The line > 2021-06-11 14:45:49.829 959 INFO masakari.api.openstack.wsgi [req-e9a58522-858d-4025-9c43-f9fee744a0db nova - - - -] HTTP exception thrown: Compute service with name XXXXX could not be found. suggests that nova actively disagrees that this compute node actually exists. As for the exercised behaviour: this is tested both in Masakari and Kolla Ansible CIs, and it works. I am afraid the answer to why this fails lies in the format of that hidden XXXXX. At the moment, I can't really think of how that could affect the outcome. Is XXXXX 100% the same between the different logs? If you can't somehow share how XXXXX looks, then you might want to check the Nova API logs (again, debug=True might help a bit) and compare between how the openstack client query works vs how the Masakari query works. Perhaps, there is a glitch at some stage that causes the XXXXX to get garbled. You can also append --debug to the openstack commands to get the client side of the conversation. On Fri, Jun 11, 2021 at 5:10 PM bkslash wrote: > > > openstack compute service list --service nova-compute --host $HOSTNAME > > did you try including the same hostname in this command? > yes, and it returns the same as "openstack compute service list" but of course only for host XXXXX > > > If it works and Masakari does not, I would make sure you set up > > Masakari to speak to the right Nova API. > I'm using kolla-ansible, all masakari configuration was generated based on globals.yaml and inventory file while deployment, so it should work almost "out of the box". Does masakari speak to nova via RabbitMQ? How else can I check which port/IP masakari speaks to? In logs I can only see requests TO masakari API, not where masakari tries to check hypervisor... Masakari speaks to Nova via Nova API only. 
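A quick way to see the exact strings that API returns (a sketch, assuming admin credentials; add --debug to any of these commands to watch the raw requests) is:

~~~
$ openstack compute service list --service nova-compute -f value -c Host
$ openstack hypervisor list -f value -c "Hypervisor Hostname"
# the name given to "openstack segment host create" must match the first list exactly
~~~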
If you used Kolla Ansible, then it's set up correctly unless you manually overrode that. By default, Masakari looks up the Nova endpoint from the Keystone catalogue. -yoctozepto From mark at stackhpc.com Mon Jun 14 08:10:31 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 14 Jun 2021 09:10:31 +0100 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: > > Hi Pierre, thanks for replying to my message. > > To install kayobe I followed the documentation which summarise: installing a few system packages and setting up the kayobe virtual environment and then pulling the correct kayobe git version for the openstack to be installed. After configuring the yaml files I have run these commands: > > - kayobe control host bootstrap > - kayobe overcloud host configure -> this one is failing with /usr/libexec/platform-python: not found > > After reading your message on the weekend I concluded that maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git and manually transferred the configuration over to the new directory structure on the ansible host and set up again as per the guide but the same issue is seen. > > What I ended up doing to try and resolve was finding where this "platform-python" is coming from. It is coming from the virtual environment which is being set up during the kayobe ansible host bootstrap. Initially, I found the base.yml and it looks like it tries to match what the host is. I noticed that there is no ubuntu 20 listed there so I created it however it did not resolve the issue. > > So then I tried systematically replacing this reference in the other files found in the same location "venvs\kayobe\share\kayobe\ansible". The file I changed which allowed it to progress is "kayobe-target-venv.yml" > > But unfortunately it fails a bit further on, failing to find an selinux package [1] > > Seeing as the error is mentioning selinux (a RedHat security feature not installed on ubuntu) could the root cause issue be that kayobe is not matching the host as ubuntu? I did already set in kayobe that I am using ubuntu OS distribution within globals.yml [2]. > > Are there any extra steps that I need to complete that maybe are not listed in the documentation / guide? > > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest network package - Pastebin.com > [2] ---# Kayobe global configuration.######################################### - Pastebin.com Hi Tony, That's definitely not a recent Wallaby checkout you're using. Ubuntu no longer uses that MichaelRigart.interfaces role. Check that you have recent commits. Here is the most recent on stable/wallaby: 13169077aaec0f7a28ae1f15b419dafc2456faf7. Mark > > Regards, > > Tony Pearce > > > > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote: >> >> Hi Tony, >> >> Kayobe doesn't use platform-python anymore, on both stable/wallaby and >> stable/victoria: >> https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 >> >> Can you double-check what version you are using, and share how you >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. >> >> Best wishes, >> Pierre >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: >> > >> > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is not found during the host configure part. 
>> > >> > PLAY [Verify that the Kayobe Ansible user account is accessible] >> > TASK [Verify that a command can be executed] >> > >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} >> > >> > Python3 is installed on the host. When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: >> > >> > $ grep -rni -e "platform-python" >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python >> > >> > I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. >> > >> > Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc): >> > >> > os_distribution: ubuntu >> > os_release: focal >> > >> > Regards, >> > >> > Tony Pearce >> > From tonyppe at gmail.com Mon Jun 14 08:40:40 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 14 Jun 2021 16:40:40 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Mark, I followed this guide to do a "git clone" specifying the branch "-b" to "stable/wallaby" [1]. What additional steps do I need to do to get the latest commits? [1] OpenStack Docs: Overcloud Kind regards, Tony Pearce On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: > On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: > > > > Hi Pierre, thanks for replying to my message. > > > > To install kayobe I followed the documentation which summarise: > installing a few system packages and setting up the kayobe virtual > environment and then pulling the correct kayobe git version for the > openstack to be installed. After configuring the yaml files I have run > these commands: > > > > - kayobe control host bootstrap > > - kayobe overcloud host configure -> this one is failing with > /usr/libexec/platform-python: not found > > > > After reading your message on the weekend I concluded that maybe I had > done something wrong. Today, I re-pulled the kayobe wallaby git and > manually transferred the configuration over to the new directory structure > on the ansible host and set up again as per the guide but the same issue is > seen. > > > > What I ended up doing to try and resolve was finding where this > "platform-python" is coming from. It is coming from the virtual environment > which is being set up during the kayobe ansible host bootstrap. Initially, > I found the base.yml and it looks like it tries to match what the host is. > I noticed that there is no ubuntu 20 listed there so I created it however > it did not resolve the issue. > > > > So then I tried systematically replacing this reference in the other > files found in the same location "venvs\kayobe\share\kayobe\ansible". The > file I changed which allowed it to progress is "kayobe-target-venv.yml" > > > > But unfortunately it fails a bit further on, failing to find an selinux > package [1] > > > > Seeing as the error is mentioning selinux (a RedHat security feature not > installed on ubuntu) could the root cause issue be that kayobe is not > matching the host as ubuntu? 
I did already set in kayobe that I am using > ubuntu OS distribution within globals.yml [2]. > > > > Are there any extra steps that I need to complete that maybe are not > listed in the documentation / guide? > > > > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest > network package - Pastebin.com > > [2] ---# Kayobe global > configuration.######################################### - Pastebin.com > > Hi Tony, > > That's definitely not a recent Wallaby checkout you're using. Ubuntu > no longer uses that MichaelRigart.interfaces role. Check that you have > recent commits. Here is the most recent on stable/wallaby: > 13169077aaec0f7a28ae1f15b419dafc2456faf7. > > Mark > > > > > Regards, > > > > Tony Pearce > > > > > > > > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote: > >> > >> Hi Tony, > >> > >> Kayobe doesn't use platform-python anymore, on both stable/wallaby and > >> stable/victoria: > >> https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 > >> > >> Can you double-check what version you are using, and share how you > >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. > >> > >> Best wishes, > >> Pierre > >> > >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: > >> > > >> > I'm trying to run "kayobe overcloud host configure" against an ubuntu > 20 machine to deploy Wallaby. I'm getting an error that python is not found > during the host configure part. > >> > > >> > PLAY [Verify that the Kayobe Ansible user account is accessible] > >> > TASK [Verify that a command can be executed] > >> > > >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": > "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": > "", "msg": "The module failed to execute correctly, you probably need to > set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} > >> > > >> > Python3 is installed on the host. When searching where this > platform-python is coming from it returns the kolla-ansible virtual envs: > >> > > >> > $ grep -rni -e "platform-python" > >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: > '8': /usr/libexec/platform-python > >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: > - /usr/libexec/platform-python > >> > > >> > I had a look through the deployment guide for Kayobe Wallaby and > didnt see a note about changing this. > >> > > >> > Do I need to do further steps to support the ubuntu overcloud host? I > have already set (as per the doc): > >> > > >> > os_distribution: ubuntu > >> > os_release: focal > >> > > >> > Regards, > >> > > >> > Tony Pearce > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Jun 14 10:30:32 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 14 Jun 2021 11:30:32 +0100 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: On Sat, 2021-06-12 at 00:46 +0900, Takashi Kajinami wrote: > On Fri, Jun 11, 2021 at 8:48 PM Oliver Walsh wrote: > > Hi Takashi, > > > > On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami > > wrote: > > > Hi All, > > > > > > > > > I've been working on bug 1926693[1], and am lost about the > > > reasonable > > > solutions we expect. Ideally I'd need to bring this topic in the > > > team meeting > > > but because of the timezone gap and complicated background, I'd > > > like to > > > gather some feedback in ml first. 
> > > [1] https://bugs.launchpad.net/neutron/+bug/1926693
> > >
> > > TL;DR
> > > Which one(or ones) would be reasonable solutions for this issue ?
> > >  (1) https://review.opendev.org/c/openstack/neutron/+/763563
> > >  (2) https://review.opendev.org/c/openstack/neutron/+/788893
> > >  (3) Implement something different
> > >
> > > The issue I reported in the bug is that there is an inconsistency between
> > > nova and neutron about the way to determine a hypervisor name.
> > > Currently neutron uses socket.gethostname() (which always returns
> > > shortname)
> >
> > socket.gethostname() can return fqdn or shortname -
> > https://docs.python.org/3/library/socket.html#socket.gethostname.
>
> You are correct and my statement was not accurate.
> So socket.gethostname() returns what is returned by gethostname system call,
> and gethostname/sethostname accept both FQDN and short name,
> socket.gethostname() can return one of FQDN or short name.
>
> However the root problem is that this logic is not completely same as the ones used
> in each virt driver. Of cause we can require people the "correct" format usage for
> canonical name as well as "hostname", but fixthing this problem in neutron would
> be much more helpful considering the effect caused by enforcing users to "fix"
> hostname/canonical name formatting at this point.

This is not really something that can be fixed in neutron. We can either create a common function in oslo.utils or placement-lib that can be used by nova, neutron and all other projects, or we can use the config option. If we want to "fix" this in neutron, then neutron should either try looking up the RP using the host name and then fall back to using the FQDN, or we should look at using the hypervisor API as we discussed a few years ago when this last came up:
http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html

I don't think neutron should know anything about hypervisors, so I would just proceed with the new config option that Takashi has proposed, but I would not implement Rodolfo's solution of adding a hypervisor_type. Just as nova has no awareness of the neutron backend and tries to treat all of them the same, neutron should remain hypervisor independent, and we should look to provide common code that can be reused to identify the RP in a separate lib as a longer-term solution.

For many deployments that do not set the FQDN as the canonical host name in /etc/hosts, the current default behaviour works out of the box. Whatever solution we take, we need to ensure that no existing deployment is affected by the change, which means we cannot default to only using the FQDN or similar, as that would be an upgrade breakage. So we have to maintain the current behaviour by default and enhance neutron to either fall back to the FQDN if the hostname-based lookup fails, or use the new config introduced by Takashi's patch where the FQDN is used as the server canonical hostname.

>
> > I've seen cases where it switched from short to fqdn but I'm not sure
> > of the root cause - DHCP lease setting a hostname/domainname perhaps.
> >
> > Thanks,
> > Ollie
> >
> > > to determine a hypervisor name to search the corresponding resource
> > > provider.
> > > On the other hand, nova uses libvirt's getHostname function (if
> > > libvirt driver is used)
> > > which returns a canonical name. Canonical name can be shortname or
> > > FQDN (*1)
> > > and if FQDN is used then neutron and nova never agree.
> > > > > > (*1) > > > IMO this is likely to happen in real deployments. For example, > > > TripelO uses > > > FQDN for canonical names.   > > > > > > > > > Neutron already provides the resource_provider_defauly_hypervisors > > > option > > > to override a hypervisor name used. However because this option > > > accepts > > > a map between interface and hypervisor, setting this parameter > > > requires > > > very redundant description especially when a compute node has > > > multiple > > > interfaces/bridges. The following example shows how redundant the > > > current > > > requirement is. > > > ~~~ > > > [OVS] > > > resource_provider_bandwidths=br-data1:1024:1024,br- > > > data2:1024:1024,\ > > > br-data3:1024,1024,br-data4,1024:1024 > > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > > > compute0.mydomain,br-data3:compute0.mydomain,br- > > > data4:compute0.mydomain > > > ~~~ > > > > > > I've submitted a change to propose a new single parameter to > > > override > > > the base hypervisor name but this is currently -2ed, mainly because > > > I lacked analysis about the root cause of mismatch when I proposed > > > this. > > >  (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > > > > > > On the other hand, I submitted a different change to neutron which > > > implements > > > the logic to get a hypervisor name which is fully compatible with > > > libvirt. > > > While this would save users from even overriding hypervisor names, > > > I'm aware > > > that this might break the other virt driver which depends on a > > > different logic > > > to generate a hypervisor name. IMO the patch is still useful > > > considering > > > the libvirt driver would be the most popular option now, but I'm > > > not fully > > > aware of the impact on the other drivers, especially because I > > > don't know > > > which virt driver would support the minimum QoS feature now. > > >  (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > > > > > > In the review of (2), Sean mentioned implementing a logic to > > > determine > > > an appropriate resource provider(3) even if there is a mismatch > > > about > > > host name format, but I'm not sure how I would implement that, tbh. > > > > > > > > > My current thought is to merge (1) as a quick solution first, and > > > discuss whether > > > we should merge (2), but I'd like to ask for some feedback about > > > this plan > > > (like we should NOT merge (2)). > > > > > > I'd appreciate your thoughts about this $topic. > > > > > > Thank you, > > > Takashi From mark at stackhpc.com Mon Jun 14 10:36:34 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 14 Jun 2021 11:36:34 +0100 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: On Mon, 14 Jun 2021 at 09:40, Tony Pearce wrote: > > Hi Mark, > > I followed this guide to do a "git clone" specifying the branch "-b" to "stable/wallaby" [1]. What additional steps do I need to do to get the latest commits? That should be sufficient. When you install it via pip, note that 'pip install kayobe' will still pull from PyPI, even if there is a local kayobe directory. Use ./kayobe, or 'pip install .' if in the same directory. Mark > > [1] OpenStack Docs: Overcloud > > Kind regards, > > Tony Pearce > > > On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: >> >> On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: >> > >> > Hi Pierre, thanks for replying to my message. 
>> > >> > To install kayobe I followed the documentation which summarise: installing a few system packages and setting up the kayobe virtual environment and then pulling the correct kayobe git version for the openstack to be installed. After configuring the yaml files I have run these commands: >> > >> > - kayobe control host bootstrap >> > - kayobe overcloud host configure -> this one is failing with /usr/libexec/platform-python: not found >> > >> > After reading your message on the weekend I concluded that maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git and manually transferred the configuration over to the new directory structure on the ansible host and set up again as per the guide but the same issue is seen. >> > >> > What I ended up doing to try and resolve was finding where this "platform-python" is coming from. It is coming from the virtual environment which is being set up during the kayobe ansible host bootstrap. Initially, I found the base.yml and it looks like it tries to match what the host is. I noticed that there is no ubuntu 20 listed there so I created it however it did not resolve the issue. >> > >> > So then I tried systematically replacing this reference in the other files found in the same location "venvs\kayobe\share\kayobe\ansible". The file I changed which allowed it to progress is "kayobe-target-venv.yml" >> > >> > But unfortunately it fails a bit further on, failing to find an selinux package [1] >> > >> > Seeing as the error is mentioning selinux (a RedHat security feature not installed on ubuntu) could the root cause issue be that kayobe is not matching the host as ubuntu? I did already set in kayobe that I am using ubuntu OS distribution within globals.yml [2]. >> > >> > Are there any extra steps that I need to complete that maybe are not listed in the documentation / guide? >> > >> > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest network package - Pastebin.com >> > [2] ---# Kayobe global configuration.######################################### - Pastebin.com >> >> Hi Tony, >> >> That's definitely not a recent Wallaby checkout you're using. Ubuntu >> no longer uses that MichaelRigart.interfaces role. Check that you have >> recent commits. Here is the most recent on stable/wallaby: >> 13169077aaec0f7a28ae1f15b419dafc2456faf7. >> >> Mark >> >> > >> > Regards, >> > >> > Tony Pearce >> > >> > >> > >> > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote: >> >> >> >> Hi Tony, >> >> >> >> Kayobe doesn't use platform-python anymore, on both stable/wallaby and >> >> stable/victoria: >> >> https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 >> >> >> >> Can you double-check what version you are using, and share how you >> >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. >> >> >> >> Best wishes, >> >> Pierre >> >> >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: >> >> > >> >> > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is not found during the host configure part. >> >> > >> >> > PLAY [Verify that the Kayobe Ansible user account is accessible] >> >> > TASK [Verify that a command can be executed] >> >> > >> >> > fatal: [juc-ucsb-5-p]: FAILED! 
=> {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} >> >> > >> >> > Python3 is installed on the host. When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: >> >> > >> >> > $ grep -rni -e "platform-python" >> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python >> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python >> >> > >> >> > I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. >> >> > >> >> > Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc): >> >> > >> >> > os_distribution: ubuntu >> >> > os_release: focal >> >> > >> >> > Regards, >> >> > >> >> > Tony Pearce >> >> > From hberaud at redhat.com Mon Jun 14 11:27:05 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 14 Jun 2021 13:27:05 +0200 Subject: [release] Release countdown for week R-16, Jun 14 - Jun 18 Message-ID: Development Focus ----------------- The Xena-2 milestone will happen next month, on 15 July, 2021. Xena-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/xena/schedule.html for details. General Information ------------------- Please remember that libraries need to be released at least once per milestone period. At milestone 2, the release team will propose releases for any library that has not been otherwise released since milestone 1. Other non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those who haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle. At milestone-2 we also freeze the contents of the final release. If you have a new deliverable that should be included in the final release, you should make sure it has a deliverable file in: https://opendev.org/openstack/releases/src/branch/master/deliverables/xena You should request a beta release (or intermediary release) for those new deliverables by milestone-2. We understand some may not be quite ready for a full release yet, but if you have something minimally viable to get released it would be good to do a 0.x release to exercise the release tooling for your deliverables. See the MembershipFreeze description for more details: https://releases.openstack.org/xena/schedule.html#x-mf Finally, now may be a good time for teams to check on any stable releases that need to be done for your deliverables. If you have bug fixes that have been backported, but no stable release getting those. If you are unsure what is out there committed but not released, in the openstack/releases repo, running the command "tools/list_stable_unreleased_changes.sh " gives a nice report. 
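For example, from a checkout of the openstack/releases repository, something
like the following (stable/wallaby is used purely as an illustration; pass
whichever branch you are interested in):

  git clone https://opendev.org/openstack/releases
  cd releases
  ./tools/list_stable_unreleased_changes.sh stable/wallaby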
Upcoming Deadlines & Dates -------------------------- Xena-2 Milestone: 15 July, 2021 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Mon Jun 14 11:50:55 2021 From: adivya1.singh at gmail.com (Adivya Singh) Date: Mon, 14 Jun 2021 17:20:55 +0530 Subject: Regaring Volume not getting attached Message-ID: Hello Team, I am facing a issue, where i am unable to attach a volume to Instances , The third party is using qnap storage, what i am seeing is whenever i am trying to attach, i can see the volume goes from reserved state to attached state and finally goes to receive state and give a error state, regarding HTTP 400, Regards Adivya Singh 9590986094 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sz_cuitao at 163.com Mon Jun 14 13:43:25 2021 From: sz_cuitao at 163.com (tommy) Date: Mon, 14 Jun 2021 21:43:25 +0800 Subject: Install v version failed on centos 8 Message-ID: <000001d76123$40768de0$c163a9a0$@163.com> Why?? 
[root at control ~]# packstack --answer-file=./openstack.ini Welcome to the Packstack setup utility The installation log file is available at: /var/tmp/packstack/20210614-203611-y8gez97l/openstack-setup.log Installing: Clean Up [ DONE ] Discovering ip protocol version [ DONE ] root at 192.168.10.32's password: root at 192.168.10.30's password: root at 192.168.10.30's password: root at 192.168.10.31's password: root at 192.168.10.33's password: Setting up ssh keys [ DONE ] Preparing servers [ DONE ] Pre installing Puppet and discovering hosts' details [ DONE ] Preparing pre-install entries [ DONE ] Setting up CACERT [ DONE ] Preparing AMQP entries [ DONE ] Preparing MariaDB entries [ DONE ] Fixing Keystone LDAP config parameters to be undef if empty[ DONE ] Preparing Keystone entries [ DONE ] Preparing Glance entries [ DONE ] Preparing Nova API entries [ DONE ] Creating ssh keys for Nova migration [ DONE ] Gathering ssh host keys for Nova migration [ DONE ] Preparing Nova Compute entries [ DONE ] Preparing Nova Scheduler entries [ DONE ] Preparing Nova VNC Proxy entries [ DONE ] Preparing OpenStack Network-related Nova entries [ DONE ] Preparing Nova Common entries [ DONE ] Preparing Neutron API entries [ DONE ] Preparing Neutron L3 entries [ DONE ] Preparing Neutron L2 Agent entries [ DONE ] Preparing Neutron DHCP Agent entries [ DONE ] Preparing Neutron Metering Agent entries [ DONE ] Checking if NetworkManager is enabled and running [ DONE ] Preparing OpenStack Client entries [ DONE ] Preparing Horizon entries [ DONE ] Preparing Swift builder entries [ DONE ] Preparing Swift proxy entries [ DONE ] Preparing Swift storage entries [ DONE ] Preparing Gnocchi entries [ DONE ] Preparing Redis entries [ DONE ] Preparing Ceilometer entries [ DONE ] Preparing Aodh entries [ DONE ] Preparing Puppet manifests [ DONE ] Copying Puppet modules and manifests [ DONE ] Applying 192.168.10.30_controller.pp 192.168.10.30_controller.pp: [ DONE ] Applying 192.168.10.32_network.pp Applying 192.168.10.30_network.pp Applying 192.168.10.31_network.pp Applying 192.168.10.33_network.pp 192.168.10.31_network.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.10.31_network.pp Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Missing title. The title expression resulted in undef (file: /var/tmp/packstack/a201c33193194a599c3654c79945e4a1/modules/ovn/manifests/co ntroller/port.pp, line: 11, column: 13) (file: /var/tmp/packstack/a201c33193194a599c3654c79945e4a1/modules/ovn/manifests/co ntroller.pp, line: 137) on node comps1 You will find full trace in log /var/tmp/packstack/20210614-203611-y8gez97l/manifests/192.168.10.31_network. pp.log Please check log file /var/tmp/packstack/20210614-203611-y8gez97l/openstack-setup.log for more information Additional information: * Parameter CONFIG_NEUTRON_L2_AGENT: You have chosen OVN Neutron backend. Note that this backend does not support the VPNaaS plugin. Geneve will be used as the encapsulation method for tenant networks * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components. * File /root/keystonerc_admin has been created on OpenStack client host 192.168.10.30. To use the command line tools you need to source the file. * To access the OpenStack Dashboard browse to http://192.168.10.30/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory. 
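For reference, the trace above points at ovn::controller::port receiving an
undef title, which usually traces back to the OVN bridge/interface answers
being empty or malformed on the failing network host. A minimal sketch of the
two answer-file entries usually involved (the bridge and NIC names below are
only examples, not taken from this deployment):

  CONFIG_NEUTRON_OVN_BRIDGE_MAPPINGS=extnet:br-ex
  CONFIG_NEUTRON_OVN_BRIDGE_IFACES=br-ex:eth1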
-------------- next part -------------- An HTML attachment was scrubbed... URL: From antonio.paulo at cern.ch Mon Jun 14 14:28:07 2021 From: antonio.paulo at cern.ch (=?UTF-8?Q?Ant=c3=b3nio_Paulo?=) Date: Mon, 14 Jun 2021 16:28:07 +0200 Subject: [nova] GPU VMs using MIG? Message-ID: <803dae06-8317-27f4-42ac-365f72ff31f4@cern.ch> Hi! Has anyone looked into instancing VMs with NVIDIA's Multi-Instance GPU (MIG) devices [1] without having to rely on vGPUs? Unfortunately, NVIDIA vGPUs lack tracing and profiling support that our users need. I could not find anything specific to MIG in the OpenStack docs but I was wondering if doing PCI passthrough [2] of MIG devices is an option that someone has seen or tested? Maybe some massaging to expose the MIG as a Linux device is required [3]? Cheers, António [1] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/ [2] https://docs.openstack.org/nova/pike/admin/pci-passthrough.html [3] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#device-nodes From rafaelweingartner at gmail.com Mon Jun 14 14:29:53 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 14 Jun 2021 11:29:53 -0300 Subject: [CLOUDKITTY] Missed CloudKitty meeting today Message-ID: Hello guys, I would like to apologize for missing the CloudKitty meeting today. I was concentrating on some work, and my alarm for the meeting did not ring. If you need something, just let me know. Sorry for the inconvenience; see you guys at our next meeting. -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon Jun 14 14:34:16 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 14 Jun 2021 08:34:16 -0600 Subject: [tripleo] master jobs down In-Reply-To: References: Message-ID: On Sun, Jun 13, 2021 at 5:29 PM Wesley Hayutin wrote: > Having some issues w/ infra... > https://bugs.launchpad.net/tripleo/+bug/1931821 > > Details are in the bug.. this will block upstream master jobs. > The incorrect dlrn aggregate hash on master has been corrected. This problem is now resolved. Details in the bug. Thanks whayutin> marios|ruck, anbanerj|rover rlandy jpena https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo.md5 :) [08:13:23] we're now at the right hash [08:13:30] thanks for all your time folks [08:13:46] ++ [08:13:50] https://trunk.rdoproject.org/api-centos8-master-uc/api/civotes_agg_detail.html?ref_hash=ee4aecfe06de7e8ca63aed041b3e42a8 [08:14:14] http://images.rdoproject.org/centos8/master/rdo_trunk/ee4aecfe06de7e8ca63aed041b3e42a8/ [08:14:35] https://hub.docker.com/r/tripleomaster/openstack-base/tags?page=1&ordering=last_updated [08:14:45] TAG [08:14:45] ee4aecfe06de7e8ca63aed041b3e42a8_manifest [08:14:45] docker pull tripleomaster/openstack-base:ee4aecfe06de7e8ca63aed041b3e42a8_manifest [08:14:45] Last pushed16 hours agobyrdotripleomirror [08:14:45] DIGEST [08:14:45] OS/ARCH [08:14:45] COMPRESSED SIZE [08:14:45] 4041f5fa79ea [08:14:45] linux/amd64 [08:14:45] 197.73 MB [08:14:45] 6a6f59b227f5 [08:14:45] linux/ppc64le -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Mon Jun 14 14:45:23 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 14 Jun 2021 07:45:23 -0700 Subject: [TC] Open Infra Live- Open Source Governance Message-ID: Hello TC Folks :) So I have been tasked with helping to collect a couple volunteers for our July 29th episode of Open Infra Live (at 14:00 UTC) on open source governance. I am also working on getting a couple members from the k8s steering committee to join us that day. If you are interested in participating, please let me know! I only need like two volunteers, but if we have more people than that dying to join in, I am sure we can work it out. Thanks! -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekokeeffe85 at yahoo.ie Mon Jun 14 15:23:37 2021 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Mon, 14 Jun 2021 15:23:37 +0000 (UTC) Subject: Hiding tabs for users with _member_ role References: <1621683371.10146158.1623684217887.ref@mail.yahoo.com> Message-ID: <1621683371.10146158.1623684217887@mail.yahoo.com> Hi all, I have enabled the octavia dashboard and also swift object store. I would like to keep these for the admin or project admin and not display them (tabs) to regular users. Is this possible? thanks in advance. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jun 14 15:29:48 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 14 Jun 2021 10:29:48 -0500 Subject: [TC] Open Infra Live- Open Source Governance In-Reply-To: References: Message-ID: <17a0b24a55b.f7ebe436657274.7104555013371533433@ghanshyammann.com> Thanks, Kendall for information, I can volunteer for that. gmann ---- On Mon, 14 Jun 2021 09:45:23 -0500 Kendall Nelson wrote ---- > Hello TC Folks :) > So I have been tasked with helping to collect a couple volunteers for our July 29th episode of Open Infra Live (at 14:00 UTC) on open source governance. > I am also working on getting a couple members from the k8s steering committee to join us that day. > If you are interested in participating, please let me know! I only need like two volunteers, but if we have more people than that dying to join in, I am sure we can work it out. > Thanks! > -Kendall Nelson (diablo_rojo) From haleyb.dev at gmail.com Mon Jun 14 15:42:06 2021 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 14 Jun 2021 11:42:06 -0400 Subject: [neutron] Bug deputy report for week of June 7th Message-ID: Hi, I was Neutron bug deputy last week. Below is a short summary about the reported bugs. 
-Brian

Critical bugs
-------------

* https://bugs.launchpad.net/neutron/+bug/1931220
  - _ObjectChangeHandler.handle_event failing on port after_create event
  - https://review.opendev.org/c/openstack/neutron/+/795260

High bugs
---------

* https://bugs.launchpad.net/neutron/+bug/1931244
  - ovn sriov broken from ussuri onwards
  - broken by https://review.opendev.org/c/openstack/neutron/+/765874
  - https://review.opendev.org/c/openstack/neutron/+/795781

* https://bugs.launchpad.net/neutron/+bug/1931583
  - Wrong status of trunk sub-port after setting binding_profile
  - Kamil is working on it

* https://bugs.launchpad.net/neutron/+bug/1931639
  - [OVN Octavia Provider] Load Balancer not reachable from some Subnets
  - Flavio is working on it

Medium bugs
-----------

* https://bugs.launchpad.net/neutron/+bug/1931098
  - With pyroute 0.6.2, eventlet fails in lower-constraints test
  - https://review.opendev.org/c/openstack/neutron/+/795082

Low bugs
--------

* https://bugs.launchpad.net/bugs/1931259
  - API "subnet-segmentid-writable" does not include "is_filter" in the "segment_id" field
  - https://review.opendev.org/c/openstack/neutron-lib/+/795340

Misc bugs
---------

* https://bugs.launchpad.net/neutron/+bug/1926045
  - Restrictions on FIP binding
  - Re-opened without supplying a reason, asked for more information

* https://bugs.launchpad.net/neutron/+bug/1931513/
  - neutron bootstrap container failed during deploy openstack victoria
  - Asked for more information as it looks like an error communicating with the SQL server

Wishlist bugs
-------------

* https://bugs.launchpad.net/neutron/+bug/1931100
  - [rfe] Add RBAC support for BGPVPNs

From sbauza at redhat.com Mon Jun 14 16:01:45 2021
From: sbauza at redhat.com (Sylvain Bauza)
Date: Mon, 14 Jun 2021 18:01:45 +0200
Subject: [nova] GPU VMs using MIG?
In-Reply-To: <803dae06-8317-27f4-42ac-365f72ff31f4@cern.ch>
References: <803dae06-8317-27f4-42ac-365f72ff31f4@cern.ch>
Message-ID:

On Mon, Jun 14, 2021 at 4:37 PM António Paulo wrote:

> Hi!
>
> Has anyone looked into instancing VMs with NVIDIA's Multi-Instance GPU
> (MIG) devices [1] without having to rely on vGPUs? Unfortunately, NVIDIA
> vGPUs lack tracing and profiling support that our users need.
>
> I could not find anything specific to MIG in the OpenStack docs but I
> was wondering if doing PCI passthrough [2] of MIG devices is an option
> that someone has seen or tested?
>
> Maybe some massaging to expose the MIG as a Linux device is required [3]?
>

NVIDIA's MIG feature is orthogonal to virtual GPUs and hardware dependent.
Because of the latter, this is not really something we can "support"
upstream, as our upstream CI simply can't verify it.
Some downstream vendors do have ongoing efforts to test this with their own
solutions, but again, that is not something we can discuss here.

Cheers,

> António
>
> [1] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/
> [2] https://docs.openstack.org/nova/pike/admin/pci-passthrough.html
> [3] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#device-nodes
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sz_cuitao at 163.com Mon Jun 14 16:07:28 2021
From: sz_cuitao at 163.com (tommy)
Date: Tue, 15 Jun 2021 00:07:28 +0800
Subject: Install v version failed on centos 8
In-Reply-To: <000001d76123$40768de0$c163a9a0$@163.com>
References: <000001d76123$40768de0$c163a9a0$@163.com>
Message-ID: <001b01d76137$5fc8f520$1f5adf60$@163.com>

I have resolved it.
From: openstack-discuss-bounces+sz_cuitao=163.com at lists.openstack.org On Behalf Of tommy Sent: Monday, June 14, 2021 9:43 PM To: 'OpenStack Discuss' Subject: Install v version failed on centos 8 Why?? [root at control ~]# packstack --answer-file=./openstack.ini Welcome to the Packstack setup utility The installation log file is available at: /var/tmp/packstack/20210614-203611-y8gez97l/openstack-setup.log Installing: Clean Up [ DONE ] Discovering ip protocol version [ DONE ] root at 192.168.10.32's password: root at 192.168.10.30's password: root at 192.168.10.30's password: root at 192.168.10.31's password: root at 192.168.10.33's password: Setting up ssh keys [ DONE ] Preparing servers [ DONE ] Pre installing Puppet and discovering hosts' details [ DONE ] Preparing pre-install entries [ DONE ] Setting up CACERT [ DONE ] Preparing AMQP entries [ DONE ] Preparing MariaDB entries [ DONE ] Fixing Keystone LDAP config parameters to be undef if empty[ DONE ] Preparing Keystone entries [ DONE ] Preparing Glance entries [ DONE ] Preparing Nova API entries [ DONE ] Creating ssh keys for Nova migration [ DONE ] Gathering ssh host keys for Nova migration [ DONE ] Preparing Nova Compute entries [ DONE ] Preparing Nova Scheduler entries [ DONE ] Preparing Nova VNC Proxy entries [ DONE ] Preparing OpenStack Network-related Nova entries [ DONE ] Preparing Nova Common entries [ DONE ] Preparing Neutron API entries [ DONE ] Preparing Neutron L3 entries [ DONE ] Preparing Neutron L2 Agent entries [ DONE ] Preparing Neutron DHCP Agent entries [ DONE ] Preparing Neutron Metering Agent entries [ DONE ] Checking if NetworkManager is enabled and running [ DONE ] Preparing OpenStack Client entries [ DONE ] Preparing Horizon entries [ DONE ] Preparing Swift builder entries [ DONE ] Preparing Swift proxy entries [ DONE ] Preparing Swift storage entries [ DONE ] Preparing Gnocchi entries [ DONE ] Preparing Redis entries [ DONE ] Preparing Ceilometer entries [ DONE ] Preparing Aodh entries [ DONE ] Preparing Puppet manifests [ DONE ] Copying Puppet modules and manifests [ DONE ] Applying 192.168.10.30_controller.pp 192.168.10.30_controller.pp: [ DONE ] Applying 192.168.10.32_network.pp Applying 192.168.10.30_network.pp Applying 192.168.10.31_network.pp Applying 192.168.10.33_network.pp 192.168.10.31_network.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.10.31_network.pp Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Missing title. The title expression resulted in undef (file: /var/tmp/packstack/a201c33193194a599c3654c79945e4a1/modules/ovn/manifests/co ntroller/port.pp, line: 11, column: 13) (file: /var/tmp/packstack/a201c33193194a599c3654c79945e4a1/modules/ovn/manifests/co ntroller.pp, line: 137) on node comps1 You will find full trace in log /var/tmp/packstack/20210614-203611-y8gez97l/manifests/192.168.10.31_network. pp.log Please check log file /var/tmp/packstack/20210614-203611-y8gez97l/openstack-setup.log for more information Additional information: * Parameter CONFIG_NEUTRON_L2_AGENT: You have chosen OVN Neutron backend. Note that this backend does not support the VPNaaS plugin. Geneve will be used as the encapsulation method for tenant networks * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components. * File /root/keystonerc_admin has been created on OpenStack client host 192.168.10.30. 
To use the command line tools you need to source the file. * To access the OpenStack Dashboard browse to http://192.168.10.30/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Jun 14 16:28:43 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 14 Jun 2021 11:28:43 -0500 Subject: [all] Naming Poll for my Y Release Name Vote Message-ID: <7d28ec7e-d737-7912-ded1-bed711a3adf8@gmail.com> All, As I reviewed the excellent list of names for the 'Y' release I had a hard time choosing.  This inspired me to create a poll to get the community's feedback on how you all would like me to vote as a member of the Technical Committee. To cast your vote please visit: https://www.surveymonkey.com/r/RHR3NCY Happy voting! Jay (jungleboyj) From dangerzonen at gmail.com Sun Jun 13 03:58:43 2021 From: dangerzonen at gmail.com (dangerzone ar) Date: Sun, 13 Jun 2021 11:58:43 +0800 Subject: [Magnum] Rocky Openstack Magnum Error Message-ID: Hi, I'm getting error while trying to verify my magnum installation Here is my keystone_admin details:- *unset OS_SERVICE_TOKEN* * export OS_USERNAME=admin* * export OS_PASSWORD='b6cf5552a9be44e7'* * export OS_REGION_NAME=RegionOne* * export OS_AUTH_URL=http://192.168.0.122:5000/v3 * * export PS1='[\u@\h \W(keystone_admin)]\$ '* *export OS_PROJECT_NAME=admin* *export OS_USER_DOMAIN_NAME=Default* *export OS_PROJECT_DOMAIN_NAME=Default* *export OS_IDENTITY_API_VERSION=3* Pls find attached my magnum.conf file and this is error output:- *[root at myosptac ~(keystone_admin)]# magnum --debug service-list* *DEBUG (extension:189) found extension EntryPoint.parse('v1password = swiftclient.authv1:PasswordLoader')* *DEBUG (extension:189) found extension EntryPoint.parse('token_endpoint = openstackclient.api.auth_plugin:TokenEndpoint')* *DEBUG (extension:189) found extension EntryPoint.parse('noauth = cinderclient.contrib.noauth:CinderNoAuthLoader')* *DEBUG (extension:189) found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token')* *DEBUG (extension:189) found extension EntryPoint.parse('none = keystoneauth1.loading._plugins.noauth:NoAuth')* *DEBUG (extension:189) found extension EntryPoint.parse('*v3oauth1* = keystoneauth1.extras.oauth1._loading:V3OAuth1')* *DEBUG (extension:189) found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken')* *DEBUG (extension:189) found extension EntryPoint.parse('*v3oidcauthcode* = keystoneauth1.loading._plugins.identity.v3:*OpenIDConnectAuthorie*')* *DEBUG (extension:189) found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password')* *DEBUG (extension:189) found extension EntryPoint.parse('v3samlpassword = keystoneauth1.extras._saml2._loading:Saml2Password')* *DEBUG (extension:189) found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password')* *DEBUG (extension:189) found extension EntryPoint.parse('*v3adfspassword* = keystoneauth1.extras._saml2._loading:ADFSPassword')* *DEBUG (extension:189) found extension EntryPoint.parse('v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAcce* *DEBUG (extension:189) found extension EntryPoint.parse('*v3oidcpassword* = keystoneauth1.loading._plugins.identity.v3:*OpenIDConnectPasswor *DEBUG (extension:189) found extension EntryPoint.parse('v3kerberos = 
keystoneauth1.extras.kerberos._loading:Kerberos')* *DEBUG (extension:189) found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token')* *DEBUG (extension:189) found extension EntryPoint.parse('v3oidcclientcredentials = keystoneauth1.loading._plugins.identity.v3:OpenIDConneredentials')* *DEBUG (extension:189) found extension EntryPoint.parse('v3tokenlessauth = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth')* *DEBUG (extension:189) found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token')* *DEBUG (extension:189) found extension EntryPoint.parse('v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP')* *DEBUG (extension:189) found extension EntryPoint.parse('* v3applicationcredential* = keystoneauth1.loading._plugins.identity.v3:* Applicationl*')* *DEBUG (extension:189) found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password')* *DEBUG (extension:189) found extension EntryPoint.parse('v3fedkerb = keystoneauth1.extras.kerberos._loading:*MappedKerberos*')* *DEBUG (extension:189) found extension EntryPoint.parse('gnocchi-basic = * gnocchiclient.auth*:GnocchiBasicLoader')* *DEBUG (extension:189) found extension EntryPoint.parse('gnocchi-noauth = * gnocchiclient.auth*:GnocchiNoAuthLoader')* *DEBUG (extension:189) found extension EntryPoint.parse('aodh-noauth = aodhclient.noauth:AodhNoAuthLoader')* *DEBUG (session:448) REQ: curl -g -i -X GET http://192.168.0.122:5000/v3 -H "Accept: application/json" -H "User-Agent: magnum *keystoneaut* python-requests/2.19.1 CPython/2.7.5"* *DEBUG (connectionpool:207) Starting new HTTP connection (1): 192.168.0.122* *DEBUG (connectionpool:395) http://192.168.0.122:5000 "GET /v3 HTTP/1.1" 200 196* *DEBUG (session:479) RESP: [200] Connection: Keep-Alive Content-Encoding: gzip Content-Length: 196 Content-Type: application/json Date: Sn 2021 18:28:25 GMT Keep-Alive: timeout=15, max=100 Server: Apache/2.4.6 (CentOS) Vary: X-Auth-Token,Accept-Encoding x-openstack-requeste8a01587-a1ea-4b9e-9960-b7f1c64cebd6* *DEBUG (session:511) RESP BODY: {"version": {"status": "stable", "updated": "2018-10-15T00:00:00Z", "media-types": [{"base": "applicationtype": "application/vnd.openstack.identity-v3+json"}], "id": "v3.11", "links": [{"href": "http://192.168.0.122:5000/v3/ ", "rel": "self"}* *DEBUG (session:853) GET call to http://192.168.0.122:5000/v3 used request id req-e8a01587-a1ea-4b9e-9960-b7f1c64cebd6* *DEBUG (base:176) Making authentication request to http://192.168.0.122:5000/v3/auth/tokens * *DEBUG (connectionpool:395) http://192.168.0.122:5000 "POST /v3/auth/tokens HTTP/1.1" 201 10983* *DEBUG (base:181) {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "ff6467cba06d4325b5dc26ffec6d67ea", "name": "h_owner"}, {"id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": "37dc1fb164684e77bba1df8ced2faa9f", "name": "reader"}, a78bcad80fe4ae897adf16e5b2b265a", "name": "member"}, {"id": "3c2996b04b5c46189c6d618f69b4aa8b", "name": "admin"}], "expires_at": "2021-08:25.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "d430c3698e8f4d7d874ea7e708ac17d7", "name": "admin"}, " [{"endpoints": [{"url": "http://192.168.0.122:8000/v1 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "14b4e2798b31fa00eaaa823"}, {"url": "http://192.168.0.122:8000/v1 ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOn "9e33b93ca2f24206983d6f6b2e58d2f9"}, {"url": 
"http://192.168.0.122:8000/v1 ", "interface": "public", "region": "RegionOne", "region_id":ne", "id": "fc5eadc7dbfd492299f9b9e9585c8201"}], "type": "cloudformation", "id": "001cd752638e46eb8fbc96c9181a9efc", "name": "heat-cfn"}ints": [{"url": "http://192.168.0.122:8080/v1/AUTH_d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "RegionOne", "regiRegionOne", "id": "0030669365ad497d8eefd13a99850c68"}, {"url": "http://192.168.0.122:8080/v1/AUTH_d430c3698e8f4d7d874ea7e708ac17d7 ", "in "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "53e2d455b96c4fc0951d98d5ce7790cf"}, {"url": "http://192.168.0.122:8TH_d430c3698e8f4d7d874ea7e708ac17d7", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "9f037da9221f43a1a3017d5"}], "type": "object-store", "id": "0610a7bebc1642f398e6de9f25b582e8", "name": "swift"}, {"endpoints": [{"url": "http://192.168.0.12 "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "553632abff5a4380911acb11a5e16b74"}, {"url": "http://1922:9696 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "6fb59f7c44d44ceaa0da2dfe0a4e4935"}, {"url": "http8.0.122:9696", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "c234ac3128294662b2a3c5b445b81d28"}], "typerk", "id": "068e962709db4d4d8904e740824503c8", "name": "neutron"}, {"endpoints": [{"url": "http://192.168.0.122:8004/v1/d430c3698e8f4d7d8ac17d7 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "2ff81a5c643546b49900256cc12aa739"}, {"url": "htt68.0.122:8004/v1/d430c3698e8f4d7d874ea7e708ac17d7", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "8ffb8f2bf17cfc24206ac99"}, {"url": "http://192.168.0.122:8004/v1/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "internal", "region": "Regioegion_id": "RegionOne", "id": "925912fbfcc14e8eac063aae807021c2"}], "type": "orchestration", "id": "08c8970db6ca4c4c9592070b78002ea3", "eat"}, {"endpoints": [{"url": "http://192.168.0.122:5000/v3 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "i49a40c7542fbb6f031d8eb3d66ac"}, {"url": "http://192.168.0.122:5000/v3 ", "interface": "internal", "region": "RegionOne", "region_id": "Re "id": "8b49647ee89246bba936e6717a6915f9"}, {"url": "http://192.168.0.122:35357/v3 ", "interface": "admin", "region": "RegionOne", "regioegionOne", "id": "e80547d366fa403a87af0a04926d5fe8"}], "type": "identity", "id": "0c65ebd11cb549008dd49fe960623ed9", "name": "keystone"}ints": [{"url": "http://192.168.0.122:8776/v3/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "internal", "region": "RegionOne", "regiongionOne", "id": "2ad2414e6c9a4bc78fa83fa53a1fbf4e"}, {"url": "http://192.168.0.122:8776/v3/d430c3698e8f4d7d874ea7e708ac17d7 ", "interfacec", "region": "RegionOne", "region_id": "RegionOne", "id": "2b70431e0c0049a88c7d0bf76d04a033"}, {"url": "http://192.168.0.122:8776/v3/d4f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "64e454f408c24549aeb1e720295992ce"}: "volumev3", "id": "1b07bd83eeb4460d9b8330794ba33850", "name": "cinderv3"}, {"endpoints": [{"url": "http://192.168.0.122:9890/ ", "interdmin", "region": "RegionOne", "region_id": "RegionOne", "id": "0473f73ee7e44b918b08736636570df8"}, {"url": "http://192.168.0.122:9890/ ",ce": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "89af54ac6f1e4f109324b8c4e9b8bca5"}, {"url": "http://192.168.0.1 , "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "d8deb8aa66264ff292b206424ebe735a"}], "type": "nfv-orche, "id": "22d0c99d2b9147c992c430bc01955b59", "name": "tacker"}, {"endpoints": [{"url": "http://192.168.0.122:8776/v2/d430c3698e8f4d7d874e7d7 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "11cee7c429b24730abff08cbcc99d4b7"}, {"url": "http:/0.122:8776/v2/d430c3698e8f4d7d874ea7e708ac17d7", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "caf5b4b874a44321adb7db9"}, {"url": "http://192.168.0.122:8776/v2/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "RegionOnen_id": "RegionOne", "id": "eab5457ca880490b8cbe357ef870fcaa"}], "type": "volumev2", "id": "488472e3277d4e568a169246b899d223", "name": "c, {"endpoints": [{"url": "http://192.168.0.122:8776/v1/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "RegionOne", "": "RegionOne", "id": "24e810d702ff47c6880b0c5231aa9f1a"}, {"url": "http://192.168.0.122:8776/v1/d430c3698e8f4d7d874ea7e708ac17d7 ", "int"internal", "region": "RegionOne", "region_id": "RegionOne", "id": "3d887ec2f90a48d9bb2cf6c71264b6bc"}, {"url": "http://192.168.0.122:870c3698e8f4d7d874ea7e708ac17d7", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "532bf115ce0741b0afe052b3b], "type": "volume", "id": "4941273cd7734cdcb01eb6423dd3c2d9", "name": "cinder"}, {"endpoints": [{"url": "http://192.168.0.122:9311 ", "i: "public", "region": "RegionOne", "region_id": "RegionOne", "id": "9ccc3fbd5cb240ae972aa26b3e7a0082"}, {"url": "http://192.168.0.122:93erface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "e3bc06b47fad453c8880f80950d3978c"}, {"url": "http://192.168.0 ., "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "e9f9e7f2a5814d2cbe52b126288bd399"}], "type": "key-mand": "4d342a315084482b86f616299eebfd3a", "name": "barbican"}, {"endpoints": [{"url": "http://192.168.0.122:8774/v2.1/d430c3698e8f4d7d874e7d7 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "01b84df8a50f47df96c66dca731c8999"}, {"url": "http:/0.122:8774/v2.1/d430c3698e8f4d7d874ea7e708ac17d7", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "2c339fca57d65acce177bcd"}, {"url": "http://192.168.0.122:8774/v2.1/d430c3698e8f4d7d874ea7e708ac17d7 ", "interface": "admin", "region": "Regioegion_id": "RegionOne", "id": "898b9568e1bc46309870bfb1a432995e"}], "type": "compute", "id": "83e61d9ee67e4c4b9da5f2aeb342ac33", "name": {"endpoints": [{"url": "http://192.168.0.122:8042 ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "3da4ffea975d229f4208cab"}, {"url": "http://192.168.0.122:8042 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "iee5bb6a049ab944e828c66af861c"}, {"url": "http://192.168.0.122:8042 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOn "8745e0169c8c498fac9033e301cfeb3e"}], "type": "alarming", "id": "989e7012ea604c7794455a197fdedf6b", "name": "aodh"}, {"endpoints": [{"up://192.168.0.122:8778/placement ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "9b81b11df55d4651bffe0af"}, {"url": "http://192.168.0.122:8778/placement ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "c1940aebbed105332160986"}, {"url": "http://192.168.0.122:8778/placement ", "interface": "admin", "region": "RegionOne", "region_id": "Region": "f6fc036a48bf47f1ba034d5c46a3257d"}], "type": 
"placement", "id": "b975d296f13141539055f90971b870a7", "name": "placement"}, {"endpointl": "http://192.168.0.122:9511/v1 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "3d83664ffd1e4fafb605adb"}, {"url": "http://192.168.0.122:9511/v1 ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "7f06217f1297822167470ca"}, {"url": "http://192.168.0.122:9511/v1 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "71d466a953e3727aea930f2"}], "type": "container-infra", "id": "bb891040e2f24a4f86aa596f780a668e", "name": "magnum"}, {"endpoints": [{"url//192.168.0.122:9292 ", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "0355aeb6a9bf41e19c33adb5d207c40b"} "http://192.168.0.122:9292 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "3e894b75a3494f2792e8c479493c"url": "http://192.168.0.122:9292 ", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "5baff29672d44f308eb33f4"}], "type": "image", "id": "da951f6f026b46918337129a67331f0d", "name": "glance"}, {"endpoints": [{"url": "http://192.168.0.122:8777face": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "753f5b41610c4de3b7ea283dcecab32a"}, {"url": "http://192.168.0.1 "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "b5c21dfe1d1e496093c1bfb16d9d5161"}, {"url": "http://1922:8777 ", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "cfc236dbffd846cb94f3add4cfbb5448"}], "type": "me"id": "e378bc2a0adc451e905a35ffdc59cf35", "name": "ceilometer"}, {"endpoints": [{"url": "http://192.168.0.122:8041 ", "interface": "adminn": "RegionOne", "region_id": "RegionOne", "id": "3ea207d844b44d77806a359bf49b3b46"}, {"url": "http://192.168.0.122:8041 ", "interface": ", "region": "RegionOne", "region_id": "RegionOne", "id": "78d7ac10e85745148af1f01f2942ed83"}, {"url": "http://192.168.0.122:8041 ", "int"public", "region": "RegionOne", "region_id": "RegionOne", "id": "f15500eca821457c994ce8d8c4c859f6"}], "type": "metric", "id": "ebd1ffd8161a292ff66140d", "name": "gnocchi"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "ad": "9c91d3f3e3c943f09b57fe70889894b8"}, "audit_ids": ["vOQFHBd6QJCOBEAz_1Y7jA"], "issued_at": "2021-06-12T18:28:25.000000Z"}}* *DEBUG (session:448) REQ: curl -g -i -X GET http://192.168.0.122:9511/v1/mservices -H "Accept: application/json" -H "Content-Type: applicn" -H "OpenStack-API-Version: container-infra latest" -H "User-Agent: None" -H "X-Auth-Token: {SHA1}1f364e2302d9282710c8314355929963d3b4* *DEBUG (connectionpool:207) Starting new HTTP connection (1): 192.168.0.122* *DEBUG (connectionpool:395) http://192.168.0.122:9511 "GET /v1/mservices HTTP/1.1" 503 218* *DEBUG (session:479) RESP: [503] Content-Length: 218 Content-Type: application/json Date: Sat, 12 Jun 2021 18:28:26 GMT Server: Werkzeug/thon/2.7.5 x-openstack-request-id: req-d9894aff-b197-4a36-b9b9-f752d38f29d3* *DEBUG (session:511) RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.

\nThe Keysice is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}* *DEBUG (session:844) GET call to container-infra for http://192.168.0.122:9511/v1/mservices used request id req-d9894aff-b197-4a36-b9b9-fd3* *DEBUG (shell:643) 'errors'* *Traceback (most recent call last):* * File "/usr/lib/python2.7/site-packages/magnumclient/shell.py", line 640, in main* * OpenStackMagnumShell().main(map(encodeutils.safe_decode, sys.argv[1:]))* * File "/usr/lib/python2.7/site-packages/magnumclient/shell.py", line 552, in main* * args.func(self.cs, args)* * File "/usr/lib/python2.7/site-packages/magnumclient/v1/mservices_shell.py", line 22, in do_service_list* * mservices = cs.mservices.list()* * File "/usr/lib/python2.7/site-packages/magnumclient/v1/mservices.py", line 68, in list* * return self._list(self._path(path), "mservices")* * File "/usr/lib/python2.7/site-packages/magnumclient/common/base.py", line 121, in _list* * resp, body = self.api.json_request('GET', url)* * File "/usr/lib/python2.7/site-packages/magnumclient/common/httpclient.py", line 368, in json_request* * resp = self._http_request(url, method, **kwargs)* * File "/usr/lib/python2.7/site-packages/magnumclient/common/httpclient.py", line 349, in _http_request* * error_json = _extract_error_json(resp.content)* * File "/usr/lib/python2.7/site-packages/magnumclient/common/httpclient.py", line 55, in _extract_error_json* * error_body = body_json['errors'][0]* *KeyError: 'errors'* *ERROR: 'errors'* I'm running rocky openstack, install magnum and not use barbican. Appreciate if someone could advise magnum.conf or anything to resolve the above error. Please help. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: magnum.conf Type: application/octet-stream Size: 74098 bytes Desc: not available URL: From kennelson11 at gmail.com Mon Jun 14 16:37:51 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 14 Jun 2021 09:37:51 -0700 Subject: [TC][All] Election Officiating! Message-ID: Come join the election officials! I setup a meeting for June 29th at 16:00 UTC for those that would like to join and learn about running an election. We can adjust it. I just wanted to get something on the calendar before its too late and we are behind again. The election officials team really could use your help! I will walk through the scripts we use to generate dates and the electorate and answer questions you might have about the README.rst that has the entire process in it. Hope to see you there! - Kendall Nelson (diablo_rojo) [1] https://opendev.org/openstack/election/src/branch/master/README.rst -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1532 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: meeting-95186227884.ics Type: text/calendar Size: 1581 bytes Desc: not available URL: From gouthampravi at gmail.com Mon Jun 14 17:42:53 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 14 Jun 2021 10:42:53 -0700 Subject: [all] Naming Poll for my Y Release Name Vote In-Reply-To: <7d28ec7e-d737-7912-ded1-bed711a3adf8@gmail.com> References: <7d28ec7e-d737-7912-ded1-bed711a3adf8@gmail.com> Message-ID: On Mon, Jun 14, 2021 at 9:33 AM Jay Bryant wrote: > All, > > As I reviewed the excellent list of names for the 'Y' release I had a > hard time choosing. This inspired me to create a poll to get the > community's feedback on how you all would like me to vote as a member of > the Technical Committee. > > To cast your vote please visit: https://www.surveymonkey.com/r/RHR3NCY I'm glad you asked :) Thanks Jay! > > > Happy voting! > > Jay > > (jungleboyj) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Jun 14 17:56:15 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 14 Jun 2021 12:56:15 -0500 Subject: [TC] Open Infra Live- Open Source Governance In-Reply-To: References: Message-ID: <187a78ef-0e29-d7dd-5506-73515fb28dbd@gmail.com> On 6/14/2021 9:45 AM, Kendall Nelson wrote: > Hello TC Folks :) > > So I have been tasked with helping to collect a couple volunteers for > our July 29th episode of Open Infra Live (at 14:00 UTC) on open source > governance. > > I am also working on getting a couple members from the k8s steering > committee to join us that day. > > If you are interested in participating, please let me know! I only > need like two volunteers, but if we have more people than that dying > to join in, I am sure we can work it out. > I can help if you need another person.  Let me know. Jay > Thanks! > > -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From akanevsk at redhat.com Mon Jun 14 18:49:26 2021 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Mon, 14 Jun 2021 13:49:26 -0500 Subject: [Interop] co-chair requirement Message-ID: team, after discussing with a few board members I think we should drop the requirement that one of the co-chairs of the Interop WG is from the OIF board. Are you comfortable with this? Thanks, -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Mon Jun 14 19:22:02 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 14 Jun 2021 12:22:02 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: Ping? On Mon, Jun 7, 2021 at 2:27 PM Pete Zhang wrote: > Julie, > > The original email is too long and requires moderator approval. So I have > a new email thread instead. > > The openstack-vswitch is required (>=11.0.0 < 12.0.0) by openstack-neutron > (v15.0.0, from openstack-release-train, the release we chose). > I downloaded openstack-vswitch-11.0.0 from > https://forge.puppet.com/modules/openstack/vswitch/11.0.0. > > Where I can download the missing *librte and its dependencies*? I don't > think we have a yum-repo for Centos Extra so I might need to have those > dependencies downloaded as well. > > Thanks a lot! > > Pete > > -- > > > -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Mon Jun 14 20:27:47 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 14 Jun 2021 15:27:47 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 17th at 1500 UTC Message-ID: <17a0c35752f.fad2fd90669617.8187659333285607760@ghanshyammann.com> Hello Everyone, NOTE: TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE) Technical Committee's next weekly meeting is scheduled for June 17th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, June 16th , at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From juliaashleykreger at gmail.com Mon Jun 14 20:47:37 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 14 Jun 2021 13:47:37 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: Sorry, I thought the thread had addressed this. Did http://lists.openstack.org/pipermail/openstack-discuss/2021-June/022965.html not help? That being said, To reiterate what was said in the various replies. Train is in Extended Maintenance. The community cannot cut new releases of the old packages to address and fix issues. Your best bet is a newer, current, release of OpenStack and related packaging. The only case I personally would advise installing Train on a *new* deployment is if you explicitly have a vendor and their downstream packages/testing/processes supporting you. On Mon, Jun 14, 2021 at 12:22 PM Pete Zhang wrote: > Ping? > > On Mon, Jun 7, 2021 at 2:27 PM Pete Zhang > wrote: > >> Julie, >> >> The original email is too long and requires moderator approval. So >> I have a new email thread instead. >> >> The openstack-vswitch is required (>=11.0.0 < 12.0.0) by >> openstack-neutron (v15.0.0, from openstack-release-train, the release we >> chose). >> I downloaded openstack-vswitch-11.0.0 from >> https://forge.puppet.com/modules/openstack/vswitch/11.0.0. >> >> Where I can download the missing *librte and its dependencies*? I don't >> think we have a yum-repo for Centos Extra so I might need to have those >> dependencies downloaded as well. >> >> Thanks a lot! >> >> Pete >> >> -- >> >> >> > > > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Mon Jun 14 21:48:12 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 14 Jun 2021 14:48:12 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: Julia, Thanks for the update. In our environment, we don't have access to centos-extra repos. Does anyone know the site where we can download those missing/needed rpms? thx. Pete On Mon, Jun 14, 2021 at 1:47 PM Julia Kreger wrote: > Sorry, I thought the thread had addressed this. Did > http://lists.openstack.org/pipermail/openstack-discuss/2021-June/022965.html > not help? > > That being said, To reiterate what was said in the various replies. Train > is in Extended Maintenance. The community cannot cut new releases of the > old packages to address and fix issues. Your best bet is a newer, current, > release of OpenStack and related packaging. The only case I personally > would advise installing Train on a *new* deployment is if you explicitly > have a vendor and their downstream packages/testing/processes supporting > you. 
> > On Mon, Jun 14, 2021 at 12:22 PM Pete Zhang > wrote: > >> Ping? >> >> On Mon, Jun 7, 2021 at 2:27 PM Pete Zhang >> wrote: >> >>> Julie, >>> >>> The original email is too long and requires moderator approval. So >>> I have a new email thread instead. >>> >>> The openstack-vswitch is required (>=11.0.0 < 12.0.0) by >>> openstack-neutron (v15.0.0, from openstack-release-train, the release we >>> chose). >>> I downloaded openstack-vswitch-11.0.0 from >>> https://forge.puppet.com/modules/openstack/vswitch/11.0.0. >>> >>> Where I can download the missing *librte and its dependencies*? I don't >>> think we have a yum-repo for Centos Extra so I might need to have those >>> dependencies downloaded as well. >>> >>> Thanks a lot! >>> >>> Pete >>> >>> -- >>> >>> >>> >> >> >> -- >> >> >> > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue Jun 15 00:17:57 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 15 Jun 2021 09:17:57 +0900 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: Thank you all for your additional thoughts. Because I've not received very strong objections about existing two patches[1][2], I updated these patches to resolve conflicts between these patches. [1] https://review.opendev.org/c/openstack/neutron/+/763563 [2] https://review.opendev.org/c/openstack/neutron/+/788893 I made the patch to add default hypervisor name as base one because it doesn't change behavior and would be "safe" for backports. So far we have received positive feedback about fixing compatibility with libvirt (in master) but I'll create a backport of that change as well to ask some feedback about its profit and risk for backport. I think strategy is now clear with this feedback but please feel free to put your thoughts in this thread or the above patches. > if we want to "fix" this in neutron then neutron should either try > looking up the RP using the host name and then fall back to using the > fqdn or we should look at using the hypervior api as we discussed a few > years ago when this last came up > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html I feel like this discussion would be a good chance to revisit the requirement of basic client implementation for placement. (or abstraction layer like castellan) Currently each components like nova, neutron, and cyborg(?) have their own placement client implementation (and logic to query resource providers) but IMO it is more efficient if we can maintain the common client implementation instead. > for many deployment that do not set the fqdn as the canonical host name > in /etc/host the current default behavior works out of the box > whatever solution we take we need to ensure that no existing deployment > is affected by the change which means we cannot default to only using > the fqdn or similar as that would be an upgrade breakage so we have > to maintain the current behavior by default and enhance neutron to > either fall back to the fqdn if the hostname based lookup fails or use > the new config intoduc ed by takashi's patch where the fqdn is used as > the server canonical hostname. Thank you for pointing this out. To be clear, the behavior change I proposed[2] doesn't break any deployment with libvirt but would break deployments with non-libvirt drivers. This point should be considered when reviewing that change. 
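As a quick check on a given compute node, comparing the two names side by
side is usually enough. A rough sketch (it assumes the libvirt python
bindings are available, which is the normal case wherever nova-compute runs
with the libvirt driver):

  python3 -c 'import socket; print("neutron side:", socket.gethostname())'
  python3 -c 'import libvirt; print("libvirt side:", libvirt.open("qemu:///system").getHostname())'

If one prints a short name and the other an FQDN, the resource provider
lookup described in the bug will not find a match.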
So far most of the feedback I received is that it is preferred to fix compatibility with libvirt as it's the "default" option but please share your thoughts on the patch. On Mon, Jun 14, 2021 at 7:30 PM Sean Mooney wrote: > On Sat, 2021-06-12 at 00:46 +0900, Takashi Kajinami wrote: > > On Fri, Jun 11, 2021 at 8:48 PM Oliver Walsh wrote: > > > Hi Takashi, > > > > > > On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami > > > wrote: > > > > Hi All, > > > > > > > > > > > > I've been working on bug 1926693[1], and am lost about the > > > > reasonable > > > > solutions we expect. Ideally I'd need to bring this topic in the > > > > team meeting > > > > but because of the timezone gap and complicated background, I'd > > > > like to > > > > gather some feedback in ml first. > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > > > > > > > TL;DR > > > > Which one(or ones) would be reasonable solutions for this issue ? > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893 > > > > (3) Implement something different > > > > > > > > The issue I reported in the bug is that there is an inconsistency > > > > between > > > > nova and neutron about the way to determine a hypervisor name. > > > > Currently neutron uses socket.gethostname() (which always returns > > > > shortname) > > > > > > > > > > > > > socket.gethostname() can return fqdn or shortname - > > > https://docs.python.org/3/library/socket.html#socket.gethostname. > > > > > > > You are correct and my statement was not accurate. > > So socket.gethostname() returns what is returned by gethostname system > > call, > > and gethostname/sethostname accept both FQDN and short name, > > socket.gethostname() > > can return one of FQDN or short name. > > > > However the root problem is that this logic is not completely same as > > the ones used > > in each virt driver. Of cause we can require people the "correct" > > format usage for > > canonical name as well as "hostname", but fixthing this problem in > > neutron would > > be much more helpful considering the effect caused by enforcing users > > to "fix" > > hostname/canonical name formatting at this point. > this is not really something that can be fixed in neutron > we can either create a common funciton in oslo.utils or placement-lib > that we can use in nova, neutron and all other project or we can use > the config option. > > if we want to "fix" this in neutron then neutron should either try > looking up the RP using the host name and then fall back to using the > fqdn or we shoudl look at using the hypervior api as we discussed a few > years ago when this last came up > > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html > > i dont think neutron shoudl know anything about hyperviors so i would > just proceed with the new config option that takashi has proposed but i > would not implemente Rodolfo's solution of adding a hypervisor_type. > > just as nova has no awareness of the neutron backend and trys to treat > all fo them the same neutron should remain hypervior independent and we > should look to provide common code that can be reused to identify the > RP in a seperate lib as a longer term solution. 
> > for many deployment that do not set the fqdn as the canonical host name > in /etc/host the current default behavior works out of the box > whatever solution we take we need to ensure that no existing deployment > is affected by the change which means we cannot default to only using > the fqdn or similar as that would be an upgrade breakage so we have > to maintain the current behavior by default and enhance neutron to > either fall back to the fqdn if the hostname based lookup fails or use > the new config intoduc ed by takashi's patch where the fqdn is used as > the server canonical hostname. > > > > > I've seen cases where it switched from short to fqdn but I'm not sure > > > of the root cause - DHCP lease setting a hostname/domainname perhaps. > > > > > > Thanks, > > > Ollie > > > > > > > to determine a hypervisor name to search the corresponding resource > > > > provider. > > > > On the other hand, nova uses libvirt's getHostname function (if > > > > libvirt driver is used) > > > > which returns a canonical name. Canonical name can be shortname or > > > > FQDN (*1) > > > > and if FQDN is used then neutron and nova never agree. > > > > > > > > (*1) > > > > IMO this is likely to happen in real deployments. For example, > > > > TripelO uses > > > > FQDN for canonical names. > > > > > > > > > > > > Neutron already provides the resource_provider_defauly_hypervisors > > > > option > > > > to override a hypervisor name used. However because this option > > > > accepts > > > > a map between interface and hypervisor, setting this parameter > > > > requires > > > > very redundant description especially when a compute node has > > > > multiple > > > > interfaces/bridges. The following example shows how redundant the > > > > current > > > > requirement is. > > > > ~~~ > > > > [OVS] > > > > resource_provider_bandwidths=br-data1:1024:1024,br- > > > > data2:1024:1024,\ > > > > br-data3:1024,1024,br-data4,1024:1024 > > > > resource_provider_hypervisors=br-data1:compute0.mydomain,br-data2:\ > > > > compute0.mydomain,br-data3:compute0.mydomain,br- > > > > data4:compute0.mydomain > > > > ~~~ > > > > > > > > I've submitted a change to propose a new single parameter to > > > > override > > > > the base hypervisor name but this is currently -2ed, mainly because > > > > I lacked analysis about the root cause of mismatch when I proposed > > > > this. > > > > (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > > > > > > > > > On the other hand, I submitted a different change to neutron which > > > > implements > > > > the logic to get a hypervisor name which is fully compatible with > > > > libvirt. > > > > While this would save users from even overriding hypervisor names, > > > > I'm aware > > > > that this might break the other virt driver which depends on a > > > > different logic > > > > to generate a hypervisor name. IMO the patch is still useful > > > > considering > > > > the libvirt driver would be the most popular option now, but I'm > > > > not fully > > > > aware of the impact on the other drivers, especially because I > > > > don't know > > > > which virt driver would support the minimum QoS feature now. > > > > (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > > > > > > > > > In the review of (2), Sean mentioned implementing a logic to > > > > determine > > > > an appropriate resource provider(3) even if there is a mismatch > > > > about > > > > host name format, but I'm not sure how I would implement that, tbh. 
> > > > > > > > > > > > My current thought is to merge (1) as a quick solution first, and > > > > discuss whether > > > > we should merge (2), but I'd like to ask for some feedback about > > > > this plan > > > > (like we should NOT merge (2)). > > > > > > > > I'd appreciate your thoughts about this $topic. > > > > > > > > Thank you, > > > > Takashi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Jun 15 02:35:21 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 14 Jun 2021 21:35:21 -0500 Subject: [openstack-helm] No meeting tomorrow Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting tomorrow, the meeting is cancelled. Our next meeting will be June 22nd. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Tue Jun 15 05:42:11 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Tue, 15 Jun 2021 07:42:11 +0200 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: On Mon, Jun 14, 2021 at 02:48:12PM -0700, Pete Zhang wrote: > Julia, > > Thanks for the update. > > In our environment, we don't have access to centos-extra repos. Does > anyone know the site where we can download those missing/needed rpms? thx. > > Pete > > On Mon, Jun 14, 2021 at 1:47 PM Julia Kreger > wrote: CentOS without CentOS Extras (which is enabled by default) sounds broken to me. Place that file in /etc/yum.repos.d, if you happen to run CentOS. [stack at devstack yum.repos.d]$ cat CentOS-Linux-Extras.repo # CentOS-Linux-Extras.repo # # The mirrorlist system uses the connecting IP address of the client and the # update status of each mirror to pick current mirrors that are geographically # close to the client. You should use this for CentOS updates unless you are # manually picking other mirrors. # # If the mirrorlist does not work for you, you can try the commented out # baseurl line instead. [extras] name=CentOS Linux $releasever - Extras mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra #baseurl=http://mirror.centos.org/$contentdir/$releasever/extras/$basearch/os/ gpgcheck=1 enabled=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial Matthias -- Matthias Runge From tonyppe at gmail.com Tue Jun 15 05:50:46 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Tue, 15 Jun 2021 13:50:46 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Mark, I had never used the "pip install ." method. Maybe a miscomprehension on my side, from the documentation [1] there are three ways to install kayobe. I had opted for the first way which is "pip install kayobe" since January 2020. The understanding was as conveyed in the doc "Installing from PyPI ensures the use of well used and tested software". I have since followed your steps in your mail which is the installation from source. I had new problems: *During ansible bootstrap:* During ansible host bootstrap it errors out and says the kolla_ansible is not found and needs to be installed in the same virtual environment. In all previous times, I had understood that kolla ansible is installed by kayobe at this point. 
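(For reference, `kayobe control host bootstrap` is what is expected to create the Kolla Ansible virtualenv, so a manual install should not normally be needed; if one is done anyway, pinning to the matching release series avoids pulling an incompatible version. The pin below is only an illustration and should match the branch being deployed.)

~~~
# illustrative pin for the Wallaby series; adjust to the branch in use
pip install 'kolla-ansible>=12.0.0,<13.0.0'
~~~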
I eventually done "pip install kolla-ansible" and it seemed to take care of that and allowed me to move on to "host configure" *During host configure:* I was able to get past the previous python issue but then it failed on the network due to a "duplicate bond name", though this config was deployed successfully in Train. I dont think I really need a bond at this point so I deleted the bond and the host configure is now successful. (fyi this is an all-in-one host.) *During kayobe service deploy:* This then fails with "no module named docker" on the host. To troubleshoot this I logged into the host and activated the kayobe virtual env (/opt/kayobe/venvs/kayobe/bin/activate) and then "pip install docker". It was already installed. Eventually, I issued "pip install --ignore-installed docker" within these three (environment) locations which resolved this and allowed the kayobe command to complete successfully and progress further: - /opt/kayobe/venvs/kayobe/ - /opt/kayobe/venvs/kolla-ansible/ - native on the host after deactivating the venv. Now the blocker is the following failure; TASK [nova-cell : Waiting for nova-compute services to register themselves] ********************************************************************************************** FAILED - RETRYING: Waiting for nova-compute services to register themselves (20 retries left). FAILED - RETRYING: Waiting for nova-compute services to register themselves (19 retries left). I haven't seen this one before but previously I had seen something similar with mariadb because the API dns was not available. What I have been using here is a /etc/hosts entry for this. I checked that this entry is available on the host and in the nova containers. I decided to reboot the host anyway (previously resolved similar mariadb issue) to restart the containers just in case the dns was not available in one of them and I missed it. Unfortunately I now have two additional issues which are hard blockers: 1. The network is no longer working on the host after reboot, so I am unable to ssh 2. The user password has been changed by kayobe, so I am unable to login using the console Due to the above, I am unable to login to the host to investigate or remediate. Previously when this happened with centos I could use the root user to log in. This time around as it's ubuntu I do not have a root user. The user I am using for both "kolla_ansible_user" and "kayobe_ansible_user" is the same - is this causing a problem with Victoria and Wallaby? I had this user password change issue beginning with Victoria. So at this point I need to re-install the host and go back to the host configure before service deploy. *Summary* Any guidance is well appreciated as I'm at a loss at this point. Last week I had a working Openstack Train deployment in a single host. "Kayobe" stopped working (maybe because I had previously always used pip install kayobe). I would like to deploy Wallaby, should I be able to successfully do this today or should I be using Victoria at the moment (or even, Train)? [1] OpenStack Docs: Installation Regards, Tony Pearce On Mon, 14 Jun 2021 at 18:36, Mark Goddard wrote: > On Mon, 14 Jun 2021 at 09:40, Tony Pearce wrote: > > > > Hi Mark, > > > > I followed this guide to do a "git clone" specifying the branch "-b" to > "stable/wallaby" [1]. What additional steps do I need to do to get the > latest commits? > > That should be sufficient. 
When you install it via pip, note that 'pip > install kayobe' will still pull from PyPI, even if there is a local > kayobe directory. Use ./kayobe, or 'pip install .' if in the same > directory. > > Mark > > > > [1] OpenStack Docs: Overcloud > > > > Kind regards, > > > > Tony Pearce > > > > > > On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: > >> > >> On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: > >> > > >> > Hi Pierre, thanks for replying to my message. > >> > > >> > To install kayobe I followed the documentation which summarise: > installing a few system packages and setting up the kayobe virtual > environment and then pulling the correct kayobe git version for the > openstack to be installed. After configuring the yaml files I have run > these commands: > >> > > >> > - kayobe control host bootstrap > >> > - kayobe overcloud host configure -> this one is failing with > /usr/libexec/platform-python: not found > >> > > >> > After reading your message on the weekend I concluded that maybe I > had done something wrong. Today, I re-pulled the kayobe wallaby git and > manually transferred the configuration over to the new directory structure > on the ansible host and set up again as per the guide but the same issue is > seen. > >> > > >> > What I ended up doing to try and resolve was finding where this > "platform-python" is coming from. It is coming from the virtual environment > which is being set up during the kayobe ansible host bootstrap. Initially, > I found the base.yml and it looks like it tries to match what the host is. > I noticed that there is no ubuntu 20 listed there so I created it however > it did not resolve the issue. > >> > > >> > So then I tried systematically replacing this reference in the other > files found in the same location "venvs\kayobe\share\kayobe\ansible". The > file I changed which allowed it to progress is "kayobe-target-venv.yml" > >> > > >> > But unfortunately it fails a bit further on, failing to find an > selinux package [1] > >> > > >> > Seeing as the error is mentioning selinux (a RedHat security feature > not installed on ubuntu) could the root cause issue be that kayobe is not > matching the host as ubuntu? I did already set in kayobe that I am using > ubuntu OS distribution within globals.yml [2]. > >> > > >> > Are there any extra steps that I need to complete that maybe are not > listed in the documentation / guide? > >> > > >> > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest > network package - Pastebin.com > >> > [2] ---# Kayobe global > configuration.######################################### - Pastebin.com > >> > >> Hi Tony, > >> > >> That's definitely not a recent Wallaby checkout you're using. Ubuntu > >> no longer uses that MichaelRigart.interfaces role. Check that you have > >> recent commits. Here is the most recent on stable/wallaby: > >> 13169077aaec0f7a28ae1f15b419dafc2456faf7. > >> > >> Mark > >> > >> > > >> > Regards, > >> > > >> > Tony Pearce > >> > > >> > > >> > > >> > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau > wrote: > >> >> > >> >> Hi Tony, > >> >> > >> >> Kayobe doesn't use platform-python anymore, on both stable/wallaby > and > >> >> stable/victoria: > >> >> > https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 > >> >> > >> >> Can you double-check what version you are using, and share how you > >> >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. 
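(A quick way to answer the "which version are you actually running" question above — generic commands, nothing specific to this deployment:)

~~~
# version as seen by pip, for a PyPI install
pip show kayobe | grep -i version
# or, for a source checkout, the exact commit
git -C kayobe log --oneline -1
~~~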
> >> >> > >> >> Best wishes, > >> >> Pierre > >> >> > >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: > >> >> > > >> >> > I'm trying to run "kayobe overcloud host configure" against an > ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is > not found during the host configure part. > >> >> > > >> >> > PLAY [Verify that the Kayobe Ansible user account is accessible] > >> >> > TASK [Verify that a command can be executed] > >> >> > > >> >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, > "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", > "module_stdout": "", "msg": "The module failed to execute correctly, you > probably need to set the interpreter.\nSee stdout/stderr for the exact > error", "rc": 127} > >> >> > > >> >> > Python3 is installed on the host. When searching where this > platform-python is coming from it returns the kolla-ansible virtual envs: > >> >> > > >> >> > $ grep -rni -e "platform-python" > >> >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: > '8': /usr/libexec/platform-python > >> >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: > - /usr/libexec/platform-python > >> >> > > >> >> > I had a look through the deployment guide for Kayobe Wallaby and > didnt see a note about changing this. > >> >> > > >> >> > Do I need to do further steps to support the ubuntu overcloud > host? I have already set (as per the doc): > >> >> > > >> >> > os_distribution: ubuntu > >> >> > os_release: focal > >> >> > > >> >> > Regards, > >> >> > > >> >> > Tony Pearce > >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Tue Jun 15 06:58:46 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Mon, 14 Jun 2021 23:58:46 -0700 Subject: Missing dependency on librte_xxxx when installing openstack-nova-scheduler In-Reply-To: References: Message-ID: Matthias, I got the missing dkdp*.rpm from http://ftp.riken.jp/Linux/cern/centos/7/extras/x86_64/Packages/ and resolved the dependencies, thx. Pete On Mon, Jun 14, 2021 at 10:51 PM Matthias Runge wrote: > On Mon, Jun 14, 2021 at 02:48:12PM -0700, Pete Zhang wrote: > > Julia, > > > > Thanks for the update. > > > > In our environment, we don't have access to centos-extra repos. Does > > anyone know the site where we can download those missing/needed rpms? > thx. > > > > Pete > > > > On Mon, Jun 14, 2021 at 1:47 PM Julia Kreger < > juliaashleykreger at gmail.com> > > wrote: > > CentOS without CentOS Extras (which is enabled by default) sounds broken > to me. > > Place that file in /etc/yum.repos.d, if you happen to run CentOS. > > [stack at devstack yum.repos.d]$ cat CentOS-Linux-Extras.repo > > # CentOS-Linux-Extras.repo > # > # The mirrorlist system uses the connecting IP address of the client and > the > # update status of each mirror to pick current mirrors that are > geographically > # close to the client. You should use this for CentOS updates unless you > are > # manually picking other mirrors. > # > # If the mirrorlist does not work for you, you can try the commented out > # baseurl line instead. 
> > [extras] > name=CentOS Linux $releasever - Extras > mirrorlist= > https://urldefense.com/v3/__http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra__;!!DCbAVzZNrAf4!VBdfNjMCYCHdfNC-bRTUdURIp7XctPb4TtJmDt9fjEO3oKSJyyksdsCtP6mq0pj6hRXnOHQ$ > #baseurl= > https://urldefense.com/v3/__http://mirror.centos.org/$contentdir/$releasever/extras/$basearch/os/__;!!DCbAVzZNrAf4!VBdfNjMCYCHdfNC-bRTUdURIp7XctPb4TtJmDt9fjEO3oKSJyyksdsCtP6mq0pj6ilFPzes$ > gpgcheck=1 > enabled=1 > gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial > > > Matthias > -- > Matthias Runge > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From bxzhu_5355 at 163.com Tue Jun 15 07:49:00 2021 From: bxzhu_5355 at 163.com (Boxiang Zhu) Date: Tue, 15 Jun 2021 15:49:00 +0800 (GMT+08:00) Subject: [cinder] revert volume to snapshot Message-ID: <782fa353.71d3.17a0ea5201f.Coremail.bxzhu_5355@163.com> Hi, There is a restful api[1] to revert volume to snapshot. But the description means we can only use this api to revert volume to its latest snapshot. Are some drivers limited to rolling back only to the latest snapshot? Or just nobody helps to improve the api to revert volume to any snapshots of the volume? [1] https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=revert-volume-to-snapshot-detail#revert-volume-to-snapshot Thanks, Boxiang -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Jun 15 08:02:43 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 15 Jun 2021 09:02:43 +0100 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: On Tue, 15 Jun 2021 at 06:51, Tony Pearce wrote: > > Hi Mark, > > I had never used the "pip install ." method. Maybe a miscomprehension on my side, from the documentation [1] there are three ways to install kayobe. I had opted for the first way which is "pip install kayobe" since January 2020. The understanding was as conveyed in the doc "Installing from PyPI ensures the use of well used and tested software". That is true, but since Wallaby has not been released for Kayobe yet, it is not on PyPI. If you do install from PyPI, I would advise using a version constraint to ensure you get the release series you need. > > I have since followed your steps in your mail which is the installation from source. I had new problems: > > During ansible bootstrap: > During ansible host bootstrap it errors out and says the kolla_ansible is not found and needs to be installed in the same virtual environment. In all previous times, I had understood that kolla ansible is installed by kayobe at this point. I eventually done "pip install kolla-ansible" and it seemed to take care of that and allowed me to move on to "host configure" Kolla Ansible should be installed automatically during 'kayobe control host bootstrap', in a separate virtualenv from Kayobe. You should not need to install it manually, and I would again advise against doing so without version constraints. > > During host configure: > I was able to get past the previous python issue but then it failed on the network due to a "duplicate bond name", though this config was deployed successfully in Train. I dont think I really need a bond at this point so I deleted the bond and the host configure is now successful. (fyi this is an all-in-one host.) > > During kayobe service deploy: > This then fails with "no module named docker" on the host. 
To troubleshoot this I logged into the host and activated the kayobe virtual env (/opt/kayobe/venvs/kayobe/bin/activate) and then "pip install docker". It was already installed. Eventually, I issued "pip install --ignore-installed docker" within these three (environment) locations which resolved this and allowed the kayobe command to complete successfully and progress further: > - /opt/kayobe/venvs/kayobe/ > - /opt/kayobe/venvs/kolla-ansible/ > - native on the host after deactivating the venv. > > Now the blocker is the following failure; > > TASK [nova-cell : Waiting for nova-compute services to register themselves] ********************************************************************************************** > FAILED - RETRYING: Waiting for nova-compute services to register themselves (20 retries left). > FAILED - RETRYING: Waiting for nova-compute services to register themselves (19 retries left). > > I haven't seen this one before but previously I had seen something similar with mariadb because the API dns was not available. What I have been using here is a /etc/hosts entry for this. I checked that this entry is available on the host and in the nova containers. I decided to reboot the host anyway (previously resolved similar mariadb issue) to restart the containers just in case the dns was not available in one of them and I missed it. I'd check the nova compute logs here, to find why they are not registering themselves. > > Unfortunately I now have two additional issues which are hard blockers: > 1. The network is no longer working on the host after reboot, so I am unable to ssh > 2. The user password has been changed by kayobe, so I am unable to login using the console > > Due to the above, I am unable to login to the host to investigate or remediate. Previously when this happened with centos I could use the root user to log in. This time around as it's ubuntu I do not have a root user. > The user I am using for both "kolla_ansible_user" and "kayobe_ansible_user" is the same - is this causing a problem with Victoria and Wallaby? I had this user password change issue beginning with Victoria. > > So at this point I need to re-install the host and go back to the host configure before service deploy. > > Summary > Any guidance is well appreciated as I'm at a loss at this point. Last week I had a working Openstack Train deployment in a single host. "Kayobe" stopped working (maybe because I had previously always used pip install kayobe). > > I would like to deploy Wallaby, should I be able to successfully do this today or should I be using Victoria at the moment (or even, Train)? We are very close to release of Wallaby, and I expect that it should generally work, but Ubuntu is a new distro for Kayobe, and Wallaby is a new release. There may be teething problems, so if you're looking for something more stable then I'd suggest CentOS & Victoria. > > [1] OpenStack Docs: Installation > > Regards, > > Tony Pearce > > > On Mon, 14 Jun 2021 at 18:36, Mark Goddard wrote: >> >> On Mon, 14 Jun 2021 at 09:40, Tony Pearce wrote: >> > >> > Hi Mark, >> > >> > I followed this guide to do a "git clone" specifying the branch "-b" to "stable/wallaby" [1]. What additional steps do I need to do to get the latest commits? >> >> That should be sufficient. When you install it via pip, note that 'pip >> install kayobe' will still pull from PyPI, even if there is a local >> kayobe directory. Use ./kayobe, or 'pip install .' if in the same >> directory. 
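To make that distinction concrete (branch and URL shown only as an example of a source install):

~~~
# pulls the latest release published on PyPI, ignoring any local checkout
pip install kayobe

# installs the local stable/wallaby checkout instead
git clone -b stable/wallaby https://opendev.org/openstack/kayobe.git
cd kayobe
pip install .
~~~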
>> >> Mark >> > >> > [1] OpenStack Docs: Overcloud >> > >> > Kind regards, >> > >> > Tony Pearce >> > >> > >> > On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: >> >> >> >> On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: >> >> > >> >> > Hi Pierre, thanks for replying to my message. >> >> > >> >> > To install kayobe I followed the documentation which summarise: installing a few system packages and setting up the kayobe virtual environment and then pulling the correct kayobe git version for the openstack to be installed. After configuring the yaml files I have run these commands: >> >> > >> >> > - kayobe control host bootstrap >> >> > - kayobe overcloud host configure -> this one is failing with /usr/libexec/platform-python: not found >> >> > >> >> > After reading your message on the weekend I concluded that maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git and manually transferred the configuration over to the new directory structure on the ansible host and set up again as per the guide but the same issue is seen. >> >> > >> >> > What I ended up doing to try and resolve was finding where this "platform-python" is coming from. It is coming from the virtual environment which is being set up during the kayobe ansible host bootstrap. Initially, I found the base.yml and it looks like it tries to match what the host is. I noticed that there is no ubuntu 20 listed there so I created it however it did not resolve the issue. >> >> > >> >> > So then I tried systematically replacing this reference in the other files found in the same location "venvs\kayobe\share\kayobe\ansible". The file I changed which allowed it to progress is "kayobe-target-venv.yml" >> >> > >> >> > But unfortunately it fails a bit further on, failing to find an selinux package [1] >> >> > >> >> > Seeing as the error is mentioning selinux (a RedHat security feature not installed on ubuntu) could the root cause issue be that kayobe is not matching the host as ubuntu? I did already set in kayobe that I am using ubuntu OS distribution within globals.yml [2]. >> >> > >> >> > Are there any extra steps that I need to complete that maybe are not listed in the documentation / guide? >> >> > >> >> > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest network package - Pastebin.com >> >> > [2] ---# Kayobe global configuration.######################################### - Pastebin.com >> >> >> >> Hi Tony, >> >> >> >> That's definitely not a recent Wallaby checkout you're using. Ubuntu >> >> no longer uses that MichaelRigart.interfaces role. Check that you have >> >> recent commits. Here is the most recent on stable/wallaby: >> >> 13169077aaec0f7a28ae1f15b419dafc2456faf7. >> >> >> >> Mark >> >> >> >> > >> >> > Regards, >> >> > >> >> > Tony Pearce >> >> > >> >> > >> >> > >> >> > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote: >> >> >> >> >> >> Hi Tony, >> >> >> >> >> >> Kayobe doesn't use platform-python anymore, on both stable/wallaby and >> >> >> stable/victoria: >> >> >> https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 >> >> >> >> >> >> Can you double-check what version you are using, and share how you >> >> >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. >> >> >> >> >> >> Best wishes, >> >> >> Pierre >> >> >> >> >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: >> >> >> > >> >> >> > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. 
I'm getting an error that python is not found during the host configure part. >> >> >> > >> >> >> > PLAY [Verify that the Kayobe Ansible user account is accessible] >> >> >> > TASK [Verify that a command can be executed] >> >> >> > >> >> >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} >> >> >> > >> >> >> > Python3 is installed on the host. When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: >> >> >> > >> >> >> > $ grep -rni -e "platform-python" >> >> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python >> >> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python >> >> >> > >> >> >> > I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. >> >> >> > >> >> >> > Do I need to do further steps to support the ubuntu overcloud host? I have already set (as per the doc): >> >> >> > >> >> >> > os_distribution: ubuntu >> >> >> > os_release: focal >> >> >> > >> >> >> > Regards, >> >> >> > >> >> >> > Tony Pearce >> >> >> > From smooney at redhat.com Tue Jun 15 10:38:43 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 15 Jun 2021 11:38:43 +0100 Subject: [neutron][nova][placement] bug 1926693: What would be the reasonable solution ? In-Reply-To: References: Message-ID: On Tue, 2021-06-15 at 09:17 +0900, Takashi Kajinami wrote: > Thank you all for your additional thoughts. > > Because I've not received very strong objections about existing two > patches[1][2], > I updated these patches to resolve conflicts between these patches. >   [1] https://review.opendev.org/c/openstack/neutron/+/763563 > >   [2] https://review.opendev.org/c/openstack/neutron/+/788893 >   > I made the patch to add default hypervisor name as base one because > it doesn't > change behavior and would be "safe" for backports. So far we have > received positive > feedback about fixing compatibility with libvirt (in master) but I'll > create a backport > of that change as well to ask some feedback about its profit and risk > for backport. > > I think strategy is now clear with this feedback but please feel free > to put your > thoughts in this thread or the above patches. > > > if we want to "fix" this in neutron then neutron should either try > > looking up the RP using the host name and then fall back to using > the > > fqdn or we should look at using the hypervior api as we discussed a > few > > years ago when this last came up > > http://lists.openstack.org/pipermail/openstack-discuss/2019- > November/011044.html > > I feel like this discussion would be a good chance to revisit the > requirement of basic client > implementation for placement. (or abstraction layer like castellan) > Currently each components like nova, neutron, and cyborg(?) have > their own placement > client implementation (and logic to query resource providers) but IMO > it is more efficient > if we can maintain the common client implementation instead. 
it may be useful in a form of placement-lib this is not somethign that coudl have been adress in a common client however as for example ironic or other clustered driver have 1 compute service but multipel resouce provider per compute service so we cant always assume 1:1 mappings. its why we cant use conf.HOST in the general case altough we could have used it for libvirt. > > > for many deployment that do not set the fqdn as the canonical host > name > > in /etc/host the current default behavior works out of the box > > whatever solution we take we need to ensure that no existing > deployment > > is affected by the change which means we cannot default to only > using > > the fqdn or similar as that would be an upgrade breakage so we have > > to maintain the current behavior by default and enhance neutron to > > either fall back to the fqdn if the hostname based lookup fails or > use > > the new config intoduc ed by takashi's patch where the fqdn is used > as > > the server canonical hostname. > Thank you for pointing this out. To be clear, the behavior change I > proposed[2] doesn't > break any deployment with libvirt but would break deployments with > non-libvirt drivers. > This point should be considered when reviewing that change. So far > most of the feedback > I received is that it is preferred to fix compatibility with libvirt > as it's the "default" option > but please share your thoughts on the patch. ok there are 3 sets of name that are likely to be used the hostname, the fqdn, and the value of conf.HOST conf.HOST default to the hostname. if we are to enhance the default behavior i think we should just implement a fallback behavior which would check all 3 values if they are distinct i.e. lookup by hostname, if that fails lookup by fqdn, if that fails lookup by conf.HOST if and only if it not the same as the hostname(its default value) or the fqdn. it would be unusual fo rthe conf.host to not match the hostname or fqdn but it does happen for example if you are rinning multiple virt driver on the same host wehn you deploy say libvirt and ironic on the same host or you use the fake dirver for scale testing. > > > On Mon, Jun 14, 2021 at 7:30 PM Sean Mooney > wrote: > > On Sat, 2021-06-12 at 00:46 +0900, Takashi Kajinami wrote: > > > On Fri, Jun 11, 2021 at 8:48 PM Oliver Walsh > > wrote: > > > > Hi Takashi, > > > > > > > > On Thu, 10 Jun 2021 at 15:06, Takashi Kajinami > > > > > > wrote: > > > > > Hi All, > > > > > > > > > > > > > > > I've been working on bug 1926693[1], and am lost about the > > > > > reasonable > > > > > solutions we expect. Ideally I'd need to bring this topic in > > the > > > > > team meeting > > > > > but because of the timezone gap and complicated background, > > I'd > > > > > like to > > > > > gather some feedback in ml first. > > > > > > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1926693 > > > > > > > > > > TL;DR > > > > >  Which one(or ones) would be reasonable solutions for this > > issue ? > > > > >   (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > >   (2) https://review.opendev.org/c/openstack/neutron/+/788893 > > > > >   (3) Implement something different > > > > > > > > > > The issue I reported in the bug is that there is an > > inconsistency > > > > > between > > > > > nova and neutron about the way to determine a hypervisor > > name. 
> > > > > Currently neutron uses socket.gethostname() (which always > > returns > > > > > shortname) > > > > > > > > > > > > > > > > > socket.gethostname() can return fqdn or shortname -   > > > > > > https://docs.python.org/3/library/socket.html#socket.gethostname. > > > > > > > > > > You are correct and my statement was not accurate. > > > So socket.gethostname() returns what is returned by gethostname > > system > > > call, > > > and gethostname/sethostname accept both FQDN and short name, > > > socket.gethostname() > > > can return one of FQDN or short name. > > > > > > However the root problem is that this logic is not completely > > same as > > > the ones used > > > in each virt driver. Of cause we can require people the "correct" > > > format usage for > > > canonical name as well as "hostname", but fixthing this problem > > in > > > neutron would > > > be much more helpful considering the effect caused by enforcing > > users > > > to "fix" > > > hostname/canonical name formatting at this point. > > this is not really something that can be fixed in neutron > > we can either create a common funciton in oslo.utils or placement- > > lib > > that we can use in nova, neutron and all other project or we can > > use > > the config option. > > > > if we want to "fix" this in neutron then neutron should either try > > looking up the RP using the host name and then fall back to using > > the > > fqdn or we shoudl look at using the hypervior api as we discussed a > > few > > years ago when this last came up > > > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html > > > > i dont think neutron shoudl know anything about hyperviors so i > > would > > just proceed with the new config option that takashi has proposed > > but i > > would not implemente Rodolfo's solution of adding a > > hypervisor_type. > > > > just as nova has no awareness of the neutron backend and trys to > > treat > > all fo them the same neutron should remain hypervior independent > > and we > > should look to provide common code that can be reused to identify > > the > > RP in a seperate lib as a longer term solution. > > > > for many deployment that do not set the fqdn as the canonical host > > name > > in /etc/host the current default behavior works out of the box > > whatever solution we take we need to ensure that no existing > > deployment > > is affected by the change which means we cannot default to only > > using > > the fqdn or similar as that would be an upgrade breakage so we have > > to maintain the current behavior by default and enhance neutron to > > either fall back to the fqdn if the hostname based lookup fails or > > use > > the new config intoduc ed by takashi's patch where the fqdn is used > > as > > the server canonical hostname. > > >   > > > > I've seen cases where it switched from short to fqdn but I'm > > not sure > > > > of the root cause - DHCP lease setting a hostname/domainname > > perhaps. > > > > > > > > Thanks, > > > > Ollie > > > > > > > > > to determine a hypervisor name to search the corresponding > > resource > > > > > provider. > > > > > On the other hand, nova uses libvirt's getHostname function > > (if > > > > > libvirt driver is used) > > > > > which returns a canonical name. Canonical name can be > > shortname or > > > > > FQDN (*1) > > > > > and if FQDN is used then neutron and nova never agree. > > > > > > > > > > (*1) > > > > > IMO this is likely to happen in real deployments. For > > example, > > > > > TripelO uses > > > > > FQDN for canonical names.   
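A quick way to check whether a particular compute node is affected by this mismatch (rough commands, assuming the libvirt driver and the osc-placement CLI plugin are available):

~~~
# what neutron derives today vs. the FQDN vs. what libvirt reports
python3 -c 'import socket; print(socket.gethostname())'
hostname -f
virsh hostname
# the resource provider created by nova-compute should match one of these
openstack resource provider list --name "$(virsh hostname)"
~~~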
> > > > > > > > > > > > > > > Neutron already provides the > > resource_provider_defauly_hypervisors > > > > > option > > > > > to override a hypervisor name used. However because this > > option > > > > > accepts > > > > > a map between interface and hypervisor, setting this > > parameter > > > > > requires > > > > > very redundant description especially when a compute node has > > > > > multiple > > > > > interfaces/bridges. The following example shows how redundant > > the > > > > > current > > > > > requirement is. > > > > > ~~~ > > > > > [OVS] > > > > > resource_provider_bandwidths=br-data1:1024:1024,br- > > > > > data2:1024:1024,\ > > > > > br-data3:1024,1024,br-data4,1024:1024 > > > > > resource_provider_hypervisors=br-data1:compute0.mydomain,br- > > data2:\ > > > > > compute0.mydomain,br-data3:compute0.mydomain,br- > > > > > data4:compute0.mydomain > > > > > ~~~ > > > > > > > > > > I've submitted a change to propose a new single parameter to > > > > > override > > > > > the base hypervisor name but this is currently -2ed, mainly > > because > > > > > I lacked analysis about the root cause of mismatch when I > > proposed > > > > > this. > > > > >  (1) https://review.opendev.org/c/openstack/neutron/+/763563 > > > > > > > > > > > > > > > On the other hand, I submitted a different change to neutron > > which > > > > > implements > > > > > the logic to get a hypervisor name which is fully compatible > > with > > > > > libvirt. > > > > > While this would save users from even overriding hypervisor > > names, > > > > > I'm aware > > > > > that this might break the other virt driver which depends on > > a > > > > > different logic > > > > > to generate a hypervisor name. IMO the patch is still useful > > > > > considering > > > > > the libvirt driver would be the most popular option now, but > > I'm > > > > > not fully > > > > > aware of the impact on the other drivers, especially because > > I > > > > > don't know > > > > > which virt driver would support the minimum QoS feature now. > > > > >  (2) https://review.opendev.org/c/openstack/neutron/+/788893/ > > > > > > > > > > > > > > > In the review of (2), Sean mentioned implementing a logic to > > > > > determine > > > > > an appropriate resource provider(3) even if there is a > > mismatch > > > > > about > > > > > host name format, but I'm not sure how I would implement > > that, tbh. > > > > > > > > > > > > > > > My current thought is to merge (1) as a quick solution first, > > and > > > > > discuss whether > > > > > we should merge (2), but I'd like to ask for some feedback > > about > > > > > this plan > > > > > (like we should NOT merge (2)). > > > > > > > > > > I'd appreciate your thoughts about this $topic. > > > > > > > > > > Thank you, > > > > > Takashi > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Jun 15 10:54:00 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 15 Jun 2021 05:54:00 -0500 Subject: [cinder] revert volume to snapshot In-Reply-To: <782fa353.71d3.17a0ea5201f.Coremail.bxzhu_5355@163.com> References: <782fa353.71d3.17a0ea5201f.Coremail.bxzhu_5355@163.com> Message-ID: <20210615105400.GA8236@sm-workstation> > There is a restful api[1] to revert volume to snapshot. But the description means > we can only use this api to revert volume to its latest snapshot. > > > Are some drivers limited to rolling back only to the latest snapshot? Or just nobody > helps to improve the api to revert volume to any snapshots of the volume? 
> This is partly due to the rollback abilities of each type of storage. Some types can't revert to several snapshots back without losing the more recent snapshots. This means that Cinder would still think there are snapshots available, but those snapshots would no longer be present on the storage device. This is considered a data loss condition, so we need to protect against that from happening. It's been discussed several times at Design Summits and PTGs, and at least so far there has not been a good way to handle it. The best recommendation we can give is for anyone that needs to go back several snapshots, you will need to revert one snapshot at a time to get back to where you need to be. But it is also worth pointing out that snapshots, and the ability to revert to snapshots, is not necessarily the best mechanism for data protection. If you need to have the ability to restore a volume back to its earlier state, using the backup/restore APIs are likely the more appropriate way to go. Sean From antonio.paulo at cern.ch Tue Jun 15 11:39:18 2021 From: antonio.paulo at cern.ch (=?UTF-8?Q?Ant=c3=b3nio_Paulo?=) Date: Tue, 15 Jun 2021 13:39:18 +0200 Subject: [nova] GPU VMs using MIG? In-Reply-To: References: <803dae06-8317-27f4-42ac-365f72ff31f4@cern.ch> Message-ID: <96350c31-c656-c523-6649-54795863fa16@cern.ch> I see, thank you for the reply. Even if it does not make sense for MIG to be supported/documented upstream if someone does come across MIG+OpenStack not backed by virtual GPUs do ping me please :-) I'll be trying to get this working when some cards arrive. Cheers, António On 14/06/21 18:01, Sylvain Bauza wrote: > > > On Mon, Jun 14, 2021 at 4:37 PM António Paulo > wrote: > > Hi! > > Has anyone looked into instancing VMs with NVIDIA's Multi-Instance GPU > (MIG) devices [1] without having to rely on vGPUs? Unfortunately, NVIDIA > vGPUs lack tracing and profiling support that our users need. > > I could not find anything specific to MIG in the OpenStack docs but I > was wondering if doing PCI passthrough [2] of MIG devices is an option > that someone has seen or tested? > > Maybe some massaging to expose the MIG as a Linux device is required > [3]? > > > Nividia MIG feature is orthogonal to virtual GPUs and hardware dependent. > As the latter, this is not really something we can "support" upstream as > our upstream CI can't just verify it. > > Some downstream vendors tho have work efforts for trying to test this > with their own solutions but again, not something we can discuss it here. > > Cheers, > António > > [1] https://docs.nvidia.com/datacenter/tesla/mig-user-guide/ > > [2] https://docs.openstack.org/nova/pike/admin/pci-passthrough.html > > [3] > https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#device-nodes > From stephenfin at redhat.com Tue Jun 15 14:18:45 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 15 Jun 2021 15:18:45 +0100 Subject: AW: AW: Customization of nova-scheduler In-Reply-To: <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> Message-ID: On Thu, 2021-06-10 at 17:21 +0200, levonmelikbekjan at yahoo.de wrote: > Hi Stephen, > > I'm trying to customize my nova scheduler. 
However, if I change the nova.conf as it is written here https://docs.openstack.org/operations-guide/de/ops-customize-compute.html, then my python file cannot be found. How can I configure it correctly? > > Do you have any idea? > > My controller node is running with CENTOS 7. I couldn't install devstack because it is only supported for CENTOS 8 version. That document is very old. You want [1], which documents how to do this properly. Hope this helps, Stephen [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > Best regards > Levon > > -----Ursprüngliche Nachricht----- > Von: Stephen Finucane > Gesendet: Montag, 31. Mai 2021 18:21 > An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org > Betreff: Re: AW: Customization of nova-scheduler > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > Hello Stephen, > > > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. > > > > Two cases should be solved! > > > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. > > What you're describing is commonly referred to as "preemptible" or "spot" > instances. This topic has a long, complicated history in nova and has yet to be implemented. Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova. > > > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? > > As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art. As noted above, there is lots of information on the internet about this. With this research done, you'll need to decide whether this is something you want to approach within nova itself, via out-of-tree extensions or via a third party project. If you're opting for integration with nova, then you'll need to think long and hard about how you would design such a system and start working on a spec (a design document) outlining your proposed solution. 
Details on how to write a spec are discussed at [1]. The only extension points nova offers today are scheduler filters and weighers so your options for an out-of-tree extension approach will be limited. A third party project will arguably be the easiest approach but you will be restricted to talking to nova's REST APIs which may limit the design somewhat. This Blazar spec [2] could give you some ideas on this approach (assuming it was never actually implemented, though it may well have been). > > > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? > > This could potentially work, but I suspect there will be serious performance implications with this, particularly at scale. Scheduler filters are historically used for simple things like "find me a group of hosts that have this metadata attribute I set on my image". Making API calls sounds like something that would take significant time and therefore slow down the schedule process. You'd also have to decide what your heuristic for deciding which VM(s) to delete would be, since there's nothing obvious in nova that you could use. > You could use something as simple as filter extra specs or something as complicated as an external service. > > This should be lots to get you started. Once again, do make sure you're aware of what you're getting yourself into before you start. This could get complicated very quickly :) > > Cheers, > Stephen > > > I'm very happy to have found you!!! > > > > Thank you really much for your time! > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > > Best regards > > Levon > > > > -----Ursprüngliche Nachricht----- > > Von: Stephen Finucane > > Gesendet: Montag, 31. Mai 2021 12:34 > > An: Levon Melikbekjan ; > > openstack at lists.openstack.org > > Betreff: Re: Customization of nova-scheduler > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > Hello Openstack team, > > > > > > is it possible to customize the nova-scheduler via Python? If yes, how? > > > > Yes, you can provide your own filters and weighers. This is documented at [1]. 
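As a concrete illustration of that filter extension point (a toy sketch only — the class name, extra spec key and host-naming rule are all invented), a custom filter is just a small Python class which nova.conf then has to reference via the [filter_scheduler] available_filters / enabled_filters options:

~~~
from nova.scheduler import filters


class HighPriorityFlavorFilter(filters.BaseHostFilter):
    """Toy filter: keep hosts named '*-reserved' for high-priority flavors."""

    # the host properties checked here do not change within one request
    run_filter_once_per_request = True

    def host_passes(self, host_state, spec_obj):
        wants_reserved = spec_obj.flavor.extra_specs.get('custom:priority') == 'high'
        is_reserved_host = host_state.host.endswith('-reserved')
        # high-priority requests land only on reserved hosts, and reserved
        # hosts are kept free for them
        return wants_reserved == is_reserved_host
~~~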
> > > > Hope this helps, > > Stephen > > > > [1] > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > > our-own-filter > > > > > > > > Best regards > > > Levon > > > > > > > > > From smooney at redhat.com Tue Jun 15 14:36:56 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 15 Jun 2021 15:36:56 +0100 Subject: AW: AW: Customization of nova-scheduler In-Reply-To: References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> Message-ID: On Tue, 2021-06-15 at 15:18 +0100, Stephen Finucane wrote: > On Thu, 2021-06-10 at 17:21 +0200, levonmelikbekjan at yahoo.de wrote: > > Hi Stephen, > > > > I'm  trying to customize my nova scheduler. However, if I change the > > nova.conf as it is written here > > https://docs.openstack.org/operations-guide/de/ops-customize-compute.html > > , then my python file cannot be found. How can I configure it > > correctly? > > > > Do you have any idea? > > > > My controller node is running with CENTOS 7. I couldn't install > > devstack because it is only supported for CENTOS 8 version. > > That document is very old. You want [1], which documents how to do this > properly. wwell that depend if they acatully want to write ther own filter yes but if they want to replace the scheduler with a new one we recently removed support for that right. previously we had several schduler implemtation like the caching scheduler and that old doc https://docs.openstack.org/operations-guide/de/ops-customize-compute.html descibes on how to replace the filter scheduler dirver with an new one. we deprecated it ussuri https://github.com/openstack/nova/commit/6a4cb24d39623930fd240e67d65013803459839d and you finally removed the extention point in febuary https://github.com/openstack/nova/commit/5aeb3a387494c4559d183d1290db3c92a96dfb90 so from wallaby on you can nolonger write an alternitvie schduler implemenation out of tree without reverting that. so yes https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter is how you customise schduling now but you cant customise the schduler itself out fo tree anymore. > > Hope this helps, > Stephen > > [1] > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > > > Best regards > > Levon > > > > -----Ursprüngliche Nachricht----- > > Von: Stephen Finucane > > Gesendet: Montag, 31. Mai 2021 18:21 > > An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org > > Betreff: Re: AW: Customization of nova-scheduler > > > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > > Hello Stephen, > > > > > > I am a student from Germany who is currently working on his > > > bachelor thesis. My job is to build a cloud solution for my > > > university with Openstack. The functionality should include the > > > prioritization of users. So that you can imagine exactly how the > > > whole thing should work, I would like to give you an example. > > > > > > Two cases should be solved! > > > > > > Case 1: A user A with a low priority uses a VM from Openstack with > > > half performance of the available host. Then user B comes in with a > > > high priority and needs the full performance of the host for his > > > VM. 
When creating the VM of user B, the VM of user A should be > > > deleted because there is not enough compute power for user B. The > > > VM of user B is successfully created. > > > > > > Case 2: A user A with a low priority uses a VM with half the > > > performance of the available host, then user B comes in with a high > > > priority and needs half of the performance of the host for his VM. > > > When creating the VM of user B, user A should not be deleted, since > > > enough computing power is available for both users. > > > > > > These cases should work for unlimited users. In order to optimize > > > the whole thing, I would like to write a function that precisely > > > calculates all performance components to determine whether enough > > > resources are available for the VM of the high priority user. > > > > What you're describing is commonly referred to as "preemptible" or > > "spot" > > instances. This topic has a long, complicated history in nova and has > > yet to be implemented. Searching for "preemptible instances > > openstack" should yield you lots of discussion on the topic along > > with a few proof-of-concept approaches using external services or > > out-of-tree modifications to nova. > > > > > I’m new to Openstack, but I’ve already implemented cloud projects > > > with Microsoft Azure and have solid programming skills. Can you > > > give me a hint where and how I can start? > > > > As hinted above, this is likely to be a very difficult project given > > the fraught history of the idea. I don't want to dissuade you from > > this work but you should be aware of what you're getting into from > > the start. If you're serious about pursuing this, I suggest you first > > do some research on prior art. As noted above, there is lots of > > information on the internet about this. With this research done, > > you'll need to decide whether this is something you want to approach > > within nova itself, via out-of-tree extensions or via a third party > > project. If you're opting for integration with nova, then you'll need > > to think long and hard about how you would design such a system and > > start working on a spec (a design document) outlining your proposed > > solution. Details on how to write a spec are discussed at [1]. The > > only extension points nova offers today are scheduler filters and > > weighers so your options for an out-of-tree extension approach will > > be limited. A third party project will arguably be the easiest > > approach but you will be restricted to talking to nova's REST APIs > > which may limit the design somewhat. This Blazar spec [2] could give > > you some ideas on this approach (assuming it was never actually > > implemented, though it may well have been). > > > > > My university gave me three compute hosts and one control host to > > > implement this solution for the bachelor thesis. I’m currently > > > setting up Openstack and all the services on the control host all > > > by myself to understand all the functionality (sorry for not using > > > Packstack) 😉. All my hosts have CentOS 7 and the minimum > > > deployment which I configure is Train. > > > > > > My idea is to work with nova schedulers, because they seem to be > > > interesting for my case. I've found a whole infrastructure > > > description of the provisioning of an instance in Openstack > > > https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png > > > .  
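A very rough sketch of the check-and-preempt idea described in these two cases, using openstacksdk purely for illustration (the 'priority' metadata key is invented, and real capacity accounting would have to come from placement rather than a client-side guess):

~~~
import openstack

conn = openstack.connect(cloud='mycloud')   # cloud name is illustrative


def pick_preemption_victims(how_many=1):
    """Return the newest servers tagged as low priority."""
    low = [s for s in conn.compute.servers(details=True)
           if s.metadata.get('priority', 'low') == 'low']
    low.sort(key=lambda s: s.created_at, reverse=True)
    return low[:how_many]


for server in pick_preemption_victims():
    # free capacity for the pending high-priority request
    conn.compute.delete_server(server)
~~~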
> > > > > > The nova scheduler > > > https://docs.openstack.org/operations-guide/ops-customize-compute.html > > >  is the first component, where it is possible to implement > > > functions via Python and the Compute API > > > https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail > > >  to check for active VMs and probably delete them if needed before > > > a successful request for an instantiation can be made. > > > > > > What do you guys think about it? Does it seem like a good starting > > > point for you or is it the wrong approach? > > > > This could potentially work, but I suspect there will be serious > > performance implications with this, particularly at scale. Scheduler > > filters are historically used for simple things like "find me a group > > of hosts that have this metadata attribute I set on my image". Making > > API calls sounds like something that would take significant time and > > therefore slow down the schedule process. You'd also have to decide > > what your heuristic for deciding which VM(s) to delete would be, > > since there's nothing obvious in nova that you could use. > > You could use something as simple as filter extra specs or something > > as complicated as an external service. > > > > This should be lots to get you started. Once again, do make sure > > you're aware of what you're getting yourself into before you start. > > This could get complicated very quickly :) > > > > Cheers, > > Stephen > > > > > I'm very happy to have found you!!! > > > > > > Thank you really much for your time! > > > > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > > [2] > > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > > > > > Best regards > > > Levon > > > > > > -----Ursprüngliche Nachricht----- > > > Von: Stephen Finucane > > > Gesendet: Montag, 31. Mai 2021 12:34 > > > An: Levon Melikbekjan ; > > > openstack at lists.openstack.org > > > Betreff: Re: Customization of nova-scheduler > > > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > > Hello Openstack team, > > > > > > > > is it possible to customize the nova-scheduler via Python? If > > > > yes, how? > > > > > > Yes, you can provide your own filters and weighers. This is > > > documented at [1]. > > > > > > Hope this helps, > > > Stephen > > > > > > [1] > > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > > > our-own-filter > > > > > > > > > > > Best regards > > > > Levon > > > > > > > > > > > > > > > > > From gmann at ghanshyammann.com Tue Jun 15 14:37:42 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 15 Jun 2021 09:37:42 -0500 Subject: 3rd party CI failures with devstack 'master' using devstack-gate In-Reply-To: <6fc4dc79-2083-4cf3-9ca8-ef6e1dd0ca5d@www.fastmail.com> References: <6806626.31r3eYUQgx@whitebase.usersys.redhat.com> <14936534.uLZWGnKmhe@whitebase.usersys.redhat.com> <179ebf99f29.d451fbf0365691.4329366033312889323@ghanshyammann.com> <6fc4dc79-2083-4cf3-9ca8-ef6e1dd0ca5d@www.fastmail.com> Message-ID: <17a101b4e15.f69ca44330433.7144220500254514851@ghanshyammann.com> ---- On Tue, 08 Jun 2021 12:12:11 -0500 Clark Boylan wrote ---- > On Tue, Jun 8, 2021, at 7:14 AM, Ghanshyam Mann wrote: > > ---- On Tue, 08 Jun 2021 07:42:21 -0500 Luigi Toscano > > wrote ---- > > > On Tuesday, 8 June 2021 14:11:40 CEST Fernando Ferraz wrote: > > > > Hello, > > > > > > > > The NetApp CI for Cinder also relies on Zuul v2. 
We were able to > > > > recently move our jobs to focal, but dropping devstack-gate is a > > big > > > > concern considering our team size and schedule. > > > > Luigi, could you clarify what would immediately break after xena is > > > > branched? > > > > > > > > > > For example grenade jobs won't work anymore because there won't be > > any new > > > entry related to stable/xena added here to devstack-vm-gate-wrap.sh: > > > > > > > > https://opendev.org/openstack/devstack-gate/src/branch/master/devstack-vm-gate-wrap.sh#L335 > > > > > > I understand that grenade testing is probably not relevant for 3rd > > party CIs > > > (it should be, but that's a different discussion), but the main > > point is that > > > devstack-gate is already now in almost-maintenance mode. The minimum > > amount of > > > fixed that have been merged have been used to keep working the very > > few legacy > > > jobs defined on opendev.org, and that number is basically 0 at this > > point. > > > > > > This mean that there are a ton of potential breakages happening > > anytime, and > > > the focal change is just one (and each one of you, CI owner, had to > > fix it on > > > your own). Others may come anytime and they won't be detected nor > > investigated > > > anymore because we don't have de-facto legacy jobs around since > > wallaby. > > > > > > To summarize: if you use Zuul v2, you have been running for a long > > while on an > > > unsupported software stack. The last tiny bits which could be used > > on both > > > zuulv2 and zuulv3 in legacy mode to easy the transition are > > unsupported too. > > > > > > This problem, I believe, has been communicated periodically by the > > various > > > team and the time to migrate is... last month. Please hurry up! > > > > Yes, we have done this migration in Victoria release cycle with two > > community-wide goals together > > with the direction of moving all the CI from devstack gate from wallaby > > itself. But by seeing few jobs > > and especially 3rd party CI, we extended the devstack-gate support for > > wallaby release [1]. So we > > extended the support for one more release until stable/wallaby. > > > > NOTE: supporting a extra release extend the devstack-gate support until > > that release until that become EOL, > > as we need to support that release stable CI. So it is not just a one > > more cycle support but even longer > > time of 1 year or more. > > > > Now extended the support for Xena cycle also seems very difficult by > > seeing very less number of > > contributor or less bandwidth of current core members in devstack-gate. > > > > I will plan to officially declare the devstack-gate deprecation with > > team but please move your CI/CD to > > latest Focal and to zuulv3 ASAP. > > These changes have started to go up [2]. > > I want to clarify a few things though. As far as I can remember we have never required any specific CI system or setup. What we have done are required basic behaviors from the CI system. Things like respond to "recheck", post logs in a publicly accessible location and report them back, have contacts available so we can contact you if things break, and so on. What this means is that some third party CI system are likely running Jenkins. I know others that ran some homegrown thing that watched the Gerrit event stream. We recommend Zuul and now Zuulv3 or newer because it is a tool that we understand and can provide some assistance with. 
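As a rough illustration of the kind of "homegrown thing that watched the Gerrit event stream" mentioned above, the core of such a watcher can be quite small. The account and host below are placeholders, and a real third-party CI obviously needs job execution, log publishing and vote reporting on top of this:

import json
import subprocess

# Gerrit exposes its event stream over SSH (port 29418) via the
# "gerrit stream-events" command; the account name here is hypothetical.
GERRIT_SSH = ["ssh", "-p", "29418", "my-ci-bot@review.opendev.org",
              "gerrit", "stream-events"]


def watch_for_recheck():
    proc = subprocess.Popen(GERRIT_SSH, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        event = json.loads(line)
        # A third-party CI typically reacts to new patch sets and to
        # "recheck" comments left by reviewers.
        if event.get("type") == "comment-added" and \
                "recheck" in (event.get("comment") or "").lower():
            change = event["change"]["number"]
            patchset = event["patchSet"]["number"]
            print(f"would re-run jobs for change {change},{patchset}")


if __name__ == "__main__":
    watch_for_recheck()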
> > Those that choose not to use the recommended tools are likely to need to invest in their own tooling and debugging. For devstack-gate we will not accept new patches to keep it running against master, but need to keep it around for older stable branches. If those that are running their own set of tools want to keep devstack-gate alive for modern openstack then forking it is likely the best path forward. Updates: All the patches for deprecating the devstack-gate are merged now, along with governance one: - https://review.opendev.org/c/openstack/governance/+/795385 README file has been updated with the warning and about forking way Clark mentioned above: https://opendev.org/openstack/devstack-gate/src/branch/master/README.rst -gmann > > > > > 1. > > https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html > > 2. > > https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html > > > > > > [1] > > https://review.opendev.org/c/openstack/devstack-gate/+/778129 > > https://review.opendev.org/c/openstack/devstack-gate/+/785010 > > [2] https://review.opendev.org/q/topic:%22deprecate-devstack-gate%22+(status:open%20OR%20status:merged) > From levonmelikbekjan at yahoo.de Tue Jun 15 14:59:04 2021 From: levonmelikbekjan at yahoo.de (levonmelikbekjan at yahoo.de) Date: Tue, 15 Jun 2021 16:59:04 +0200 Subject: AW: AW: AW: Customization of nova-scheduler In-Reply-To: References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> Message-ID: <000001d761f6$fc41d1a0$f4c574e0$@yahoo.de> Hi Stephen, I am already done with my solution. Everything works as expected! :) Thank you for your support. You guys are great. Best regards Levon -----Ursprüngliche Nachricht----- Von: Stephen Finucane Gesendet: Dienstag, 15. Juni 2021 16:19 An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org Betreff: Re: AW: AW: Customization of nova-scheduler On Thu, 2021-06-10 at 17:21 +0200, levonmelikbekjan at yahoo.de wrote: > Hi Stephen, > > I'm trying to customize my nova scheduler. However, if I change the novaconf as it is written here https://docs.openstack.org/operations-guide/de/ops-customize-compute.html, then my python file cannot be found. How can I configure it correctly? > > Do you have any idea? > > My controller node is running with CENTOS 7. I couldn't install devstack because it is only supported for CENTOS 8 version. That document is very old. You want [1], which documents how to do this properly. Hope this helps, Stephen [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > Best regards > Levon > > -----Ursprüngliche Nachricht----- > Von: Stephen Finucane > Gesendet: Montag, 31. Mai 2021 18:21 > An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org > Betreff: Re: AW: Customization of nova-scheduler > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > Hello Stephen, > > > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. 
> > > > Two cases should be solved! > > > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. > > What you're describing is commonly referred to as "preemptible" or "spot" > instances. This topic has a long, complicated history in nova and has yet to be implemented. Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova. > > > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? > > As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art. As noted above, there is lots of information on the internet about this. With this research done, you'll need to decide whether this is something you want to approach within nova itself, via out-of-tree extensions or via a third party project. If you're opting for integration with nova, then you'll need to think long and hard about how you would design such a system and start working on a spec (a design document) outlining your proposed solution. Details on how to write a spec are discussed at [1]. The only extension points nova offers today are scheduler filters and weighers so your options for an out-of-tree extension approach will be limited. A third party project will arguably be the easiest approach but you will be restricted to talking to nova's REST APIs which may limit the design somewhat. This Blazar spec [2] could give you some ideas on this approach (assuming it was never actually implemented, though it may well have been). > > > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. 
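The "function that precisely calculates all performance components" mentioned above could, as a first rough cut, be approximated from outside nova by asking the Compute API for per-hypervisor usage. The sketch below uses openstacksdk; the cloud name is a placeholder and the attribute names are worth double-checking against the SDK release in use (placement would give more precise numbers than the hypervisor statistics):

import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical clouds.yaml entry


def host_has_room(required_vcpus, required_ram_mb):
    # Walk the per-hypervisor usage reported by nova and check whether a
    # flavor of the given size would still fit on at least one host.
    for hv in conn.compute.hypervisors(details=True):
        free_vcpus = hv.vcpus - hv.vcpus_used
        free_ram_mb = hv.memory_size - hv.memory_used
        if free_vcpus >= required_vcpus and free_ram_mb >= required_ram_mb:
            return True
    return False


# Example: would a 4 vCPU / 8 GB flavor still fit somewhere?
print(host_has_room(4, 8192))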
> > > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? > > This could potentially work, but I suspect there will be serious performance implications with this, particularly at scale. Scheduler filters are historically used for simple things like "find me a group of hosts that have this metadata attribute I set on my image". Making API calls sounds like something that would take significant time and therefore slow down the schedule process. You'd also have to decide what your heuristic for deciding which VM(s) to delete would be, since there's nothing obvious in nova that you could use. > You could use something as simple as filter extra specs or something as complicated as an external service. > > This should be lots to get you started. Once again, do make sure > you're aware of what you're getting yourself into before you start. > This could get complicated very quickly :) > > Cheers, > Stephen > > > I'm very happy to have found you!!! > > > > Thank you really much for your time! > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > [2] > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar > -preemptible-instances.html > > > Best regards > > Levon > > > > -----Ursprüngliche Nachricht----- > > Von: Stephen Finucane > > Gesendet: Montag, 31. Mai 2021 12:34 > > An: Levon Melikbekjan ; > > openstack at lists.openstack.org > > Betreff: Re: Customization of nova-scheduler > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > Hello Openstack team, > > > > > > is it possible to customize the nova-scheduler via Python? If yes, how? > > > > Yes, you can provide your own filters and weighers. This is documented at [1]. > > > > Hope this helps, > > Stephen > > > > [1] > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing > > -y > > our-own-filter > > > > > > > > Best regards > > > Levon > > > > > > > > > From levonmelikbekjan at yahoo.de Tue Jun 15 14:59:27 2021 From: levonmelikbekjan at yahoo.de (levonmelikbekjan at yahoo.de) Date: Tue, 15 Jun 2021 16:59:27 +0200 Subject: AW: AW: AW: Customization of nova-scheduler In-Reply-To: References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> Message-ID: <000101d761f7$0a0bf090$1e23d1b0$@yahoo.de> Hi Sean, I am already done with my solution. Everything works as expected! :) Thank you for your support. You guys are great. Best regards Levon -----Ursprüngliche Nachricht----- Von: Sean Mooney Gesendet: Dienstag, 15. 
Juni 2021 16:37 An: Stephen Finucane ; levonmelikbekjan at yahoo.de; openstack at lists.openstack.org Betreff: Re: AW: AW: Customization of nova-scheduler On Tue, 2021-06-15 at 15:18 +0100, Stephen Finucane wrote: > On Thu, 2021-06-10 at 17:21 +0200, levonmelikbekjan at yahoo.de wrote: > > Hi Stephen, > > > > I'm trying to customize my nova scheduler. However, if I change the > > nova.conf as it is written here > > https://docs.openstack.org/operations-guide/de/ops-customize-compute > > .html , then my python file cannot be found. How can I configure it > > correctly? > > > > Do you have any idea? > > > > My controller node is running with CENTOS 7. I couldn't install > > devstack because it is only supported for CENTOS 8 version. > > That document is very old. You want [1], which documents how to do > this properly. wwell that depend if they acatully want to write ther own filter yes but if they want to replace the scheduler with a new one we recently removed support for that right. previously we had several schduler implemtation like the caching scheduler and that old doc https://docs.openstack.org/operations-guide/de/ops-customize-compute.html descibes on how to replace the filter scheduler dirver with an new one. we deprecated it ussuri https://github.com/openstack/nova/commit/6a4cb24d39623930fd240e67d65013803459839d and you finally removed the extention point in febuary https://github.com/openstack/nova/commit/5aeb3a387494c4559d183d1290db3c92a96dfb90 so from wallaby on you can nolonger write an alternitvie schduler implemenation out of tree without reverting that. so yes https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter is how you customise schduling now but you cant customise the schduler itself out fo tree anymore. > > Hope this helps, > Stephen > > [1] > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > our-own-filter > > > Best regards > > Levon > > > > -----Ursprüngliche Nachricht----- > > Von: Stephen Finucane > > Gesendet: Montag, 31. Mai 2021 18:21 > > An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org > > Betreff: Re: AW: Customization of nova-scheduler > > > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > > Hello Stephen, > > > > > > I am a student from Germany who is currently working on his > > > bachelor thesis. My job is to build a cloud solution for my > > > university with Openstack. The functionality should include the > > > prioritization of users. So that you can imagine exactly how the > > > whole thing should work, I would like to give you an example. > > > > > > Two cases should be solved! > > > > > > Case 1: A user A with a low priority uses a VM from Openstack with > > > half performance of the available host. Then user B comes in with > > > a high priority and needs the full performance of the host for his > > > VM. When creating the VM of user B, the VM of user A should be > > > deleted because there is not enough compute power for user B. The > > > VM of user B is successfully created. > > > > > > Case 2: A user A with a low priority uses a VM with half the > > > performance of the available host, then user B comes in with a > > > high priority and needs half of the performance of the host for his VM. > > > When creating the VM of user B, user A should not be deleted, > > > since enough computing power is available for both users. > > > > > > These cases should work for unlimited users. 
In order to optimize > > > the whole thing, I would like to write a function that precisely > > > calculates all performance components to determine whether enough > > > resources are available for the VM of the high priority user. > > > > What you're describing is commonly referred to as "preemptible" or > > "spot" > > instances. This topic has a long, complicated history in nova and > > has yet to be implemented. Searching for "preemptible instances > > openstack" should yield you lots of discussion on the topic along > > with a few proof-of-concept approaches using external services or > > out-of-tree modifications to nova. > > > > > I’m new to Openstack, but I’ve already implemented cloud projects > > > with Microsoft Azure and have solid programming skills. Can you > > > give me a hint where and how I can start? > > > > As hinted above, this is likely to be a very difficult project given > > the fraught history of the idea. I don't want to dissuade you from > > this work but you should be aware of what you're getting into from > > the start. If you're serious about pursuing this, I suggest you > > first do some research on prior art. As noted above, there is lots > > of information on the internet about this. With this research done, > > you'll need to decide whether this is something you want to approach > > within nova itself, via out-of-tree extensions or via a third party > > project. If you're opting for integration with nova, then you'll > > need to think long and hard about how you would design such a system > > and start working on a spec (a design document) outlining your > > proposed solution. Details on how to write a spec are discussed at > > [1]. The only extension points nova offers today are scheduler > > filters and weighers so your options for an out-of-tree extension > > approach will be limited. A third party project will arguably be the > > easiest approach but you will be restricted to talking to nova's > > REST APIs which may limit the design somewhat. This Blazar spec [2] > > could give you some ideas on this approach (assuming it was never > > actually implemented, though it may well have been). > > > > > My university gave me three compute hosts and one control host to > > > implement this solution for the bachelor thesis. I’m currently > > > setting up Openstack and all the services on the control host all > > > by myself to understand all the functionality (sorry for not using > > > Packstack) 😉. All my hosts have CentOS 7 and the minimum > > > deployment which I configure is Train. > > > > > > My idea is to work with nova schedulers, because they seem to be > > > interesting for my case. I've found a whole infrastructure > > > description of the provisioning of an instance in Openstack > > > https://docs.openstack.org/operations-guide/de/_images/provision-a > > > n-instance.png > > > . > > > > > > The nova scheduler > > > https://docs.openstack.org/operations-guide/ops-customize-compute. > > > html > > > is the first component, where it is possible to implement > > > functions via Python and the Compute API > > > https://docs.openstack.org/api-ref/compute/?expanded=show-details- > > > of-specific-api-version-detail,list-servers-detail > > > to check for active VMs and probably delete them if needed before > > > a successful request for an instantiation can be made. > > > > > > What do you guys think about it? Does it seem like a good starting > > > point for you or is it the wrong approach? 
> > > > This could potentially work, but I suspect there will be serious > > performance implications with this, particularly at scale. Scheduler > > filters are historically used for simple things like "find me a > > group of hosts that have this metadata attribute I set on my image". > > Making API calls sounds like something that would take significant > > time and therefore slow down the schedule process. You'd also have > > to decide what your heuristic for deciding which VM(s) to delete > > would be, since there's nothing obvious in nova that you could use. > > You could use something as simple as filter extra specs or something > > as complicated as an external service. > > > > This should be lots to get you started. Once again, do make sure > > you're aware of what you're getting yourself into before you start. > > This could get complicated very quickly :) > > > > Cheers, > > Stephen > > > > > I'm very happy to have found you!!! > > > > > > Thank you really much for your time! > > > > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > > [2] > > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blaz > > ar-preemptible-instances.html > > > > > Best regards > > > Levon > > > > > > -----Ursprüngliche Nachricht----- > > > Von: Stephen Finucane > > > Gesendet: Montag, 31. Mai 2021 12:34 > > > An: Levon Melikbekjan ; > > > openstack at lists.openstack.org > > > Betreff: Re: Customization of nova-scheduler > > > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > > Hello Openstack team, > > > > > > > > is it possible to customize the nova-scheduler via Python? If > > > > yes, how? > > > > > > Yes, you can provide your own filters and weighers. This is > > > documented at [1]. > > > > > > Hope this helps, > > > Stephen > > > > > > [1] > > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writi > > > ng-y > > > our-own-filter > > > > > > > > > > > Best regards > > > > Levon > > > > > > > > > > > > > > > > > From balazs.gibizer at est.tech Tue Jun 15 16:44:53 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 15 Jun 2021 18:44:53 +0200 Subject: [nova] spec review day In-Reply-To: References: Message-ID: Hi, Let's have another spec review day on 6th of July before the M2 spec freeze that will happen on 15th of July. The rules are the usual. Let's use this day to focus on open specs, trying to reach agreement on as many thing as possible with close cooperation during the day. Let me know if the timing does not good for you. Cheers, gibi From kennelson11 at gmail.com Tue Jun 15 16:53:20 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 15 Jun 2021 09:53:20 -0700 Subject: [TC] Open Infra Live- Open Source Governance In-Reply-To: <187a78ef-0e29-d7dd-5506-73515fb28dbd@gmail.com> References: <187a78ef-0e29-d7dd-5506-73515fb28dbd@gmail.com> Message-ID: It would be great to have both of you join! I passed your names onto Erin. She will reach out at some point soon. -Kendall (diablo_rojo) On Mon, Jun 14, 2021 at 10:57 AM Jay Bryant wrote: > > On 6/14/2021 9:45 AM, Kendall Nelson wrote: > > Hello TC Folks :) > > So I have been tasked with helping to collect a couple volunteers for our > July 29th episode of Open Infra Live (at 14:00 UTC) on open source > governance. > > I am also working on getting a couple members from the k8s steering > committee to join us that day. > > If you are interested in participating, please let me know! 
I only need > like two volunteers, but if we have more people than that dying to join in, > I am sure we can work it out. > > I can help if you need another person. Let me know. > > Jay > > Thanks! > > -Kendall Nelson (diablo_rojo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.ames at canonical.com Tue Jun 15 22:18:37 2021 From: david.ames at canonical.com (David Ames) Date: Tue, 15 Jun 2021 15:18:37 -0700 Subject: [ironic][ovn] Ironic with OVN as the SDN Message-ID: I am looking for a summary of the support or lack thereof for Ironic and OVN. Missing OVN features are explained in [0] and [1]. This bug [2] seems to imply one can run a Neutron OVS DHCP agent alongside OVN. Can I get definitive answers to the following questions: Is it possible to run neutron-dhcp-agent alongside OVN to handle the iPXE boot process? If so, is there documentation anywhere? What is the roadmap/projected timeline for OVN to support Ironic? [0] https://docs.openstack.org/neutron/latest/ovn/gaps.html [1] https://bugzilla.redhat.com/show_bug.cgi?id=1622154 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1620943 Thanks, -- David Ames OpenStack Charm Engineering From gmann at ghanshyammann.com Tue Jun 15 22:22:45 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 15 Jun 2021 17:22:45 -0500 Subject: [all] CRITICAL: Upcoming changes to the OpenStack Community IRC this weekend In-Reply-To: <179d407fa6f.f6101b54160799.6570320596784902701@ghanshyammann.com> References: <179a9b02f78.112177f7423117.4125651508104406943@ghanshyammann.com> <179c2bf0d45.e29da542226792.4648722316244189913@ghanshyammann.com> <179d407fa6f.f6101b54160799.6570320596784902701@ghanshyammann.com> Message-ID: <17a11c510c5.118eeb08d15663.6993681498866504374@ghanshyammann.com> ---- On Thu, 03 Jun 2021 17:39:23 -0500 Ghanshyam Mann wrote ---- > ---- On Mon, 31 May 2021 09:06:11 -0500 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > Updates: > > > > As you might have seen in the Fungi email reply on service-discuss ML, all the bot and logging migration is complete now. > > > > * Now onwards every discussion or meeting now needs to be done on OFTC, not on Freenode. As you can see many projects PTL started sending email on their next meeting on OFTC, please do if you have not done yet. > > > > * I have started a new etherpad for tracking all the migration tasks (all action items we collected from Wed TC meeting.). Please plan the work needed from the project team side and mark the progress. > > > > - https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc A gentle reminder to the project teams if you have not updated your project doc/wiki pages yet or mark it complete in etherpad if work is done. Projects need action: * Adjutant * Barbican * Cloudkitty * Cyborg * Freezer * Keystone * Mistral * Murano * Octavia * Openstack Charms * Openstack-Chef * Openstack-Helm * Oslo * Rally * Release Management * Requirements * Sahara * Senlin * Solum * Swift * Tripleo * Vitrage * Watcher * Winstackers * Zun -gmann > > Hello Everyone, > > There were two question in openstack-tc this morning which we discussed in today TC meeting and agreed on below points: > > 1. Backporting OFTC reference changes > > * Agreed to backport the changes as much as possible. 
> * On keeping doc/source/contributor/contributing.rst on stable branches: > ** We do not need to maintain this on stable as such, because master version of it can be referred from doc or top level CONTRIBUTING.rst > ** Fungi will add the global redirect link to master/latest version in openstack-manual. Project does not need to do this explicitly. > ** Project can remove doc/source/contributor/contributing.rst from stable branch as per their convenience > > 2. Topic change on Freenode channel > * We decided to do this on June 11th and until then continue redirecting people from old channel to OFTC. > > -gmann > > > > > -gmann > > > > > > ---- On Wed, 26 May 2021 12:19:26 -0500 Ghanshyam Mann wrote ---- > > > Greetings contributors & community members! > > > > > > With recent events, the Technical Committee held an emergency meeting today (Wednesday, May 26th, 2021) > > > regarding Freenode IRC and what our decision would be [1]. Earlier in the week, the consensus amongst the TC > > > was to gather more information from the individual projects, and make a decision from there[2]. With #rdo, > > > #ubuntu, and #wikipedia having been hijacked, the consensus amongst the TC and the community members > > > who were able to attend the meeting was to move away from Freenode as soon as possible. The TC agreed > > > that this move away from Freenode needs to be a community-wide move to the same, new IRC network for > > > all projects to avoid splintering of the community. As has been long-planned in the event of a contingency, we > > > will be moving to OFTC. > > > > > > We recognize this is a contentious topic, and ultimately we seek to ensure community continuity before evolution > > > to something beyond IRC, as many have expressed interest in doing via Mailing List discussions. At this point, we > > > had to make a decision to solve the immediate problem in the simplest and most expedient way possible, so this is > > > that announcement. We welcome continued discussion about future alternatives on the other threads. > > > > > > With this in mind, we suggest the following steps. > > > > > > Everyone: > > > ======= > > > 1. Do NOT change any channel topics to represent this change. This is likely to result in the channel being taken > > > over by Freenode and will disrupt communications within our community. > > > 2. Register your nicknames on OFTC [3][4] > > > 3. Be *prepared* to join your channels on OFTC[4]. The OpenStack community channels have already been > > > registered on OFTC and await you. > > > 4. Continue to use Freenode for OpenStack discussions until the bots have been moved and the official cut-over > > > takes place this coming weekend. We anticipate using OFTC starting Monday, May 31st. > > > > > > Projects/Project Leaders: > > > ==================== > > > 1. Projects should work to get a few volunteers to staff their project channels on Freenode, for the near future to help > > > redirect people to OFTC. This should occur via private messages to avoid a ban. > > > 2. Continue to hold project meetings on Freenode until the bots are enabled on OFTC. > > > 3. Update project wikis/documentation with the new IRC network information. We ask that you consider referring to > > > the central contributor guide[5]. > > > 4. The TC is asking that projects take advantage of this time of change to consider moving project meetings from > > > the #openstack-meeting* channels to their project channel. > > > 5. 
Please avoid discussing the move to OFTC in Freenode channels as this may also trigger a takeover of the channel. > > > > > > We are working on getting our bots over to OFTC, and they will be moved over the weekend. Starting Monday May 31, > > > the bots will be on OFTC. Communication regarding this migration will take place on OFTC[4] in #openstack-dev, and > > > we're working on updating the contributor guide[5] to reflect this migration. > > > > > > Sincerely, > > > > > > The OpenStack TC and community leaders who came together to agree on a path forward. > > > > > > [1]: https://etherpad.opendev.org/p/openstack-irc > > > [2]: https://etherpad.opendev.org/p/feedback-on-freenode > > > [3]: https://www.oftc.net/Services/#register-your-account > > > [4]: https://www.oftc.net/ > > > [5]: https://docs.openstack.org/contributors/common/irc.html > > > > > > > > > > > > From piotrmisiak1984 at gmail.com Tue Jun 15 22:54:37 2021 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Wed, 16 Jun 2021 00:54:37 +0200 Subject: [ironic][ovn] Ironic with OVN as the SDN In-Reply-To: References: Message-ID: <70256ed3-07ef-ef75-5f87-df423480247a@gmail.com> Hi David, LSP port with type 'external' is a way to go. Take a look at this bug, especially at comment#6: https://bugzilla.redhat.com/show_bug.cgi?id=1666673 AFAIK there is no support of switching networks (provision, tenant, etc) for a BM server. On 16.06.2021 00:18, David Ames wrote: > I am looking for a summary of the support or lack thereof for Ironic > and OVN. Missing OVN features are explained in [0] and [1]. This bug > [2] seems to imply one can run a Neutron OVS DHCP agent alongside OVN. > > Can I get definitive answers to the following questions: > > Is it possible to run neutron-dhcp-agent alongside OVN to handle the > iPXE boot process? If so, is there documentation anywhere? > > What is the roadmap/projected timeline for OVN to support Ironic? > > [0] https://docs.openstack.org/neutron/latest/ovn/gaps.html > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1622154 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1620943 > > > Thanks, > From laurentfdumont at gmail.com Wed Jun 16 00:54:35 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 15 Jun 2021 20:54:35 -0400 Subject: AW: AW: Customization of nova-scheduler In-Reply-To: <000101d761f7$0a0bf090$1e23d1b0$@yahoo.de> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> <000101d761f7$0a0bf090$1e23d1b0$@yahoo.de> Message-ID: Out of curiosity, how did you end up implementing your workflow? Through the scheduler or is the logic external to Openstack? On Tue, Jun 15, 2021 at 11:48 AM wrote: > Hi Sean, > > I am already done with my solution. Everything works as expected! :) > > Thank you for your support. You guys are great. > > Best regards > Levon > > -----Ursprüngliche Nachricht----- > Von: Sean Mooney > Gesendet: Dienstag, 15. Juni 2021 16:37 > An: Stephen Finucane ; levonmelikbekjan at yahoo.de; > openstack at lists.openstack.org > Betreff: Re: AW: AW: Customization of nova-scheduler > > On Tue, 2021-06-15 at 15:18 +0100, Stephen Finucane wrote: > > On Thu, 2021-06-10 at 17:21 +0200, levonmelikbekjan at yahoo.de wrote: > > > Hi Stephen, > > > > > > I'm trying to customize my nova scheduler. 
However, if I change the > > > nova.conf as it is written here > > > https://docs.openstack.org/operations-guide/de/ops-customize-compute > > > .html , then my python file cannot be found. How can I configure it > > > correctly? > > > > > > Do you have any idea? > > > > > > My controller node is running with CENTOS 7. I couldn't install > > > devstack because it is only supported for CENTOS 8 version. > > > > That document is very old. You want [1], which documents how to do > > this properly. > > wwell that depend if they acatully want to write ther own filter yes but > if they want to replace the scheduler with a new one we recently removed > support for that right. > previously we had several schduler implemtation like the caching scheduler > and that old doc > https://docs.openstack.org/operations-guide/de/ops-customize-compute.html > > descibes on how to replace the filter scheduler dirver with an new one. > we deprecated it ussuri > > https://github.com/openstack/nova/commit/6a4cb24d39623930fd240e67d65013803459839d > > and you finally removed the extention point in febuary > > https://github.com/openstack/nova/commit/5aeb3a387494c4559d183d1290db3c92a96dfb90 > so from wallaby on you can nolonger write an alternitvie schduler > implemenation out of tree without reverting that. > > so yes > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > is how you customise schduling now but you cant customise the schduler > itself out fo tree anymore. > > > > > Hope this helps, > > Stephen > > > > [1] > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > > our-own-filter > > > > > Best regards > > > Levon > > > > > > -----Ursprüngliche Nachricht----- > > > Von: Stephen Finucane > > > Gesendet: Montag, 31. Mai 2021 18:21 > > > An: levonmelikbekjan at yahoo.de; openstack at lists.openstack.org > > > Betreff: Re: AW: Customization of nova-scheduler > > > > > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > > > Hello Stephen, > > > > > > > > I am a student from Germany who is currently working on his > > > > bachelor thesis. My job is to build a cloud solution for my > > > > university with Openstack. The functionality should include the > > > > prioritization of users. So that you can imagine exactly how the > > > > whole thing should work, I would like to give you an example. > > > > > > > > Two cases should be solved! > > > > > > > > Case 1: A user A with a low priority uses a VM from Openstack with > > > > half performance of the available host. Then user B comes in with > > > > a high priority and needs the full performance of the host for his > > > > VM. When creating the VM of user B, the VM of user A should be > > > > deleted because there is not enough compute power for user B. The > > > > VM of user B is successfully created. > > > > > > > > Case 2: A user A with a low priority uses a VM with half the > > > > performance of the available host, then user B comes in with a > > > > high priority and needs half of the performance of the host for his > VM. > > > > When creating the VM of user B, user A should not be deleted, > > > > since enough computing power is available for both users. > > > > > > > > These cases should work for unlimited users. In order to optimize > > > > the whole thing, I would like to write a function that precisely > > > > calculates all performance components to determine whether enough > > > > resources are available for the VM of the high priority user. 
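Reduced to code, the decision the two cases above describe could be sketched from outside nova roughly as follows, with a capacity check like the one earlier in the thread deciding which branch applies. The "priority" metadata key and the cloud name are purely illustrative assumptions, not existing nova behaviour:

import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical clouds.yaml entry


def preempt_one_low_priority_server():
    # Case 2 needs nothing deleted; this would only run for Case 1, i.e.
    # after a capacity check has concluded that the new high-priority
    # instance does not fit. "priority" is an assumed server metadata key.
    for server in conn.compute.servers(all_projects=True):
        if server.metadata.get("priority") == "low":
            conn.compute.delete_server(server)
            return server.name
    return None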
> > > > > > What you're describing is commonly referred to as "preemptible" or > > > "spot" > > > instances. This topic has a long, complicated history in nova and > > > has yet to be implemented. Searching for "preemptible instances > > > openstack" should yield you lots of discussion on the topic along > > > with a few proof-of-concept approaches using external services or > > > out-of-tree modifications to nova. > > > > > > > I’m new to Openstack, but I’ve already implemented cloud projects > > > > with Microsoft Azure and have solid programming skills. Can you > > > > give me a hint where and how I can start? > > > > > > As hinted above, this is likely to be a very difficult project given > > > the fraught history of the idea. I don't want to dissuade you from > > > this work but you should be aware of what you're getting into from > > > the start. If you're serious about pursuing this, I suggest you > > > first do some research on prior art. As noted above, there is lots > > > of information on the internet about this. With this research done, > > > you'll need to decide whether this is something you want to approach > > > within nova itself, via out-of-tree extensions or via a third party > > > project. If you're opting for integration with nova, then you'll > > > need to think long and hard about how you would design such a system > > > and start working on a spec (a design document) outlining your > > > proposed solution. Details on how to write a spec are discussed at > > > [1]. The only extension points nova offers today are scheduler > > > filters and weighers so your options for an out-of-tree extension > > > approach will be limited. A third party project will arguably be the > > > easiest approach but you will be restricted to talking to nova's > > > REST APIs which may limit the design somewhat. This Blazar spec [2] > > > could give you some ideas on this approach (assuming it was never > > > actually implemented, though it may well have been). > > > > > > > My university gave me three compute hosts and one control host to > > > > implement this solution for the bachelor thesis. I’m currently > > > > setting up Openstack and all the services on the control host all > > > > by myself to understand all the functionality (sorry for not using > > > > Packstack) 😉. All my hosts have CentOS 7 and the minimum > > > > deployment which I configure is Train. > > > > > > > > My idea is to work with nova schedulers, because they seem to be > > > > interesting for my case. I've found a whole infrastructure > > > > description of the provisioning of an instance in Openstack > > > > https://docs.openstack.org/operations-guide/de/_images/provision-a > > > > n-instance.png > > > > . > > > > > > > > The nova scheduler > > > > https://docs.openstack.org/operations-guide/ops-customize-compute. > > > > html > > > > is the first component, where it is possible to implement > > > > functions via Python and the Compute API > > > > https://docs.openstack.org/api-ref/compute/?expanded=show-details- > > > > of-specific-api-version-detail,list-servers-detail > > > > to check for active VMs and probably delete them if needed before > > > > a successful request for an instantiation can be made. > > > > > > > > What do you guys think about it? Does it seem like a good starting > > > > point for you or is it the wrong approach? > > > > > > This could potentially work, but I suspect there will be serious > > > performance implications with this, particularly at scale. 
Scheduler > > > filters are historically used for simple things like "find me a > > > group of hosts that have this metadata attribute I set on my image". > > > Making API calls sounds like something that would take significant > > > time and therefore slow down the schedule process. You'd also have > > > to decide what your heuristic for deciding which VM(s) to delete > > > would be, since there's nothing obvious in nova that you could use. > > > You could use something as simple as filter extra specs or something > > > as complicated as an external service. > > > > > > This should be lots to get you started. Once again, do make sure > > > you're aware of what you're getting yourself into before you start. > > > This could get complicated very quickly :) > > > > > > Cheers, > > > Stephen > > > > > > > I'm very happy to have found you!!! > > > > > > > > Thank you really much for your time! > > > > > > > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > > > [2] > > > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blaz > > > ar-preemptible-instances.html > > > > > > > Best regards > > > > Levon > > > > > > > > -----Ursprüngliche Nachricht----- > > > > Von: Stephen Finucane > > > > Gesendet: Montag, 31. Mai 2021 12:34 > > > > An: Levon Melikbekjan ; > > > > openstack at lists.openstack.org > > > > Betreff: Re: Customization of nova-scheduler > > > > > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > > > Hello Openstack team, > > > > > > > > > > is it possible to customize the nova-scheduler via Python? If > > > > > yes, how? > > > > > > > > Yes, you can provide your own filters and weighers. This is > > > > documented at [1]. > > > > > > > > Hope this helps, > > > > Stephen > > > > > > > > [1] > > > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writi > > > > ng-y > > > > our-own-filter > > > > > > > > > > > > > > Best regards > > > > > Levon > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bxzhu_5355 at 163.com Wed Jun 16 02:33:14 2021 From: bxzhu_5355 at 163.com (Boxiang Zhu) Date: Wed, 16 Jun 2021 10:33:14 +0800 (CST) Subject: =?GBK?Q?=BB=D8=B8=B4:Re:_[cinder]_revert_volume_to_snapshot?= In-Reply-To: <20210615105400.GA8236@sm-workstation> References: <782fa353.71d3.17a0ea5201f.Coremail.bxzhu_5355@163.com> <20210615105400.GA8236@sm-workstation> Message-ID: Thank you, Sean. At 2021-06-15 18:54:00, "Sean McGinnis" wrote: > >This is partly due to the rollback abilities of each type of storage. Some >types can't revert to several snapshots back without losing the more recent >snapshots. This means that Cinder would still think there are snapshots BTW, do we have counted which drivers can't / can revert to several snapshots back without losing the more recent snapshots? >available, but those snapshots would no longer be present on the storage >device. > >This is considered a data loss condition, so we need to protect against that >from happening. It's been discussed several times at Design Summits and PTGs, >and at least so far there has not been a good way to handle it. > >The best recommendation we can give is for anyone that needs to go back several >snapshots, you will need to revert one snapshot at a time to get back to where >you need to be. > >But it is also worth pointing out that snapshots, and the ability to revert to >snapshots, is not necessarily the best mechanism for data protection. 
If you >need to have the ability to restore a volume back to its earlier state, using >the backup/restore APIs are likely the more appropriate way to go. > >Sean -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Wed Jun 16 05:22:03 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Tue, 15 Jun 2021 22:22:03 -0700 Subject: networking-calico-1.4.2-1.el7.centos.noarch.rpm Message-ID: Anyone know where I can download this rpm? thx. Pete -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Wed Jun 16 05:35:34 2021 From: openinfradn at gmail.com (open infra) Date: Wed, 16 Jun 2021 11:05:34 +0530 Subject: Error creating VMs Message-ID: Hi After setting up OpenStack environment using STX R5 (two controllers, two storage nodes and one worker), I have deployed a VM. VM ended up with ERROR status. I highly appreciate if someone can guide to dig further (what logs to check ) or to fix this issue. http://paste.openstack.org/show/806626/ Regards Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Wed Jun 16 06:09:31 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Wed, 16 Jun 2021 14:09:31 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Mark, thanks again for your reply and direction on this. Regarding the need to do a pip install within the venv, I re-installed the host and re-installed kayobe and this issue is no longer present as of yesterday afternoon. Regarding the "Waiting for nova-compute services to register themselves" - I tailed *.log in the kolla/nova log dir and to be honest there wasnt much being logged at that time and no errors, either. Regarding Wallaby hasnt yet been released, I went back to Victoria on Centos 8.4 this morning, using the "pip install ." (installation from source). I will use this install from source going forward to make sure I am getting the latest kayobe patches. I had two blockers during "host configure": First was that python was not found on the host: TASK [Verify that a command can be executed] ***************************************************************************************************************************** fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: /usr/bin/python3: No such file or directory\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} So I manually installed python to see what would happen. I was able to get past that point but then it fails again: TASK [Ensure the Python virtualenv package is installed] ***************************************************************************************************************** [WARNING]: Updating cache and auto-installing missing dependency: python3-apt fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "cmd": "apt-get update", "msg": "[Errno 2] No such file or directory: b'apt-get': b'apt-get'", "rc": 2} PLAY RECAP *************************************************************************************************************************************************************** juc-ucsb-5-p : ok=7 changed=0 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0 The above log suggests that the CentOS 8.4 host is being matched as ubuntu. 
As an FYI, what I mean when I say "went back to Victoria" is that I followed the initial kayobe install steps again with new top directory structure and then manually transferred over the kayobe config to the new empty config which is downloaded during the git clone. "manually transferred" means I opened each yml config file individually and copied over *only* the variables that have been configured within the file, making sure to leave defaults as a prefernce. After the above issue I decided to go back to Train and have been able to successfully deploy Openstack using kayobe Train today - no blocking issues, first time. *Summary* I really appreciate your time to help me with this, so thanks again. I Still think kayobe is the best way to deploy and manage openstack and I have been "telling my friends" also :) Now I am back to this point I just have some minor things to work through before I can start using it. Kind regards, Tony Pearce On Tue, 15 Jun 2021 at 16:02, Mark Goddard wrote: > On Tue, 15 Jun 2021 at 06:51, Tony Pearce wrote: > > > > Hi Mark, > > > > I had never used the "pip install ." method. Maybe a miscomprehension on > my side, from the documentation [1] there are three ways to install kayobe. > I had opted for the first way which is "pip install kayobe" since January > 2020. The understanding was as conveyed in the doc "Installing from PyPI > ensures the use of well used and tested software". > > That is true, but since Wallaby has not been released for Kayobe yet, > it is not on PyPI. If you do install from PyPI, I would advise using a > version constraint to ensure you get the release series you need. > > > > > I have since followed your steps in your mail which is the installation > from source. I had new problems: > > > > During ansible bootstrap: > > During ansible host bootstrap it errors out and says the kolla_ansible > is not found and needs to be installed in the same virtual environment. In > all previous times, I had understood that kolla ansible is installed by > kayobe at this point. I eventually done "pip install kolla-ansible" and it > seemed to take care of that and allowed me to move on to "host configure" > > Kolla Ansible should be installed automatically during 'kayobe control > host bootstrap', in a separate virtualenv from Kayobe. You should not > need to install it manually, and I would again advise against doing so > without version constraints. > > > > > During host configure: > > I was able to get past the previous python issue but then it failed on > the network due to a "duplicate bond name", though this config was deployed > successfully in Train. I dont think I really need a bond at this point so I > deleted the bond and the host configure is now successful. (fyi this is an > all-in-one host.) > > > > During kayobe service deploy: > > This then fails with "no module named docker" on the host. To > troubleshoot this I logged into the host and activated the kayobe virtual > env (/opt/kayobe/venvs/kayobe/bin/activate) and then "pip install docker". > It was already installed. Eventually, I issued "pip install > --ignore-installed docker" within these three (environment) locations which > resolved this and allowed the kayobe command to complete successfully and > progress further: > > - /opt/kayobe/venvs/kayobe/ > > - /opt/kayobe/venvs/kolla-ansible/ > > - native on the host after deactivating the venv. 
> > > > Now the blocker is the following failure; > > > > TASK [nova-cell : Waiting for nova-compute services to register > themselves] > ********************************************************************************************** > > FAILED - RETRYING: Waiting for nova-compute services to register > themselves (20 retries left). > > FAILED - RETRYING: Waiting for nova-compute services to register > themselves (19 retries left). > > > > I haven't seen this one before but previously I had seen something > similar with mariadb because the API dns was not available. What I have > been using here is a /etc/hosts entry for this. I checked that this entry > is available on the host and in the nova containers. I decided to reboot > the host anyway (previously resolved similar mariadb issue) to restart the > containers just in case the dns was not available in one of them and I > missed it. > > I'd check the nova compute logs here, to find why they are not > registering themselves. > > > > Unfortunately I now have two additional issues which are hard blockers: > > 1. The network is no longer working on the host after reboot, so I am > unable to ssh > > 2. The user password has been changed by kayobe, so I am unable to login > using the console > > > > Due to the above, I am unable to login to the host to investigate or > remediate. Previously when this happened with centos I could use the root > user to log in. This time around as it's ubuntu I do not have a root user. > > The user I am using for both "kolla_ansible_user" and > "kayobe_ansible_user" is the same - is this causing a problem with Victoria > and Wallaby? I had this user password change issue beginning with Victoria. > > > > So at this point I need to re-install the host and go back to the host > configure before service deploy. > > > > Summary > > Any guidance is well appreciated as I'm at a loss at this point. Last > week I had a working Openstack Train deployment in a single host. "Kayobe" > stopped working (maybe because I had previously always used pip install > kayobe). > > > > I would like to deploy Wallaby, should I be able to successfully do this > today or should I be using Victoria at the moment (or even, Train)? > > We are very close to release of Wallaby, and I expect that it should > generally work, but Ubuntu is a new distro for Kayobe, and Wallaby is > a new release. There may be teething problems, so if you're looking > for something more stable then I'd suggest CentOS & Victoria. > > > > > [1] OpenStack Docs: Installation > > > > Regards, > > > > Tony Pearce > > > > > > On Mon, 14 Jun 2021 at 18:36, Mark Goddard wrote: > >> > >> On Mon, 14 Jun 2021 at 09:40, Tony Pearce wrote: > >> > > >> > Hi Mark, > >> > > >> > I followed this guide to do a "git clone" specifying the branch "-b" > to "stable/wallaby" [1]. What additional steps do I need to do to get the > latest commits? > >> > >> That should be sufficient. When you install it via pip, note that 'pip > >> install kayobe' will still pull from PyPI, even if there is a local > >> kayobe directory. Use ./kayobe, or 'pip install .' if in the same > >> directory. > >> > >> Mark > >> > > >> > [1] OpenStack Docs: Overcloud > >> > > >> > Kind regards, > >> > > >> > Tony Pearce > >> > > >> > > >> > On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: > >> >> > >> >> On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: > >> >> > > >> >> > Hi Pierre, thanks for replying to my message. 
> >> >> > > >> >> > To install kayobe I followed the documentation which summarise: > installing a few system packages and setting up the kayobe virtual > environment and then pulling the correct kayobe git version for the > openstack to be installed. After configuring the yaml files I have run > these commands: > >> >> > > >> >> > - kayobe control host bootstrap > >> >> > - kayobe overcloud host configure -> this one is failing with > /usr/libexec/platform-python: not found > >> >> > > >> >> > After reading your message on the weekend I concluded that maybe I > had done something wrong. Today, I re-pulled the kayobe wallaby git and > manually transferred the configuration over to the new directory structure > on the ansible host and set up again as per the guide but the same issue is > seen. > >> >> > > >> >> > What I ended up doing to try and resolve was finding where this > "platform-python" is coming from. It is coming from the virtual environment > which is being set up during the kayobe ansible host bootstrap. Initially, > I found the base.yml and it looks like it tries to match what the host is. > I noticed that there is no ubuntu 20 listed there so I created it however > it did not resolve the issue. > >> >> > > >> >> > So then I tried systematically replacing this reference in the > other files found in the same location "venvs\kayobe\share\kayobe\ansible". > The file I changed which allowed it to progress is "kayobe-target-venv.yml" > >> >> > > >> >> > But unfortunately it fails a bit further on, failing to find an > selinux package [1] > >> >> > > >> >> > Seeing as the error is mentioning selinux (a RedHat security > feature not installed on ubuntu) could the root cause issue be that kayobe > is not matching the host as ubuntu? I did already set in kayobe that I am > using ubuntu OS distribution within globals.yml [2]. > >> >> > > >> >> > Are there any extra steps that I need to complete that maybe are > not listed in the documentation / guide? > >> >> > > >> >> > [1] TASK [MichaelRigart.interfaces : Debian | install > current/latest network package - Pastebin.com > >> >> > [2] ---# Kayobe global > configuration.######################################### - Pastebin.com > >> >> > >> >> Hi Tony, > >> >> > >> >> That's definitely not a recent Wallaby checkout you're using. Ubuntu > >> >> no longer uses that MichaelRigart.interfaces role. Check that you > have > >> >> recent commits. Here is the most recent on stable/wallaby: > >> >> 13169077aaec0f7a28ae1f15b419dafc2456faf7. > >> >> > >> >> Mark > >> >> > >> >> > > >> >> > Regards, > >> >> > > >> >> > Tony Pearce > >> >> > > >> >> > > >> >> > > >> >> > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau > wrote: > >> >> >> > >> >> >> Hi Tony, > >> >> >> > >> >> >> Kayobe doesn't use platform-python anymore, on both > stable/wallaby and > >> >> >> stable/victoria: > >> >> >> > https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 > >> >> >> > >> >> >> Can you double-check what version you are using, and share how you > >> >> >> installed it? Note that only stable/wallaby supports Ubuntu 20 > hosts. > >> >> >> > >> >> >> Best wishes, > >> >> >> Pierre > >> >> >> > >> >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce > wrote: > >> >> >> > > >> >> >> > I'm trying to run "kayobe overcloud host configure" against an > ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is > not found during the host configure part. 
> >> >> >> > > >> >> >> > PLAY [Verify that the Kayobe Ansible user account is accessible] > >> >> >> > TASK [Verify that a command can be executed] > >> >> >> > > >> >> >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, > "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", > "module_stdout": "", "msg": "The module failed to execute correctly, you > probably need to set the interpreter.\nSee stdout/stderr for the exact > error", "rc": 127} > >> >> >> > > >> >> >> > Python3 is installed on the host. When searching where this > platform-python is coming from it returns the kolla-ansible virtual envs: > >> >> >> > > >> >> >> > $ grep -rni -e "platform-python" > >> >> >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: > '8': /usr/libexec/platform-python > >> >> >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: > - /usr/libexec/platform-python > >> >> >> > > >> >> >> > I had a look through the deployment guide for Kayobe Wallaby > and didnt see a note about changing this. > >> >> >> > > >> >> >> > Do I need to do further steps to support the ubuntu overcloud > host? I have already set (as per the doc): > >> >> >> > > >> >> >> > os_distribution: ubuntu > >> >> >> > os_release: focal > >> >> >> > > >> >> >> > Regards, > >> >> >> > > >> >> >> > Tony Pearce > >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Wed Jun 16 07:23:39 2021 From: neil at tigera.io (Neil Jerram) Date: Wed, 16 Jun 2021 08:23:39 +0100 Subject: networking-calico-1.4.2-1.el7.centos.noarch.rpm In-Reply-To: References: Message-ID: Hi Pete, On Wed, Jun 16, 2021 at 6:24 AM Pete Zhang wrote: > Anyone know where I can download this rpm? thx. > networking-calico-1.4.2-1.el7.centos.noarch.rpm looks like a very old version - are you sure you really want that particular version? That said, the Calico team (including me) hosts RPMs at https://binaries.projectcalico.org/rpm/, and that particular version can be found at these subpaths: ./calico-2.0/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm ./calico-2.1/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm ./calico-2.5/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm ./calico-2.2/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm ./calico-2.3/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm ./calico-2.4/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm Best wishes, Neil -- Neil Jerram Senior Software Engineer Tigera neil at tigera.io | @neiljerram Follow Tigera: Blog | Twitter | LinkedIn Leader in Kubernetes Security and Observability -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Jun 16 08:09:11 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 16 Jun 2021 09:09:11 +0100 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: On Wed, 16 Jun 2021 at 07:09, Tony Pearce wrote: > > Hi Mark, thanks again for your reply and direction on this. > > Regarding the need to do a pip install within the venv, I re-installed the host and re-installed kayobe and this issue is no longer present as of yesterday afternoon. > > Regarding the "Waiting for nova-compute services to register themselves" - I tailed *.log in the kolla/nova log dir and to be honest there wasnt much being logged at that time and no errors, either. 
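If it comes up again, it would be worth checking whether the service ever registered and whether the container itself is healthy, rather than only tailing the log files. Something along these lines should tell you (adjust to your environment - nova_compute is the usual Kolla container name):

  $ docker ps -a | grep nova_compute
  $ docker logs --tail 100 nova_compute
  $ openstack compute service list --service nova-compute

If the container is restarting, or the service list comes back empty after deploy, the compute log should at least show how far it got.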
> > Regarding Wallaby hasnt yet been released, I went back to Victoria on Centos 8.4 this morning, using the "pip install ." (installation from source). I will use this install from source going forward to make sure I am getting the latest kayobe patches. I had two blockers during "host configure": > > First was that python was not found on the host: > > TASK [Verify that a command can be executed] ***************************************************************************************************************************** > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: /usr/bin/python3: No such file or directory\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} In stable/victoria there is a task called 'Ensure python is installed' which should ensure that Python 3 is installed. Did that task run? > > So I manually installed python to see what would happen. I was able to get past that point but then it fails again: > > TASK [Ensure the Python virtualenv package is installed] ***************************************************************************************************************** > [WARNING]: Updating cache and auto-installing missing dependency: python3-apt > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "cmd": "apt-get update", "msg": "[Errno 2] No such file or directory: b'apt-get': b'apt-get'", "rc": 2} > > PLAY RECAP *************************************************************************************************************************************************************** > juc-ucsb-5-p : ok=7 changed=0 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0 > > The above log suggests that the CentOS 8.4 host is being matched as ubuntu. As an FYI, what I mean when I say "went back to Victoria" is that I followed the initial kayobe install steps again with new top directory structure and then manually transferred over the kayobe config to the new empty config which is downloaded during the git clone. "manually transferred" means I opened each yml config file individually and copied over only the variables that have been configured within the file, making sure to leave defaults as a prefernce. Something odd is clearly going on here. We use the package module, which uses ansible's ansible_pkg_mgr fact to dispatch the correct packaging module. Perhaps you have fact caching enabled, and have stale facts from when the host was running Ubuntu? > > After the above issue I decided to go back to Train and have been able to successfully deploy Openstack using kayobe Train today - no blocking issues, first time. > > Summary > I really appreciate your time to help me with this, so thanks again. I Still think kayobe is the best way to deploy and manage openstack and I have been "telling my friends" also :) Now I am back to this point I just have some minor things to work through before I can start using it. > > Kind regards, > > Tony Pearce > > > On Tue, 15 Jun 2021 at 16:02, Mark Goddard wrote: >> >> On Tue, 15 Jun 2021 at 06:51, Tony Pearce wrote: >> > >> > Hi Mark, >> > >> > I had never used the "pip install ." method. Maybe a miscomprehension on my side, from the documentation [1] there are three ways to install kayobe. I had opted for the first way which is "pip install kayobe" since January 2020. The understanding was as conveyed in the doc "Installing from PyPI ensures the use of well used and tested software". 
>> >> That is true, but since Wallaby has not been released for Kayobe yet, >> it is not on PyPI. If you do install from PyPI, I would advise using a >> version constraint to ensure you get the release series you need. >> >> > >> > I have since followed your steps in your mail which is the installation from source. I had new problems: >> > >> > During ansible bootstrap: >> > During ansible host bootstrap it errors out and says the kolla_ansible is not found and needs to be installed in the same virtual environment. In all previous times, I had understood that kolla ansible is installed by kayobe at this point. I eventually done "pip install kolla-ansible" and it seemed to take care of that and allowed me to move on to "host configure" >> >> Kolla Ansible should be installed automatically during 'kayobe control >> host bootstrap', in a separate virtualenv from Kayobe. You should not >> need to install it manually, and I would again advise against doing so >> without version constraints. >> >> > >> > During host configure: >> > I was able to get past the previous python issue but then it failed on the network due to a "duplicate bond name", though this config was deployed successfully in Train. I dont think I really need a bond at this point so I deleted the bond and the host configure is now successful. (fyi this is an all-in-one host.) >> > >> > During kayobe service deploy: >> > This then fails with "no module named docker" on the host. To troubleshoot this I logged into the host and activated the kayobe virtual env (/opt/kayobe/venvs/kayobe/bin/activate) and then "pip install docker". It was already installed. Eventually, I issued "pip install --ignore-installed docker" within these three (environment) locations which resolved this and allowed the kayobe command to complete successfully and progress further: >> > - /opt/kayobe/venvs/kayobe/ >> > - /opt/kayobe/venvs/kolla-ansible/ >> > - native on the host after deactivating the venv. >> > >> > Now the blocker is the following failure; >> > >> > TASK [nova-cell : Waiting for nova-compute services to register themselves] ********************************************************************************************** >> > FAILED - RETRYING: Waiting for nova-compute services to register themselves (20 retries left). >> > FAILED - RETRYING: Waiting for nova-compute services to register themselves (19 retries left). >> > >> > I haven't seen this one before but previously I had seen something similar with mariadb because the API dns was not available. What I have been using here is a /etc/hosts entry for this. I checked that this entry is available on the host and in the nova containers. I decided to reboot the host anyway (previously resolved similar mariadb issue) to restart the containers just in case the dns was not available in one of them and I missed it. >> >> I'd check the nova compute logs here, to find why they are not >> registering themselves. >> > >> > Unfortunately I now have two additional issues which are hard blockers: >> > 1. The network is no longer working on the host after reboot, so I am unable to ssh >> > 2. The user password has been changed by kayobe, so I am unable to login using the console >> > >> > Due to the above, I am unable to login to the host to investigate or remediate. Previously when this happened with centos I could use the root user to log in. This time around as it's ubuntu I do not have a root user. 
>> > The user I am using for both "kolla_ansible_user" and "kayobe_ansible_user" is the same - is this causing a problem with Victoria and Wallaby? I had this user password change issue beginning with Victoria. >> > >> > So at this point I need to re-install the host and go back to the host configure before service deploy. >> > >> > Summary >> > Any guidance is well appreciated as I'm at a loss at this point. Last week I had a working Openstack Train deployment in a single host. "Kayobe" stopped working (maybe because I had previously always used pip install kayobe). >> > >> > I would like to deploy Wallaby, should I be able to successfully do this today or should I be using Victoria at the moment (or even, Train)? >> >> We are very close to release of Wallaby, and I expect that it should >> generally work, but Ubuntu is a new distro for Kayobe, and Wallaby is >> a new release. There may be teething problems, so if you're looking >> for something more stable then I'd suggest CentOS & Victoria. >> >> > >> > [1] OpenStack Docs: Installation >> > >> > Regards, >> > >> > Tony Pearce >> > >> > >> > On Mon, 14 Jun 2021 at 18:36, Mark Goddard wrote: >> >> >> >> On Mon, 14 Jun 2021 at 09:40, Tony Pearce wrote: >> >> > >> >> > Hi Mark, >> >> > >> >> > I followed this guide to do a "git clone" specifying the branch "-b" to "stable/wallaby" [1]. What additional steps do I need to do to get the latest commits? >> >> >> >> That should be sufficient. When you install it via pip, note that 'pip >> >> install kayobe' will still pull from PyPI, even if there is a local >> >> kayobe directory. Use ./kayobe, or 'pip install .' if in the same >> >> directory. >> >> >> >> Mark >> >> > >> >> > [1] OpenStack Docs: Overcloud >> >> > >> >> > Kind regards, >> >> > >> >> > Tony Pearce >> >> > >> >> > >> >> > On Mon, 14 Jun 2021 at 16:10, Mark Goddard wrote: >> >> >> >> >> >> On Mon, 14 Jun 2021 at 07:21, Tony Pearce wrote: >> >> >> > >> >> >> > Hi Pierre, thanks for replying to my message. >> >> >> > >> >> >> > To install kayobe I followed the documentation which summarise: installing a few system packages and setting up the kayobe virtual environment and then pulling the correct kayobe git version for the openstack to be installed. After configuring the yaml files I have run these commands: >> >> >> > >> >> >> > - kayobe control host bootstrap >> >> >> > - kayobe overcloud host configure -> this one is failing with /usr/libexec/platform-python: not found >> >> >> > >> >> >> > After reading your message on the weekend I concluded that maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git and manually transferred the configuration over to the new directory structure on the ansible host and set up again as per the guide but the same issue is seen. >> >> >> > >> >> >> > What I ended up doing to try and resolve was finding where this "platform-python" is coming from. It is coming from the virtual environment which is being set up during the kayobe ansible host bootstrap. Initially, I found the base.yml and it looks like it tries to match what the host is. I noticed that there is no ubuntu 20 listed there so I created it however it did not resolve the issue. >> >> >> > >> >> >> > So then I tried systematically replacing this reference in the other files found in the same location "venvs\kayobe\share\kayobe\ansible". 
The file I changed which allowed it to progress is "kayobe-target-venv.yml" >> >> >> > >> >> >> > But unfortunately it fails a bit further on, failing to find an selinux package [1] >> >> >> > >> >> >> > Seeing as the error is mentioning selinux (a RedHat security feature not installed on ubuntu) could the root cause issue be that kayobe is not matching the host as ubuntu? I did already set in kayobe that I am using ubuntu OS distribution within globals.yml [2]. >> >> >> > >> >> >> > Are there any extra steps that I need to complete that maybe are not listed in the documentation / guide? >> >> >> > >> >> >> > [1] TASK [MichaelRigart.interfaces : Debian | install current/latest network package - Pastebin.com >> >> >> > [2] ---# Kayobe global configuration.######################################### - Pastebin.com >> >> >> >> >> >> Hi Tony, >> >> >> >> >> >> That's definitely not a recent Wallaby checkout you're using. Ubuntu >> >> >> no longer uses that MichaelRigart.interfaces role. Check that you have >> >> >> recent commits. Here is the most recent on stable/wallaby: >> >> >> 13169077aaec0f7a28ae1f15b419dafc2456faf7. >> >> >> >> >> >> Mark >> >> >> >> >> >> > >> >> >> > Regards, >> >> >> > >> >> >> > Tony Pearce >> >> >> > >> >> >> > >> >> >> > >> >> >> > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau wrote: >> >> >> >> >> >> >> >> Hi Tony, >> >> >> >> >> >> >> >> Kayobe doesn't use platform-python anymore, on both stable/wallaby and >> >> >> >> stable/victoria: >> >> >> >> https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 >> >> >> >> >> >> >> >> Can you double-check what version you are using, and share how you >> >> >> >> installed it? Note that only stable/wallaby supports Ubuntu 20 hosts. >> >> >> >> >> >> >> >> Best wishes, >> >> >> >> Pierre >> >> >> >> >> >> >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce wrote: >> >> >> >> > >> >> >> >> > I'm trying to run "kayobe overcloud host configure" against an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is not found during the host configure part. >> >> >> >> > >> >> >> >> > PLAY [Verify that the Kayobe Ansible user account is accessible] >> >> >> >> > TASK [Verify that a command can be executed] >> >> >> >> > >> >> >> >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} >> >> >> >> > >> >> >> >> > Python3 is installed on the host. When searching where this platform-python is coming from it returns the kolla-ansible virtual envs: >> >> >> >> > >> >> >> >> > $ grep -rni -e "platform-python" >> >> >> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: '8': /usr/libexec/platform-python >> >> >> >> > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: - /usr/libexec/platform-python >> >> >> >> > >> >> >> >> > I had a look through the deployment guide for Kayobe Wallaby and didnt see a note about changing this. >> >> >> >> > >> >> >> >> > Do I need to do further steps to support the ubuntu overcloud host? 
I have already set (as per the doc): >> >> >> >> > >> >> >> >> > os_distribution: ubuntu >> >> >> >> > os_release: focal >> >> >> >> > >> >> >> >> > Regards, >> >> >> >> > >> >> >> >> > Tony Pearce >> >> >> >> > From tomasz.rutkowski at netart.pl Wed Jun 16 08:15:25 2021 From: tomasz.rutkowski at netart.pl (Tomasz Rutkowski) Date: Wed, 16 Jun 2021 10:15:25 +0200 Subject: [kolla][monasca] thresh keeps dying Message-ID: <16e169aa9a91a7a0bdeecb4d790aa15cdc6f7fd1.camel@netart.pl> can someone direct me where to search for the cause? as I understand failing to submit the topology to storm cluster shouldn't be final, and the process just dissapers there... + exec /opt/storm/bin/storm jar /monasca-thresh-source/monasca-thresh-stable-victoria/thresh/target/monasca-thresh-2.4.0-SNAPSHOT-shaded.jar -Djava.io.tmpdir=/var/lib/monasca-thresh/data monasca.thresh.ThresholdingEngine /etc/monasca/thresh-config.yml monasca-thresh Running: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -client -Ddaemon.name= -Dstorm.options= -Dstorm.home=/opt/storm -Dstorm.log.dir=/var/log/kolla/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /opt/storm/*:/opt/storm/lib/*:/opt/storm/extlib/*:/monasca-thresh-source/monasca-thresh-stable-victoria/thresh/target/monasca-thresh-2.4.0-SNAPSHOT-shaded.jar:/opt/storm/conf:/opt/storm/bin -Dstorm.jar=/monasca-thresh-source/monasca-thresh-stable-victoria/thresh/target/monasca-thresh-2.4.0-SNAPSHOT-shaded.jar -Dstorm.dependency.jars= -Dstorm.dependency.artifacts={} -Djava.io.tmpdir=/var/lib/monasca-thresh/data monasca.thresh.ThresholdingEngine /etc/monasca/thresh-config.yml monasca-thresh 687 [main] INFO m.t.ThresholdingEngine - -------- Version Information -------- 692 [main] INFO m.t.ThresholdingEngine - monasca-thresh-2.4.0-SNAPSHOT-2021-06-06T08:32:08-${buildNumber} 693 [main] INFO m.t.ThresholdingEngine - Instantiating ThresholdingEngine with config file: /etc/monasca/thresh-config.yml, topology: monasca-thresh 1000 [main] INFO o.h.v.i.u.Version - HV000001: Hibernate Validator 5.2.1.Final 1197 [main] INFO m.t.ThresholdingEngine - local set to false 1312 [main] INFO m.t.i.t.MetricSpout - Created 1340 [main] INFO m.t.i.t.EventSpout - EventSpout created 1516 [main] WARN o.a.s.u.Utils - STORM-VERSION new 1.2.2 old null 1516 [main] INFO m.t.ThresholdingEngine - submitting topology monasca-thresh to non-local storm cluster 1549 [main] INFO o.a.s.StormSubmitter - Generated ZooKeeper secret payload for MD5-digest: -7012431400424907995:-9108807134284416946 1728 [main] INFO o.a.s.u.NimbusClient - Found leader nimbus : storm1:6627 1751 [main] INFO o.a.s.s.a.AuthUtils - Got AutoCreds [] 1756 [main] INFO o.a.s.u.NimbusClient - Found leader nimbus : storm1:6627 Exception in thread "main" java.lang.RuntimeException: Topology with name `monasca-thresh` already exists on cluster at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:237) at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:387) at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:159) at monasca.thresh.ThresholdingEngine.run(ThresholdingEngine.java:111) at monasca.thresh.ThresholdingEngine.main(ThresholdingEngine.java:82) regards -- Tomasz Rutkowski -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3728 bytes Desc: not available URL: From geguileo at redhat.com Wed Jun 16 08:22:11 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 16 Jun 2021 10:22:11 +0200 Subject: [cinder] revert volume to snapshot In-Reply-To: <782fa353.71d3.17a0ea5201f.Coremail.bxzhu_5355@163.com> References: <782fa353.71d3.17a0ea5201f.Coremail.bxzhu_5355@163.com> Message-ID: <20210616082211.lz7jgdwxfezpgsni@localhost> On 15/06, Boxiang Zhu wrote: > > > Hi, > > > There is a restful api[1] to revert volume to snapshot. But the description means > we can only use this api to revert volume to its latest snapshot. > > > Are some drivers limited to rolling back only to the latest snapshot? Or just nobody > helps to improve the api to revert volume to any snapshots of the volume? > > > > > [1] https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=revert-volume-to-snapshot-detail#revert-volume-to-snapshot > > > > > Thanks, > Boxiang Hi, There's a spec proposal [1] under review to improve the revert to snapshot so it can work with all drivers regardless of their limitations. The spec was briefly discussed during the Xena PTG [2]. Cheers, Gorka. [1]: https://review.opendev.org/c/openstack/cinder-specs/+/736111 [2]: https://wiki.openstack.org/wiki/CinderXenaPTGSummary#Support_revert_any_snapshot_to_the_volume From lucasagomes at gmail.com Wed Jun 16 08:39:36 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 16 Jun 2021 09:39:36 +0100 Subject: [ironic][ovn] Ironic with OVN as the SDN In-Reply-To: References: Message-ID: Hi, On Tue, Jun 15, 2021 at 11:21 PM David Ames wrote: > > I am looking for a summary of the support or lack thereof for Ironic > and OVN. Missing OVN features are explained in [0] and [1]. This bug > [2] seems to imply one can run a Neutron OVS DHCP agent alongside OVN. > > Can I get definitive answers to the following questions: > > Is it possible to run neutron-dhcp-agent alongside OVN to handle the > iPXE boot process? If so, is there documentation anywhere? > It's not documented nor tested anywhere upstream AFAIK. That said, it should be possible to deploy the Neutron DHCP agent on the controller nodes and that will take care of the PXE/iPXE boot process for the baremetal nodes. It would be great if someone actually gave this a go and documented their experience, we could even think of a job in the gate exercising this scenario I think. > What is the roadmap/projected timeline for OVN to support Ironic? > Core OVN already started landing some of the missing bits for the OVN DHCP server to start supporting this scenario (for example: https://github.com/ovn-org/ovn/commit/2476911b853518092e023767663a69a7191a8fb7) We also need a few changes in the ML2/OVN Neutron driver, such as: 1) Ironic uses a dnsmasq syntax for setting the DHCP options in Neutron (e.g !ipxe,bootfile-name=...) this is not understood by ML2/OVN yet, so we need to work on that. 2) The OVN built-in DHCP server is distributed and it runs on the compute nodes and fulfills the DHCP request for the VMs running on that local hypervisor, for baremetal things are a bit different. Fortunately, OVN has support for something called "external" ports (see: https://github.com/ovn-org/ovn/commit/96080083581275afaec8bc281d6a648aff7ef39e) which I believe we can use for the baremetal case. For example, if the VNIC of the port created is VNIC_BAREMETAL we can create an external port for it in ML2/OVN. That's how we support SR-IOV in ML2/OVN too. 
We have most of the bits and pieces in place already and I hope I can start working on supporting baremetal nodes natively in ML2/OVN soon too. But for now, the only way to achieve it is by using the Neutron DHCP agent as discussed before. Cheers, Lucas > [0] https://docs.openstack.org/neutron/latest/ovn/gaps.html > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1622154 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1620943 > > > Thanks, > > -- > David Ames > OpenStack Charm Engineering > From tonyppe at gmail.com Wed Jun 16 08:48:24 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Wed, 16 Jun 2021 16:48:24 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Possibly fact caching could have been the issue. The Victoria guide describes to configure fact caching so it was being used by my ACH. I did not think of this at the time of the issue, sorry about that. I have a new question about SSL cert, so will send a new mail for the topic. Thank you again, Kind regards, Tony Pearce On Wed, 16 Jun 2021 at 16:09, Mark Goddard wrote: > On Wed, 16 Jun 2021 at 07:09, Tony Pearce wrote: > > > > Hi Mark, thanks again for your reply and direction on this. > > > > Regarding the need to do a pip install within the venv, I re-installed > the host and re-installed kayobe and this issue is no longer present as of > yesterday afternoon. > > > > Regarding the "Waiting for nova-compute services to register themselves" > - I tailed *.log in the kolla/nova log dir and to be honest there wasnt > much being logged at that time and no errors, either. > > > > Regarding Wallaby hasnt yet been released, I went back to Victoria on > Centos 8.4 this morning, using the "pip install ." (installation from > source). I will use this install from source going forward to make sure I > am getting the latest kayobe patches. I had two blockers during "host > configure": > > > > First was that python was not found on the host: > > > > TASK [Verify that a command can be executed] > ***************************************************************************************************************************** > > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": > "/bin/sh: /usr/bin/python3: No such file or directory\n", "module_stdout": > "", "msg": "The module failed to execute correctly, you probably need to > set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} > > In stable/victoria there is a task called 'Ensure python is installed' > which should ensure that Python 3 is installed. Did that task run? > > > > > So I manually installed python to see what would happen. I was able to > get past that point but then it fails again: > > > > TASK [Ensure the Python virtualenv package is installed] > ***************************************************************************************************************** > > [WARNING]: Updating cache and auto-installing missing dependency: > python3-apt > > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "cmd": "apt-get > update", "msg": "[Errno 2] No such file or directory: b'apt-get': > b'apt-get'", "rc": 2} > > > > PLAY RECAP > *************************************************************************************************************************************************************** > > juc-ucsb-5-p : ok=7 changed=0 unreachable=0 > failed=1 skipped=10 rescued=0 ignored=0 > > > > The above log suggests that the CentOS 8.4 host is being matched as > ubuntu. 
As an FYI, what I mean when I say "went back to Victoria" is that I > followed the initial kayobe install steps again with new top directory > structure and then manually transferred over the kayobe config to the new > empty config which is downloaded during the git clone. "manually > transferred" means I opened each yml config file individually and copied > over only the variables that have been configured within the file, making > sure to leave defaults as a prefernce. > > Something odd is clearly going on here. We use the package module, > which uses ansible's ansible_pkg_mgr fact to dispatch the correct > packaging module. Perhaps you have fact caching enabled, and have > stale facts from when the host was running Ubuntu? > > > > > After the above issue I decided to go back to Train and have been able > to successfully deploy Openstack using kayobe Train today - no blocking > issues, first time. > > > > Summary > > I really appreciate your time to help me with this, so thanks again. I > Still think kayobe is the best way to deploy and manage openstack and I > have been "telling my friends" also :) Now I am back to this point I just > have some minor things to work through before I can start using it. > > > > Kind regards, > > > > Tony Pearce > > > > > > On Tue, 15 Jun 2021 at 16:02, Mark Goddard wrote: > >> > >> On Tue, 15 Jun 2021 at 06:51, Tony Pearce wrote: > >> > > >> > Hi Mark, > >> > > >> > I had never used the "pip install ." method. Maybe a miscomprehension > on my side, from the documentation [1] there are three ways to install > kayobe. I had opted for the first way which is "pip install kayobe" since > January 2020. The understanding was as conveyed in the doc "Installing from > PyPI ensures the use of well used and tested software". > >> > >> That is true, but since Wallaby has not been released for Kayobe yet, > >> it is not on PyPI. If you do install from PyPI, I would advise using a > >> version constraint to ensure you get the release series you need. > >> > >> > > >> > I have since followed your steps in your mail which is the > installation from source. I had new problems: > >> > > >> > During ansible bootstrap: > >> > During ansible host bootstrap it errors out and says the > kolla_ansible is not found and needs to be installed in the same virtual > environment. In all previous times, I had understood that kolla ansible is > installed by kayobe at this point. I eventually done "pip install > kolla-ansible" and it seemed to take care of that and allowed me to move on > to "host configure" > >> > >> Kolla Ansible should be installed automatically during 'kayobe control > >> host bootstrap', in a separate virtualenv from Kayobe. You should not > >> need to install it manually, and I would again advise against doing so > >> without version constraints. > >> > >> > > >> > During host configure: > >> > I was able to get past the previous python issue but then it failed > on the network due to a "duplicate bond name", though this config was > deployed successfully in Train. I dont think I really need a bond at this > point so I deleted the bond and the host configure is now successful. (fyi > this is an all-in-one host.) > >> > > >> > During kayobe service deploy: > >> > This then fails with "no module named docker" on the host. To > troubleshoot this I logged into the host and activated the kayobe virtual > env (/opt/kayobe/venvs/kayobe/bin/activate) and then "pip install docker". > It was already installed. 
Eventually, I issued "pip install > --ignore-installed docker" within these three (environment) locations which > resolved this and allowed the kayobe command to complete successfully and > progress further: > >> > - /opt/kayobe/venvs/kayobe/ > >> > - /opt/kayobe/venvs/kolla-ansible/ > >> > - native on the host after deactivating the venv. > >> > > >> > Now the blocker is the following failure; > >> > > >> > TASK [nova-cell : Waiting for nova-compute services to register > themselves] > ********************************************************************************************** > >> > FAILED - RETRYING: Waiting for nova-compute services to register > themselves (20 retries left). > >> > FAILED - RETRYING: Waiting for nova-compute services to register > themselves (19 retries left). > >> > > >> > I haven't seen this one before but previously I had seen something > similar with mariadb because the API dns was not available. What I have > been using here is a /etc/hosts entry for this. I checked that this entry > is available on the host and in the nova containers. I decided to reboot > the host anyway (previously resolved similar mariadb issue) to restart the > containers just in case the dns was not available in one of them and I > missed it. > >> > >> I'd check the nova compute logs here, to find why they are not > >> registering themselves. > >> > > >> > Unfortunately I now have two additional issues which are hard > blockers: > >> > 1. The network is no longer working on the host after reboot, so I am > unable to ssh > >> > 2. The user password has been changed by kayobe, so I am unable to > login using the console > >> > > >> > Due to the above, I am unable to login to the host to investigate or > remediate. Previously when this happened with centos I could use the root > user to log in. This time around as it's ubuntu I do not have a root user. > >> > The user I am using for both "kolla_ansible_user" and > "kayobe_ansible_user" is the same - is this causing a problem with Victoria > and Wallaby? I had this user password change issue beginning with Victoria. > >> > > >> > So at this point I need to re-install the host and go back to the > host configure before service deploy. > >> > > >> > Summary > >> > Any guidance is well appreciated as I'm at a loss at this point. Last > week I had a working Openstack Train deployment in a single host. "Kayobe" > stopped working (maybe because I had previously always used pip install > kayobe). > >> > > >> > I would like to deploy Wallaby, should I be able to successfully do > this today or should I be using Victoria at the moment (or even, Train)? > >> > >> We are very close to release of Wallaby, and I expect that it should > >> generally work, but Ubuntu is a new distro for Kayobe, and Wallaby is > >> a new release. There may be teething problems, so if you're looking > >> for something more stable then I'd suggest CentOS & Victoria. > >> > >> > > >> > [1] OpenStack Docs: Installation > >> > > >> > Regards, > >> > > >> > Tony Pearce > >> > > >> > > >> > On Mon, 14 Jun 2021 at 18:36, Mark Goddard wrote: > >> >> > >> >> On Mon, 14 Jun 2021 at 09:40, Tony Pearce wrote: > >> >> > > >> >> > Hi Mark, > >> >> > > >> >> > I followed this guide to do a "git clone" specifying the branch > "-b" to "stable/wallaby" [1]. What additional steps do I need to do to get > the latest commits? > >> >> > >> >> That should be sufficient. 
When you install it via pip, note that > 'pip > >> >> install kayobe' will still pull from PyPI, even if there is a local > >> >> kayobe directory. Use ./kayobe, or 'pip install .' if in the same > >> >> directory. > >> >> > >> >> Mark > >> >> > > >> >> > [1] OpenStack Docs: Overcloud > >> >> > > >> >> > Kind regards, > >> >> > > >> >> > Tony Pearce > >> >> > > >> >> > > >> >> > On Mon, 14 Jun 2021 at 16:10, Mark Goddard > wrote: > >> >> >> > >> >> >> On Mon, 14 Jun 2021 at 07:21, Tony Pearce > wrote: > >> >> >> > > >> >> >> > Hi Pierre, thanks for replying to my message. > >> >> >> > > >> >> >> > To install kayobe I followed the documentation which summarise: > installing a few system packages and setting up the kayobe virtual > environment and then pulling the correct kayobe git version for the > openstack to be installed. After configuring the yaml files I have run > these commands: > >> >> >> > > >> >> >> > - kayobe control host bootstrap > >> >> >> > - kayobe overcloud host configure -> this one is failing with > /usr/libexec/platform-python: not found > >> >> >> > > >> >> >> > After reading your message on the weekend I concluded that > maybe I had done something wrong. Today, I re-pulled the kayobe wallaby git > and manually transferred the configuration over to the new directory > structure on the ansible host and set up again as per the guide but the > same issue is seen. > >> >> >> > > >> >> >> > What I ended up doing to try and resolve was finding where this > "platform-python" is coming from. It is coming from the virtual environment > which is being set up during the kayobe ansible host bootstrap. Initially, > I found the base.yml and it looks like it tries to match what the host is. > I noticed that there is no ubuntu 20 listed there so I created it however > it did not resolve the issue. > >> >> >> > > >> >> >> > So then I tried systematically replacing this reference in the > other files found in the same location "venvs\kayobe\share\kayobe\ansible". > The file I changed which allowed it to progress is "kayobe-target-venv.yml" > >> >> >> > > >> >> >> > But unfortunately it fails a bit further on, failing to find an > selinux package [1] > >> >> >> > > >> >> >> > Seeing as the error is mentioning selinux (a RedHat security > feature not installed on ubuntu) could the root cause issue be that kayobe > is not matching the host as ubuntu? I did already set in kayobe that I am > using ubuntu OS distribution within globals.yml [2]. > >> >> >> > > >> >> >> > Are there any extra steps that I need to complete that maybe > are not listed in the documentation / guide? > >> >> >> > > >> >> >> > [1] TASK [MichaelRigart.interfaces : Debian | install > current/latest network package - Pastebin.com > >> >> >> > [2] ---# Kayobe global > configuration.######################################### - Pastebin.com > >> >> >> > >> >> >> Hi Tony, > >> >> >> > >> >> >> That's definitely not a recent Wallaby checkout you're using. > Ubuntu > >> >> >> no longer uses that MichaelRigart.interfaces role. Check that you > have > >> >> >> recent commits. Here is the most recent on stable/wallaby: > >> >> >> 13169077aaec0f7a28ae1f15b419dafc2456faf7. 
> >> >> >> > >> >> >> Mark > >> >> >> > >> >> >> > > >> >> >> > Regards, > >> >> >> > > >> >> >> > Tony Pearce > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> > On Fri, 11 Jun 2021 at 21:05, Pierre Riteau < > pierre at stackhpc.com> wrote: > >> >> >> >> > >> >> >> >> Hi Tony, > >> >> >> >> > >> >> >> >> Kayobe doesn't use platform-python anymore, on both > stable/wallaby and > >> >> >> >> stable/victoria: > >> >> >> >> > https://review.opendev.org/q/I0d477325e0edd13d1aba211c13dc2e8b7a9b4c98 > >> >> >> >> > >> >> >> >> Can you double-check what version you are using, and share how > you > >> >> >> >> installed it? Note that only stable/wallaby supports Ubuntu 20 > hosts. > >> >> >> >> > >> >> >> >> Best wishes, > >> >> >> >> Pierre > >> >> >> >> > >> >> >> >> On Fri, 11 Jun 2021 at 13:20, Tony Pearce > wrote: > >> >> >> >> > > >> >> >> >> > I'm trying to run "kayobe overcloud host configure" against > an ubuntu 20 machine to deploy Wallaby. I'm getting an error that python is > not found during the host configure part. > >> >> >> >> > > >> >> >> >> > PLAY [Verify that the Kayobe Ansible user account is > accessible] > >> >> >> >> > TASK [Verify that a command can be executed] > >> >> >> >> > > >> >> >> >> > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, > "module_stderr": "/bin/sh: 1: /usr/libexec/platform-python: not found\n", > "module_stdout": "", "msg": "The module failed to execute correctly, you > probably need to set the interpreter.\nSee stdout/stderr for the exact > error", "rc": 127} > >> >> >> >> > > >> >> >> >> > Python3 is installed on the host. When searching where this > platform-python is coming from it returns the kolla-ansible virtual envs: > >> >> >> >> > > >> >> >> >> > $ grep -rni -e "platform-python" > >> >> >> >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1450: > '8': /usr/libexec/platform-python > >> >> >> >> > > venvs/kolla-ansible/lib/python3.8/site-packages/ansible/config/base.yml:1470: > - /usr/libexec/platform-python > >> >> >> >> > > >> >> >> >> > I had a look through the deployment guide for Kayobe Wallaby > and didnt see a note about changing this. > >> >> >> >> > > >> >> >> >> > Do I need to do further steps to support the ubuntu > overcloud host? I have already set (as per the doc): > >> >> >> >> > > >> >> >> >> > os_distribution: ubuntu > >> >> >> >> > os_release: focal > >> >> >> >> > > >> >> >> >> > Regards, > >> >> >> >> > > >> >> >> >> > Tony Pearce > >> >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Wed Jun 16 09:10:02 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Wed, 16 Jun 2021 17:10:02 +0800 Subject: [kayobe][train] kolla_copy_ca_into_containers variable Message-ID: I have deployed Train with Kayobe. I'd like to enable SSL using a cert which is signed but NOT by a public CA. This means I need to add the CA cert to the containers. I came across this doc [1] and I wanted to ask / discover when this variable comes into play "kolla_copy_ca_into_containers"? Does this variable work only from Victoria onwards or will it work in Train? Do I require to have a "seed" to build containers, to enable this cert copy into containers? (kayobe overcloud container image build). OR if I do "kayobe overcloud container image pull" will the cert be copied at that point? [1] OpenStack Docs: TLS Thanks and regards, Tony Pearce -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Wed Jun 16 11:28:11 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 16 Jun 2021 07:28:11 -0400 Subject: [cinder] festival of XS reviews 18 June 2021 Message-ID: <581b53fc-ecbf-bd9c-7cab-d5b3cca3edd3@gmail.com> Hello Cinder community members, This is a reminder that the most recent edition of the Cinder Festival of XS Reviews will be held at the end of this week on Friday 18 June. who: Everyone! what: The Cinder Festival of XS Reviews when: Friday 18 June 2021 from 1400-1600 UTC where: https://meetpad.opendev.org/cinder-festival-of-reviews This recurring meeting can be placed on your calendar by using this handy ICS file: http://eavesdrop.openstack.org/calendars/cinder-festival-of-reviews.ics See you there! brian From C-Albert.Braden at charter.com Wed Jun 16 12:46:18 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Wed, 16 Jun 2021 12:46:18 +0000 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: Message-ID: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> Can you check the scheduler and conductor logs on the controllers? There should be entries describing why the instance failed to schedule. You may need to set “debug=true” in nova.conf to get more details. From: open infra Sent: Wednesday, June 16, 2021 1:36 AM To: openstack-discuss Subject: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi After setting up OpenStack environment using STX R5 (two controllers, two storage nodes and one worker), I have deployed a VM. VM ended up with ERROR status. I highly appreciate if someone can guide to dig further (what logs to check ) or to fix this issue. http://paste.openstack.org/show/806626/ Regards Danishka E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Jun 16 13:08:29 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 16 Jun 2021 10:08:29 -0300 Subject: [cinder] Bug deputy report for week of 2021-16-06 Message-ID: Hello, This is a bug report from 2021-09-06 to 2021-16-06. You're welcome to join the Cinder Bug Meeting today. Weekly on Wednesday at 1500 UTC on #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- We only have one bug reported this period: Critical:- High:- Medium: - https://bugs.launchpad.net/cinder/+bug/1931629 'netapp_pool_name_search_pattern doesn't accept regex'. Unassigned. Cheers, Sofia -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openinfradn at gmail.com Wed Jun 16 13:12:11 2021 From: openinfradn at gmail.com (open infra) Date: Wed, 16 Jun 2021 18:42:11 +0530 Subject: [EXTERNAL] Error creating VMs In-Reply-To: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: I can see sm-scheduler.log not sure if its the correct log file you mentioned, but I can't see conductor log. This environment have been deployed using starlingx. On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert wrote: > Can you check the scheduler and conductor logs on the controllers? There > should be entries describing why the instance failed to schedule. You may > need to set “debug=true” in nova.conf to get more details. > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 1:36 AM > *To:* openstack-discuss > *Subject:* [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Hi > > > > After setting up OpenStack environment using STX R5 (two controllers, two > storage nodes and one worker), I have deployed a VM. VM ended up with ERROR > status. > > > > I highly appreciate if someone can guide to dig further (what logs to > check ) or to fix this issue. > > > > http://paste.openstack.org/show/806626/ > > > > Regards > > Danishka > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmlineb at sandia.gov Wed Jun 16 13:58:50 2021 From: jmlineb at sandia.gov (Linebarger, John) Date: Wed, 16 Jun 2021 13:58:50 +0000 Subject: Current market share of OpenStack implementations? Message-ID: This is more of a marketing than a technical question, but the online sources seem to be either a few years old or the links have gone stale. What is the collective assessment of the OpenStack Hive Mind about the current worldwide market share of OpenStack implementations, in descending order? And which ones are growing and which ones are leveling off or declining? Candidates would seem to be: 1. TripleO OpenStack 2. Canonical's Charmed OpenStack 3. RedHat OpenStack Platform 4. Mirantis Cloud Platform 5. Packstack 6. Devstack (possibly; but that's just build from source, right?) But does something more comprehensive, more definitive, more current exist with regard to worldwide market share of OpenStack implementations? Or even segmented by continent, geographical region, or country? Thanks! Enjoy! John M. 
Linebarger, PhD, MBA Principal Member of Technical Staff Sandia National Laboratories (Office) 505-845-8282 (Cell) 505-681-4879 [cid:image002.jpg at 01D76285.70750A90][AWS Certified Solutions Architect - Professional][AWS Certified Solutions Architect - Associate][AWS Certified Developer - Associate][cid:image003.png at 01D76282.D64E1170][cid:image005.png at 01D76285.70750A90] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 7238 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 2419 bytes Desc: image002.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 9285 bytes Desc: image005.png URL: From jpodivin at redhat.com Wed Jun 16 14:16:56 2021 From: jpodivin at redhat.com (Jiri Podivin) Date: Wed, 16 Jun 2021 16:16:56 +0200 Subject: Current market share of OpenStack implementations? In-Reply-To: References: Message-ID: Answer to this question would interest me as well. I suppose the marketing, or equivalent, departments might have a good idea about their own customers. But I'm not sure how they would feel about sharing it, especially in detail. On Wed, Jun 16, 2021 at 4:06 PM Linebarger, John wrote: > This is more of a marketing than a technical question, but the online > sources seem to be either a few years old or the links have gone stale. > What is the collective assessment of the OpenStack Hive Mind about the > current worldwide market share of OpenStack implementations, in descending > order? And which ones are growing and which ones are leveling off or > declining? Candidates would seem to be: > > > > 1. TripleO OpenStack > > 2. Canonical’s Charmed OpenStack > > 3. RedHat OpenStack Platform > > 4. Mirantis Cloud Platform > > 5. Packstack > > 6. Devstack (possibly; but that’s just build from source, right?) > > > > But does something more comprehensive, more definitive, more current exist > with regard to worldwide market share of OpenStack implementations? Or even > segmented by continent, geographical region, or country? > > > > Thanks! Enjoy! > > > > *John M. Linebarger, PhD, MBA* > > Principal Member of Technical Staff > > Sandia National Laboratories > > (Office) 505-845-8282 > > (Cell) 505-681-4879 > > [image: AWS Certified Solutions Architect - > Professional] > [image: > AWS Certified Solutions Architect - Associate] > [image: > AWS Certified Developer - Associate] > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 7238 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 2419 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 9285 bytes Desc: not available URL: From noonedeadpunk at ya.ru Wed Jun 16 14:32:04 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 16 Jun 2021 17:32:04 +0300 Subject: Current market share of OpenStack implementations? In-Reply-To: References: Message-ID: <389941623853828@mail.yandex.ru> An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 9285 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 7238 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 2419 bytes Desc: not available URL: From C-Albert.Braden at charter.com Wed Jun 16 14:36:30 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Wed, 16 Jun 2021 14:36:30 +0000 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: Someone will need to dig through the starlingx documentation and figure out where the log files are located. Do you want to do that? https://docs.starlingx.io/ From: open infra Sent: Wednesday, June 16, 2021 9:12 AM To: Braden, Albert Cc: openstack-discuss Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. I can see sm-scheduler.log not sure if its the correct log file you mentioned, but I can't see conductor log. This environment have been deployed using starlingx. On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert > wrote: Can you check the scheduler and conductor logs on the controllers? There should be entries describing why the instance failed to schedule. You may need to set “debug=true” in nova.conf to get more details. From: open infra > Sent: Wednesday, June 16, 2021 1:36 AM To: openstack-discuss > Subject: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi After setting up OpenStack environment using STX R5 (two controllers, two storage nodes and one worker), I have deployed a VM. VM ended up with ERROR status. I highly appreciate if someone can guide to dig further (what logs to check ) or to fix this issue. http://paste.openstack.org/show/806626/ Regards Danishka The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Wed Jun 16 14:41:42 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 16 Jun 2021 15:41:42 +0100 Subject: Current market share of OpenStack implementations? In-Reply-To: <389941623853828@mail.yandex.ru> References: <389941623853828@mail.yandex.ru> Message-ID: <2435d29b2a390b109344f07c3f94b7596de1a0a0.camel@redhat.com> On Wed, 2021-06-16 at 17:32 +0300, Dmitriy Rabotyagov wrote: > Hey! > > It's unfortunate that you've missed OpenStack-Ansible deployment > tooling in the list of the candidates. It's known to be used pretty > widely as well including big production deployments. also kolla-ansible/kayobe, and airship, but that's more for starlingx although it is an openstack installer too. > > 16.06.2021, 17:06, "Linebarger, John" : > > This is more of a marketing than a technical question, but the > > online sources seem to be either a few years old or the links have > > gone stale. What is the collective assessment of the OpenStack Hive > > Mind about the current worldwide market share of OpenStack > > implementations, in descending order? And which ones are growing > > and which ones are leveling off or declining? Candidates would seem > > to be: > > > > 1. TripleO OpenStack > > 2. Canonical’s Charmed OpenStack > > 3. RedHat OpenStack Platform > > 4. Mirantis Cloud Platform you have also kind of mixed two different things here: the items above, 2-4, are openstack distributions, while 1, 5 and 6 are openstack installers, not distributions, so currently it's not clear if you are interested in the distribution of openstack used or the installation method. one can imply the other but not always. > > 5. Packstack > > 6. Devstack (possibly; but that’s just build from source, right?) > > > > But does something more comprehensive, more definitive, more > > current exist with regard to worldwide market share of OpenStack > > implementations? Or even segmented by continent, geographical > > region, or country? > > > > Thanks! Enjoy! > > > > John M. Linebarger, PhD, MBA > > Principal Member of Technical Staff > > Sandia National Laboratories > > (Office) 505-845-8282 > > (Cell) 505-681-4879 > > > > > -- > Kind Regards, > Dmitriy Rabotyagov > From mark at stackhpc.com Wed Jun 16 14:42:25 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 16 Jun 2021 15:42:25 +0100 Subject: Current market share of OpenStack implementations? In-Reply-To: References: Message-ID: On Wed, 16 Jun 2021 at 14:59, Linebarger, John wrote: > This is more of a marketing than a technical question, but the online > sources seem to be either a few years old or the links have gone stale. > What is the collective assessment of the OpenStack Hive Mind about the > current worldwide market share of OpenStack implementations, in descending > order? And which ones are growing and which ones are leveling off or > declining? Candidates would seem to be: > > > The Kolla project remains popular as a vendor-neutral production-ready deployment tool. Here are the user survey results: https://www.openstack.org/analytics/ 1. TripleO OpenStack > > 2. Canonical’s Charmed OpenStack > > 3. RedHat OpenStack Platform > > 4. Mirantis Cloud Platform > > 5. Packstack > > 6. Devstack (possibly; but that’s just build from source, right?) > > But does something more comprehensive, more definitive, more current exist > with regard to worldwide market share of OpenStack implementations? Or even > segmented by continent, geographical region, or country? > > Thanks! Enjoy! > > *John M.
Linebarger, PhD, MBA* > > Principal Member of Technical Staff > > Sandia National Laboratories > > (Office) 505-845-8282 > > (Cell) 505-681-4879 > > [image: AWS Certified Solutions Architect - Professional][image: AWS > Certified Solutions Architect - Associate][image: AWS Certified Developer > - Associate] > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Wed Jun 16 14:43:39 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 16 Jun 2021 17:43:39 +0300 Subject: Current market share of OpenStack implementations? In-Reply-To: References: Message-ID: <383511623854473@mail.yandex.ru> Hey! Duplicating email in non-HTML for web-archive :( It's unfortunate that you've missed OpenStack-Ansible deployment tooling in the list of the candidates. It's known to be used pretty widely as well including big production deployments. Also I'd add kolla-ansible as a pretty popular choice to go to the list as well. 16.06.2021, 17:06, "Linebarger, John" : > This is more of a marketing than a technical question, but the online sources seem to be either a few years old or the links have gone stale. What is the collective assessment of the OpenStack Hive Mind about the current worldwide market share of OpenStack implementations, in descending order? And which ones are growing and which ones are leveling off or declining? Candidates would seem to be: > > 1. TripleO OpenStack > > 2. Canonical’s Charmed OpenStack > > 3. RedHat OpenStack Platform > > 4. Mirantis Cloud Platform > > 5. Packstack > > 6. Devstack (possibly; but that’s just build from source, right?) > > But does something more comprehensive, more definitive, more current exist with regard to worldwide market share of OpenStack implementations? Or even segmented by continent, geographical region, or country? > > Thanks!  Enjoy! > > John M. Linebarger, PhD, MBA > > Principal Member of Technical Staff > > Sandia National Laboratories > > (Office) 505-845-8282 > > (Cell)     505-681-4879 --  Kind Regards, Dmitriy Rabotyagov From david.ames at canonical.com Wed Jun 16 14:45:14 2021 From: david.ames at canonical.com (David Ames) Date: Wed, 16 Jun 2021 07:45:14 -0700 Subject: [ironic][ovn] Ironic with OVN as the SDN In-Reply-To: References: Message-ID: Lucas, On Wed, Jun 16, 2021 at 1:40 AM Lucas Alvares Gomes wrote: > > Hi, > > On Tue, Jun 15, 2021 at 11:21 PM David Ames wrote: > > > > I am looking for a summary of the support or lack thereof for Ironic > > and OVN. Missing OVN features are explained in [0] and [1]. This bug > > [2] seems to imply one can run a Neutron OVS DHCP agent alongside OVN. > > > > Can I get definitive answers to the following questions: > > > > Is it possible to run neutron-dhcp-agent alongside OVN to handle the > > iPXE boot process? If so, is there documentation anywhere? > > > > It's not documented nor tested anywhere upstream AFAIK. That said, it > should be possible to deploy the Neutron DHCP agent on the controller > nodes and that will take care of the PXE/iPXE boot process for the > baremetal nodes. It would be great if someone actually gave this a go > and documented their experience, we could even think of a job in the > gate exercising this scenario I think. > > > What is the roadmap/projected timeline for OVN to support Ironic? 
> > > > Core OVN already started landing some of the missing bits for the OVN > DHCP server to start supporting this scenario (for example: > https://github.com/ovn-org/ovn/commit/2476911b853518092e023767663a69a7191a8fb7) > > We also need a few changes in the ML2/OVN Neutron driver, such as: > > 1) Ironic uses a dnsmasq syntax for setting the DHCP options in > Neutron (e.g !ipxe,bootfile-name=...) this is not understood by > ML2/OVN yet, so we need to work on that. > > 2) The OVN built-in DHCP server is distributed and it runs on the > compute nodes and fulfills the DHCP request for the VMs running on > that local hypervisor, for baremetal things are a bit different. > Fortunately, OVN has support for something called "external" ports > (see: https://github.com/ovn-org/ovn/commit/96080083581275afaec8bc281d6a648aff7ef39e) > which I believe we can use for the baremetal case. For example, if the > VNIC of the port created is VNIC_BAREMETAL we can create an external > port for it in ML2/OVN. That's how we support SR-IOV in ML2/OVN too. > > We have most of the bits and pieces in place already and I hope I can > start working on supporting baremetal nodes natively in ML2/OVN soon > too. But for now, the only way to achieve it is by using the Neutron > DHCP agent as discussed before. > > Cheers, > Lucas > > > [0] https://docs.openstack.org/neutron/latest/ovn/gaps.html > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1622154 > > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1620943 Thank you, this is exactly what I was looking for. Glad to hear it is on the road map and in progress. I will spend some time attempting to get neutron-dhcp-agent working with Ironic and OVN and I will be sure to document that process if successful. Thanks again, -- David Ames OpenStack Charm Engineering From allison at openstack.org Wed Jun 16 14:46:31 2021 From: allison at openstack.org (Allison Price) Date: Wed, 16 Jun 2021 09:46:31 -0500 Subject: Current market share of OpenStack implementations? In-Reply-To: References: Message-ID: One of the best ways we can make sure that user choices like deployment tools and software vendors are captured is by encouraging anyone you know who is deploying OpenStack to take the OpenStack User Survey: https://www.openstack.org/user-survey/survey-2021 We pull data every August and can share anonymized data around choices like this. Please share this far and wide - it’s very helpful to have everyone update their survey if they’ve completed it in a previous cycle or start a new one if you’ve never completed one before. If you have any questions, please let me know! Allison Allison Price Director of Marketing & Community / OpenInfra Foundation e: allison at openinfra.dev p: +1-214-686-6821 > On Jun 16, 2021, at 9:42 AM, Mark Goddard wrote: > > > > On Wed, 16 Jun 2021 at 14:59, Linebarger, John > wrote: > This is more of a marketing than a technical question, but the online sources seem to be either a few years old or the links have gone stale. What is the collective assessment of the OpenStack Hive Mind about the current worldwide market share of OpenStack implementations, in descending order? And which ones are growing and which ones are leveling off or declining? Candidates would seem to be: > > > > > The Kolla project remains popular as a vendor-neutral production-ready deployment tool. > > Here are the user survey results: https://www.openstack.org/analytics/ > > > 1. TripleO OpenStack > > 2. Canonical’s Charmed OpenStack > > 3. RedHat OpenStack Platform > > 4. 
Mirantis Cloud Platform > > 5. Packstack > > 6. Devstack (possibly; but that’s just build from source, right?) > > > > But does something more comprehensive, more definitive, more current exist with regard to worldwide market share of OpenStack implementations? Or even segmented by continent, geographical region, or country? > > > > Thanks! Enjoy! > > > > John M. Linebarger, PhD, MBA > > Principal Member of Technical Staff > > Sandia National Laboratories > > (Office) 505-845-8282 > > (Cell) 505-681-4879 > > <> <> <> <> <> <> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Wed Jun 16 14:46:29 2021 From: amy at demarco.com (Amy Marrich) Date: Wed, 16 Jun 2021 09:46:29 -0500 Subject: Current market share of OpenStack implementations? In-Reply-To: <389941623853828@mail.yandex.ru> References: <389941623853828@mail.yandex.ru> Message-ID: I've added Allison Price on here as deployment method is part of the User Survey. Thanks, Amy (spotz) On Wed, Jun 16, 2021 at 9:35 AM Dmitriy Rabotyagov wrote: > Hey! > > It's unfortunate that you've missed OpenStack-Ansible deployment tooling > in the list of the candidates. It's known to be used pretty widely as well > including big production deployments. > > 16.06.2021, 17:06, "Linebarger, John" : > > This is more of a marketing than a technical question, but the online > sources seem to be either a few years old or the links have gone stale. > What is the collective assessment of the OpenStack Hive Mind about the > current worldwide market share of OpenStack implementations, in descending > order? And which ones are growing and which ones are leveling off or > declining? Candidates would seem to be: > > > > 1. TripleO OpenStack > > 2. Canonical’s Charmed OpenStack > > 3. RedHat OpenStack Platform > > 4. Mirantis Cloud Platform > > 5. Packstack > > 6. Devstack (possibly; but that’s just build from source, right?) > > > > But does something more comprehensive, more definitive, more current exist > with regard to worldwide market share of OpenStack implementations? Or even > segmented by continent, geographical region, or country? > > > > Thanks! Enjoy! > > > > *John M. Linebarger, PhD, MBA* > > Principal Member of Technical Staff > > Sandia National Laboratories > > (Office) 505-845-8282 > > (Cell) 505-681-4879 > > [image: AWS Certified Solutions Architect - > Professional] > [image: > AWS Certified Solutions Architect - Associate] > [image: > AWS Certified Developer - Associate] > > > > > > > > > -- > Kind Regards, > Dmitriy Rabotyagov > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 9285 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 7238 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 2419 bytes Desc: not available URL: From juliaashleykreger at gmail.com Wed Jun 16 14:48:57 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 16 Jun 2021 07:48:57 -0700 Subject: Current market share of OpenStack implementations? In-Reply-To: References: Message-ID: CC'ing Wes Wilson as I've had similar discussions with him in the past at the foundation level. 
I, personally, would really love a framework where projects could upload some anonymized usage statistics and enable the entire community to query the data. One of the biggest challenges we face in Ironic is the lack of full context into the various scale of problems. The math starts to change drastically as deployments scale, and without some of that context in the community beyond operators expressing frustrations in private conversations, it is difficult for projects to build the consensus on the scale of problems in the open, much less the possible positive impact of fixing the perceived problem. Unfortunately the Foundation is likely to lean on the User Survey, but it is opt-in and announced on a mailing list and only really captures community participants as a result. Plus the growing trend I've heard mentioned on multiple fronts is operators are becoming more secretive about what they are doing with OpenStack because it is becoming an everyday thing. It is at the lower levels of their infrastructure and divulging details starts to become an operational security risk. On Wed, Jun 16, 2021 at 7:20 AM Jiri Podivin wrote: > Answer to this question would interest me as well. > I suppose the marketing, or equivalent, departments might have a good idea > about their own customers. > But I'm not sure how they would feel about sharing it, especially in > detail. > > > On Wed, Jun 16, 2021 at 4:06 PM Linebarger, John > wrote: > >> This is more of a marketing than a technical question, but the online >> sources seem to be either a few years old or the links have gone stale. >> What is the collective assessment of the OpenStack Hive Mind about the >> current worldwide market share of OpenStack implementations, in descending >> order? And which ones are growing and which ones are leveling off or >> declining? Candidates would seem to be: >> >> >> >> 1. TripleO OpenStack >> >> 2. Canonical’s Charmed OpenStack >> >> 3. RedHat OpenStack Platform >> >> 4. Mirantis Cloud Platform >> >> 5. Packstack >> >> 6. Devstack (possibly; but that’s just build from source, right?) >> >> >> >> But does something more comprehensive, more definitive, more current >> exist with regard to worldwide market share of OpenStack implementations? >> Or even segmented by continent, geographical region, or country? >> >> >> >> Thanks! Enjoy! >> >> >> >> *John M. Linebarger, PhD, MBA* >> >> Principal Member of Technical Staff >> >> Sandia National Laboratories >> >> (Office) 505-845-8282 >> >> (Cell) 505-681-4879 >> >> [image: AWS Certified Solutions Architect - >> Professional] >> [image: >> AWS Certified Solutions Architect - Associate] >> [image: >> AWS Certified Developer - Associate] >> >> >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 7238 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 2419 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image005.png Type: image/png Size: 9285 bytes Desc: not available URL: From marios at redhat.com Wed Jun 16 15:19:04 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 16 Jun 2021 18:19:04 +0300 Subject: [TripleO] moving stable/stein and stable/queens to End of Life Message-ID: Hello TripleO, I am proposing to move to End of Life for stable/stein and stable/queens branches across all TripleO repos (stein [1] and queens [2]). The proposal was introduced and discussed in recent TripleO irc meetings [3] and there have been no objections so far. The main reason is to allow us to focus our resources on the more active branches, train ussuri victoria wallaby and xena (!). We moved the rocky branch to EOL last cycle with [4] for the same reason. I think this move is well justified as these branches are getting relatively few commits lately (so calling into question the resources we are dedicating to running and maintaining the check, gate and 3rd party periodic/promotion lines). Looking at tripleo-common, python-tripleoclient and tripleo-heat-templates I counted 9 commits to stable/stein and 23 commits to stable/queens since November 2020 (details on those numbers at [5][6]). Please speak up if you have any concerns or questions about the proposal. If there are none then the next step is to post a review against the releases repo to make it official. regards, marios [1] https://releases.openstack.org/teams/tripleo.html#stein [2] https://releases.openstack.org/teams/tripleo.html#queens [3] https://meetings.opendev.org/meetings/tripleo/2021/tripleo.2021-06-08-14.00.html [4] https://review.opendev.org/c/openstack/releases/+/774244 [5] https://gist.github.com/marios/b3155fe3b1318cc26bfa4bc15c764a26#gistcomment-3752102 [6] https://gist.github.com/marios/b3155fe3b1318cc26bfa4bc15c764a26#gistcomment-3755127 From openinfradn at gmail.com Wed Jun 16 15:27:31 2021 From: openinfradn at gmail.com (open infra) Date: Wed, 16 Jun 2021 20:57:31 +0530 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: worker-0 contin following under /var/log/pods kube-system_calico-node-brhq2_b3e3c8d7-ebeb-405a-af83-a19240d82997 openstack_neutron-metadata-agent-worker-0-13cc482d-6tgrf_88f4f98d-4526-4148-b99c-4e2ee9072995 kube-system_kube-multus-ds-amd64-k4pr4_1b0342da-641c-4248-a3ad-c67a9f8bcce5 openstack_neutron-ovs-agent-worker-0-13cc482d-6m9sw_0bc04a1a-8d36-4a37-99b3-19f6ebba9856 kube-system_kube-proxy-8hv88_dc495497-b346-4b28-ac5d-5aec9848f886 openstack_neutron-sriov-agent-worker-0-13cc482d-22rsz_a09c060e-3c4b-4aae-ad7f-ec2f96c0fcc3 kube-system_kube-sriov-cni-ds-amd64-5rpfv_02eab05d-eb1b-4a17-8559-8e6809d830d7 openstack_nova-compute-worker-0-13cc482d-27k2r_dcdd6855-b3e3-4137-b582-2fb665569490 openstack_libvirt-libvirt-default-4dcqc_c29b390b-7b69-4661-b0ac-cfc7b8c2d175 openstack_openvswitch-db-pnvcp_2855de13-37a7-4045-8444-0bbb54a6d8f3 openstack_neutron-dhcp-agent-worker-0-13cc482d-7pt4l_219e142a-1277-4b29-b4b5-dd365db4efb8 openstack_openvswitch-vswitchd-scrmc_c9eb3f17-9383-4f04-80bf-1cb63550d092 openstack_neutron-l3-agent-worker-0-13cc482d-7sfnp_ee272908-5710-4d2d-9206-8ec20a91ff10 openstack_osh-openstack-garbd-garbd-58dc7995cf-7hrd9_0366de80-b0d7-428e-8e6f-a9206231fd7a On Wed, Jun 16, 2021 at 8:08 PM Braden, Albert wrote: > Someone will need to dig through the starlingx documentation and figure > out where the log files are located. Do you want to do that? 
> > > > https://docs.starlingx.io/ > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 9:12 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > I can see sm-scheduler.log not sure if its the correct log file you > mentioned, but I can't see conductor log. > > This environment have been deployed using starlingx. > > > > On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Can you check the scheduler and conductor logs on the controllers? There > should be entries describing why the instance failed to schedule. You may > need to set “debug=true” in nova.conf to get more details. > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 1:36 AM > *To:* openstack-discuss > *Subject:* [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Hi > > > > After setting up OpenStack environment using STX R5 (two controllers, two > storage nodes and one worker), I have deployed a VM. VM ended up with ERROR > status. > > > > I highly appreciate if someone can guide to dig further (what logs to > check ) or to fix this issue. > > > > http://paste.openstack.org/show/806626/ > > > > Regards > > Danishka > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svyas at redhat.com Wed Jun 16 15:33:31 2021 From: svyas at redhat.com (Soniya Vyas) Date: Wed, 16 Jun 2021 21:03:31 +0530 Subject: Fwd: Regarding RBAC tests In-Reply-To: References: Message-ID: Hello everyone, I was just curious about writing RBAC tests, hence while writing them I came across certain analysis and confusions. Analysis:- Whenever an api is called within patrole for its testing, it's the call to its respective service client. Like, let's say we need to test 'create_network:is_default', naturally for expected result we need to call the api first, which would be a call to neutron client's create_network[1]. Question:- But which api's to call for logging resource[2]?For example, let's say we need to write a RBAC test for Create log. 
As far as I know, Openstack uses python's default logging extension. So the question is which api's to call for the expected result. In Patrole, there are no assertions with expected and actual data. So, How do RBAC-tests test the api? Any help with these confusions would be greatly appreciated. [1] https://review.opendev.org/c/openstack/patrole/+/583340/4/patrole_tempest_plugin/tests/api/network/test_networks_rbac.py#100 [2] https://docs.openstack.org/api-ref/network/v2/?expanded=#log-resource Thanks and Regards, Soniya Vyas -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Wed Jun 16 15:58:22 2021 From: openinfradn at gmail.com (open infra) Date: Wed, 16 Jun 2021 21:28:22 +0530 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: Just clarifying: I have created an external network (including a subnet). openstack network create --share --external --provider-network-type vlan provider-external --provider-physical-network physnet0 Then I have created internal network using GUI. Used 'Networks' tab under Projects > Network. But it failed with the following error. "Failed to create network "internal-net": Unable to create the network. No tenant network is available for allocation." But when I try to create internal network under "Admin > Network > Networks", still I need to provide physical network like in external network creation. So, do I need to create both external and internal network using the same physical network? On Wed, Jun 16, 2021 at 8:57 PM open infra wrote: > > > worker-0 contin following under /var/log/pods > > > kube-system_calico-node-brhq2_b3e3c8d7-ebeb-405a-af83-a19240d82997 > > openstack_neutron-metadata-agent-worker-0-13cc482d-6tgrf_88f4f98d-4526-4148-b99c-4e2ee9072995 > > kube-system_kube-multus-ds-amd64-k4pr4_1b0342da-641c-4248-a3ad-c67a9f8bcce5 > > openstack_neutron-ovs-agent-worker-0-13cc482d-6m9sw_0bc04a1a-8d36-4a37-99b3-19f6ebba9856 > > kube-system_kube-proxy-8hv88_dc495497-b346-4b28-ac5d-5aec9848f886 > > openstack_neutron-sriov-agent-worker-0-13cc482d-22rsz_a09c060e-3c4b-4aae-ad7f-ec2f96c0fcc3 > > > kube-system_kube-sriov-cni-ds-amd64-5rpfv_02eab05d-eb1b-4a17-8559-8e6809d830d7 > > openstack_nova-compute-worker-0-13cc482d-27k2r_dcdd6855-b3e3-4137-b582-2fb665569490 > > > openstack_libvirt-libvirt-default-4dcqc_c29b390b-7b69-4661-b0ac-cfc7b8c2d175 > > openstack_openvswitch-db-pnvcp_2855de13-37a7-4045-8444-0bbb54a6d8f3 > > > openstack_neutron-dhcp-agent-worker-0-13cc482d-7pt4l_219e142a-1277-4b29-b4b5-dd365db4efb8 > openstack_openvswitch-vswitchd-scrmc_c9eb3f17-9383-4f04-80bf-1cb63550d092 > > > openstack_neutron-l3-agent-worker-0-13cc482d-7sfnp_ee272908-5710-4d2d-9206-8ec20a91ff10 > > openstack_osh-openstack-garbd-garbd-58dc7995cf-7hrd9_0366de80-b0d7-428e-8e6f-a9206231fd7a > > On Wed, Jun 16, 2021 at 8:08 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > >> Someone will need to dig through the starlingx documentation and figure >> out where the log files are located. Do you want to do that? >> >> >> >> https://docs.starlingx.io/ >> >> >> >> *From:* open infra >> *Sent:* Wednesday, June 16, 2021 9:12 AM >> *To:* Braden, Albert >> *Cc:* openstack-discuss >> *Subject:* Re: [EXTERNAL] Error creating VMs >> >> >> >> *CAUTION:* The e-mail below is from an external source. Please exercise >> caution before opening attachments, clicking links, or following guidance. 
>> >> I can see sm-scheduler.log not sure if its the correct log file you >> mentioned, but I can't see conductor log. >> >> This environment have been deployed using starlingx. >> >> >> >> On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert < >> C-Albert.Braden at charter.com> wrote: >> >> Can you check the scheduler and conductor logs on the controllers? There >> should be entries describing why the instance failed to schedule. You may >> need to set “debug=true” in nova.conf to get more details. >> >> >> >> *From:* open infra >> *Sent:* Wednesday, June 16, 2021 1:36 AM >> *To:* openstack-discuss >> *Subject:* [EXTERNAL] Error creating VMs >> >> >> >> *CAUTION:* The e-mail below is from an external source. Please exercise >> caution before opening attachments, clicking links, or following guidance. >> >> Hi >> >> >> >> After setting up OpenStack environment using STX R5 (two controllers, two >> storage nodes and one worker), I have deployed a VM. VM ended up with ERROR >> status. >> >> >> >> I highly appreciate if someone can guide to dig further (what logs to >> check ) or to fix this issue. >> >> >> >> http://paste.openstack.org/show/806626/ >> >> >> >> Regards >> >> Danishka >> >> >> >> The contents of this e-mail message and >> any attachments are intended solely for the >> addressee(s) and may contain confidential >> and/or legally privileged information. If you >> are not the intended recipient of this message >> or if this message has been addressed to you >> in error, please immediately alert the sender >> by reply e-mail and then delete this message >> and any attachments. If you are not the >> intended recipient, you are notified that >> any use, dissemination, distribution, copying, >> or storage of this message or any attachment >> is strictly prohibited. >> >> The contents of this e-mail message and >> any attachments are intended solely for the >> addressee(s) and may contain confidential >> and/or legally privileged information. If you >> are not the intended recipient of this message >> or if this message has been addressed to you >> in error, please immediately alert the sender >> by reply e-mail and then delete this message >> and any attachments. If you are not the >> intended recipient, you are notified that >> any use, dissemination, distribution, copying, >> or storage of this message or any attachment >> is strictly prohibited. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jun 16 16:05:30 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 16 Jun 2021 11:05:30 -0500 Subject: Fwd: Regarding RBAC tests In-Reply-To: References: Message-ID: <17a15920b76.f9c54d0f77445.1682947049905638536@ghanshyammann.com> ---- On Wed, 16 Jun 2021 10:33:31 -0500 Soniya Vyas wrote ---- > Hello everyone, > I was just curious about writing RBAC tests, hence while writing them I came across certain analysis and confusions. > Analysis:- > Whenever an api is called within patrole for its testing, it's the call to its respective service client. Like, let's say we need to test 'create_network:is_default', naturally for expected result we need to call the api first, which would be a call to neutron client's create_network[1]. > Question:- > > But which api's to call for logging resource[2]?For example, let's say we need to write a RBAC test for Create log. > As far as I know, Openstack uses python's default logging extension. So the question is which api's to call for the expected result. 
> > In Patrole, there are no assertions with expected and actual data. So, How do RBAC-tests test the api? > Any help with these confusions would be greatly appreciated. Thanks, Sonia for improving the test coverage. For any Tempest-like tests (Patrole tests are Tempest-like tests, we have implemented the service clients for each required API. We do not use the python-based client in Tempest for debugging reason etc. We have all the service clients either present in tempest/lib/services/ [1] or on the tempest plugin side. For neutron's log API we do not have any service client written yet so you need to write the service client in Tempest for that first and then add RBAC test in patrole. You can do in parallel and test with depends-on. You can write new service client here https://opendev.org/openstack/tempest/src/branch/master/tempest/lib/services/network. Example: https://review.opendev.org/c/openstack/tempest/+/465810 [1] https://opendev.org/openstack/tempest/src/branch/master/tempest/lib/services -gmann > > [1] https://review.opendev.org/c/openstack/patrole/+/583340/4/patrole_tempest_plugin/tests/api/network/test_networks_rbac.py#100[2] https://docs.openstack.org/api-ref/network/v2/?expanded=#log-resource > Thanks and Regards,Soniya Vyas From peiyong.zhang at salesforce.com Wed Jun 16 16:26:32 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Wed, 16 Jun 2021 09:26:32 -0700 Subject: networking-calico-1.4.2-1.el7.centos.noarch.rpm In-Reply-To: References: Message-ID: Great. We are installing the release-train, which/where networking-calico version shall be used? thx. Pete On Wed, Jun 16, 2021 at 12:23 AM Neil Jerram wrote: > Hi Pete, > > On Wed, Jun 16, 2021 at 6:24 AM Pete Zhang > wrote: > >> Anyone know where I can download this rpm? thx. >> > > networking-calico-1.4.2-1.el7.centos.noarch.rpm looks like a very old > version - are you sure you really want that particular version? > > That said, the Calico team (including me) hosts RPMs at > https://binaries.projectcalico.org/rpm/, and that particular version can > be found at these subpaths: > ./calico-2.0/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm > ./calico-2.1/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm > ./calico-2.5/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm > ./calico-2.2/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm > ./calico-2.3/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm > ./calico-2.4/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm > > Best wishes, > Neil > > -- > > Neil Jerram > > Senior Software Engineer > > Tigera > > neil at tigera.io | @neiljerram > > Follow Tigera: Blog | Twitter > | LinkedIn > > > Leader in Kubernetes Security and Observability > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Wed Jun 16 16:29:33 2021 From: neil at tigera.io (Neil Jerram) Date: Wed, 16 Jun 2021 17:29:33 +0100 Subject: networking-calico-1.4.2-1.el7.centos.noarch.rpm In-Reply-To: References: Message-ID: What do you mean by "the release-train"? On Wed, Jun 16, 2021 at 5:26 PM Pete Zhang wrote: > Great. > We are installing the release-train, which/where networking-calico version > shall be used? thx. > > Pete > > On Wed, Jun 16, 2021 at 12:23 AM Neil Jerram wrote: > >> Hi Pete, >> >> On Wed, Jun 16, 2021 at 6:24 AM Pete Zhang >> wrote: >> >>> Anyone know where I can download this rpm? thx. 
>>> >> >> networking-calico-1.4.2-1.el7.centos.noarch.rpm looks like a very old >> version - are you sure you really want that particular version? >> >> That said, the Calico team (including me) hosts RPMs at >> https://binaries.projectcalico.org/rpm/, and that particular version can >> be found at these subpaths: >> ./calico-2.0/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >> ./calico-2.1/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >> ./calico-2.5/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >> ./calico-2.2/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >> ./calico-2.3/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >> ./calico-2.4/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >> >> Best wishes, >> Neil >> >> -- >> >> Neil Jerram >> >> Senior Software Engineer >> >> Tigera >> >> neil at tigera.io | @neiljerram >> >> Follow Tigera: Blog | Twitter >> | LinkedIn >> >> >> Leader in Kubernetes Security and Observability >> >> > > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Wed Jun 16 16:39:10 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Wed, 16 Jun 2021 09:39:10 -0700 Subject: networking-calico-1.4.2-1.el7.centos.noarch.rpm In-Reply-To: References: Message-ID: The openstack release TRAIN. https://releases.openstack.org/train/index.html . On Wed, Jun 16, 2021 at 9:29 AM Neil Jerram wrote: > What do you mean by "the release-train"? > > On Wed, Jun 16, 2021 at 5:26 PM Pete Zhang > wrote: > >> Great. >> We are installing the release-train, which/where networking-calico >> version shall be used? thx. >> >> Pete >> >> On Wed, Jun 16, 2021 at 12:23 AM Neil Jerram wrote: >> >>> Hi Pete, >>> >>> On Wed, Jun 16, 2021 at 6:24 AM Pete Zhang >>> wrote: >>> >>>> Anyone know where I can download this rpm? thx. >>>> >>> >>> networking-calico-1.4.2-1.el7.centos.noarch.rpm looks like a very old >>> version - are you sure you really want that particular version? >>> >>> That said, the Calico team (including me) hosts RPMs at >>> https://binaries.projectcalico.org/rpm/, and that particular version >>> can be found at these subpaths: >>> ./calico-2.0/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>> ./calico-2.1/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>> ./calico-2.5/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>> ./calico-2.2/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>> ./calico-2.3/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>> ./calico-2.4/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>> >>> Best wishes, >>> Neil >>> >>> -- >>> >>> Neil Jerram >>> >>> Senior Software Engineer >>> >>> Tigera >>> >>> neil at tigera.io | @neiljerram >>> >>> Follow Tigera: Blog | Twitter >>> | LinkedIn >>> >>> >>> Leader in Kubernetes Security and Observability >>> >>> >> >> -- >> >> >> > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Wed Jun 16 16:50:54 2021 From: neil at tigera.io (Neil Jerram) Date: Wed, 16 Jun 2021 17:50:54 +0100 Subject: networking-calico-1.4.2-1.el7.centos.noarch.rpm In-Reply-To: References: Message-ID: Ah right. Will that be with Python 2 or Python 3? Our recent Calico releases have required Python 3, so if you're using Python 3 I recommend the latest, Calico 3.19. 
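On CentOS 7 that is normally just a matter of adding a repo file that points at the matching directory and installing from it - something like the sketch below. I am assuming a calico-3.19 subpath with the same layout as the older directories listed above, so please verify it exists before relying on it:

# add the (assumed) Calico 3.19 RPM repo and install networking-calico from it
sudo tee /etc/yum.repos.d/calico.repo <<'EOF'
[calico]
name=Calico networking-calico packages
baseurl=https://binaries.projectcalico.org/rpm/calico-3.19/
enabled=1
gpgcheck=0
EOF
sudo yum install -y networking-calico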
If you're using Python 2, our last Python 2 release series was 3.15, and RPMs for that can be found at https://binaries.projectcalico.org/rpm/calico-3.15-python2/ Hope that helps - please do ask in case you have more questions! On Wed, Jun 16, 2021 at 5:39 PM Pete Zhang wrote: > The openstack release TRAIN. > https://releases.openstack.org/train/index.html. > > On Wed, Jun 16, 2021 at 9:29 AM Neil Jerram wrote: > >> What do you mean by "the release-train"? >> >> On Wed, Jun 16, 2021 at 5:26 PM Pete Zhang >> wrote: >> >>> Great. >>> We are installing the release-train, which/where networking-calico >>> version shall be used? thx. >>> >>> Pete >>> >>> On Wed, Jun 16, 2021 at 12:23 AM Neil Jerram wrote: >>> >>>> Hi Pete, >>>> >>>> On Wed, Jun 16, 2021 at 6:24 AM Pete Zhang < >>>> peiyong.zhang at salesforce.com> wrote: >>>> >>>>> Anyone know where I can download this rpm? thx. >>>>> >>>> >>>> networking-calico-1.4.2-1.el7.centos.noarch.rpm looks like a very old >>>> version - are you sure you really want that particular version? >>>> >>>> That said, the Calico team (including me) hosts RPMs at >>>> https://binaries.projectcalico.org/rpm/, and that particular version >>>> can be found at these subpaths: >>>> ./calico-2.0/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>> ./calico-2.1/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>> ./calico-2.5/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>> ./calico-2.2/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>> ./calico-2.3/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>> ./calico-2.4/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>> >>>> Best wishes, >>>> Neil >>>> >>>> -- >>>> >>>> Neil Jerram >>>> >>>> Senior Software Engineer >>>> >>>> Tigera >>>> >>>> neil at tigera.io | @neiljerram >>>> >>>> Follow Tigera: Blog | Twitter >>>> | LinkedIn >>>> >>>> >>>> Leader in Kubernetes Security and Observability >>>> >>>> >>> >>> -- >>> >>> >>> >> > > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peiyong.zhang at salesforce.com Wed Jun 16 16:52:10 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Wed, 16 Jun 2021 09:52:10 -0700 Subject: networking-calico-1.4.2-1.el7.centos.noarch.rpm In-Reply-To: References: Message-ID: Awesome, thanks for the info! On Wed, Jun 16, 2021 at 9:51 AM Neil Jerram wrote: > Ah right. Will that be with Python 2 or Python 3? > > Our recent Calico releases have required Python 3, so if you're using > Python 3 I recommend the latest, Calico 3.19. > > If you're using Python 2, our last Python 2 release series was 3.15, and > RPMs for that can be found at > https://binaries.projectcalico.org/rpm/calico-3.15-python2/ > > Hope that helps - please do ask in case you have more questions! > > > On Wed, Jun 16, 2021 at 5:39 PM Pete Zhang > wrote: > >> The openstack release TRAIN. >> https://releases.openstack.org/train/index.html. >> >> On Wed, Jun 16, 2021 at 9:29 AM Neil Jerram wrote: >> >>> What do you mean by "the release-train"? >>> >>> On Wed, Jun 16, 2021 at 5:26 PM Pete Zhang >>> wrote: >>> >>>> Great. >>>> We are installing the release-train, which/where networking-calico >>>> version shall be used? thx. >>>> >>>> Pete >>>> >>>> On Wed, Jun 16, 2021 at 12:23 AM Neil Jerram wrote: >>>> >>>>> Hi Pete, >>>>> >>>>> On Wed, Jun 16, 2021 at 6:24 AM Pete Zhang < >>>>> peiyong.zhang at salesforce.com> wrote: >>>>> >>>>>> Anyone know where I can download this rpm? thx. 
>>>>>> >>>>> >>>>> networking-calico-1.4.2-1.el7.centos.noarch.rpm looks like a very old >>>>> version - are you sure you really want that particular version? >>>>> >>>>> That said, the Calico team (including me) hosts RPMs at >>>>> https://binaries.projectcalico.org/rpm/, and that particular version >>>>> can be found at these subpaths: >>>>> ./calico-2.0/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>>> ./calico-2.1/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>>> ./calico-2.5/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>>> ./calico-2.2/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>>> ./calico-2.3/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>>> ./calico-2.4/noarch/networking-calico-1.4.2-1.el7.centos.noarch.rpm >>>>> >>>>> Best wishes, >>>>> Neil >>>>> >>>>> -- >>>>> >>>>> Neil Jerram >>>>> >>>>> Senior Software Engineer >>>>> >>>>> Tigera >>>>> >>>>> neil at tigera.io | @neiljerram >>>>> >>>>> Follow Tigera: Blog | Twitter >>>>> | LinkedIn >>>>> >>>>> >>>>> Leader in Kubernetes Security and Observability >>>>> >>>>> >>>> >>>> -- >>>> >>>> >>>> >>>> >>> >> >> -- >> >> >> > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From levonmelikbekjan at yahoo.de Wed Jun 16 18:22:26 2021 From: levonmelikbekjan at yahoo.de (levonmelikbekjan at yahoo.de) Date: Wed, 16 Jun 2021 20:22:26 +0200 Subject: AW: AW: AW: Customization of nova-scheduler In-Reply-To: References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> <000101d761f7$0a0bf090$1e23d1b0$@yahoo.de> Message-ID: <000001d762dc$8fa76b40$aef641c0$@yahoo.de> I would like to explain you that, but now I am facing another issue. The quota info message appeared „Quota exceeded for cores: Requested 4, but already used 59 of 60 cores“. In my version Train is a path called /usr/lib/python2.7/site-packages/nova/openstack with a file called wsgi.py. To be more precise it is the __exit__ function in the ResourceExceptionHandler, which outputs this information message. My question is, where does this function get the information from that the limit of the VCPUs and possibly other resources has been exceeded? I mean the scheduler doesn’t even run. I can’t get it until now. Maybe I need to manipulate the nova-api rather then manipulating the nova-scheduler. Thank you very much and have a nice day! Best regards Levon Von: Laurent Dumont Gesendet: Mittwoch, 16. Juni 2021 02:55 An: levonmelikbekjan at yahoo.de Cc: Sean Mooney ; openstack Betreff: Re: AW: AW: Customization of nova-scheduler Out of curiosity, how did you end up implementing your workflow? Through the scheduler or is the logic external to Openstack? On Tue, Jun 15, 2021 at 11:48 AM > wrote: Hi Sean, I am already done with my solution. Everything works as expected! :) Thank you for your support. You guys are great. Best regards Levon -----Ursprüngliche Nachricht----- Von: Sean Mooney > Gesendet: Dienstag, 15. 
Juni 2021 16:37 An: Stephen Finucane >; levonmelikbekjan at yahoo.de ; openstack at lists.openstack.org Betreff: Re: AW: AW: Customization of nova-scheduler On Tue, 2021-06-15 at 15:18 +0100, Stephen Finucane wrote: > On Thu, 2021-06-10 at 17:21 +0200, levonmelikbekjan at yahoo.de wrote: > > Hi Stephen, > > > > I'm trying to customize my nova scheduler. However, if I change the > > nova.conf as it is written here > > https://docs.openstackorg/operations-guide/de/ops-customize-compute > > .html , then my python file cannot be found. How can I configure it > > correctly? > > > > Do you have any idea? > > > > My controller node is running with CENTOS 7. I couldn't install > > devstack because it is only supported for CENTOS 8 version. > > That document is very old. You want [1], which documents how to do > this properly. wwell that depend if they acatully want to write ther own filter yes but if they want to replace the scheduler with a new one we recently removed support for that right. previously we had several schduler implemtation like the caching scheduler and that old doc https://docs.openstack.org/operations-guide/de/ops-customize-compute.html descibes on how to replace the filter scheduler dirver with an new one. we deprecated it ussuri https://github.com/openstack/nova/commit/6a4cb24d39623930fd240e67d65013803459839d and you finally removed the extention point in febuary https://github.com/openstack/nova/commit/5aeb3a387494c4559d183d1290db3c92a96dfb90 so from wallaby on you can nolonger write an alternitvie schduler implemenation out of tree without reverting that. so yes https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter is how you customise schduling now but you cant customise the schduler itself out fo tree anymore. > > Hope this helps, > Stephen > > [1] > https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-y > our-own-filter > > > Best regards > > Levon > > > > -----Ursprüngliche Nachricht----- > > Von: Stephen Finucane > > > Gesendet: Montag, 31. Mai 2021 18:21 > > An: levonmelikbekjan at yahoo.de ; openstack at lists.openstack.org > > Betreff: Re: AW: Customization of nova-scheduler > > > > On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > > > Hello Stephen, > > > > > > I am a student from Germany who is currently working on his > > > bachelor thesis. My job is to build a cloud solution for my > > > university with Openstack. The functionality should include the > > > prioritization of users. So that you can imagine exactly how the > > > whole thing should work, I would like to give you an example > > > > > > Two cases should be solved! > > > > > > Case 1: A user A with a low priority uses a VM from Openstack with > > > half performance of the available host. Then user B comes in with > > > a high priority and needs the full performance of the host for his > > > VM. When creating the VM of user B, the VM of user A should be > > > deleted because there is not enough compute power for user B The > > > VM of user B is successfully created. > > > > > > Case 2: A user A with a low priority uses a VM with half the > > > performance of the available host, then user B comes in with a > > > high priority and needs half of the performance of the host for his VM. > > > When creating the VM of user B, user A should not be deleted, > > > since enough computing power is available for both users. > > > > > > These cases should work for unlimited users. 
In order to optimize > > > the whole thing, I would like to write a function that precisely > > > calculates all performance components to determine whether enough > > > resources are available for the VM of the high priority user > > > > What you're describing is commonly referred to as "preemptible" or > > "spot" > > instances. This topic has a long, complicated history in nova and > > has yet to be implemented. Searching for "preemptible instances > > openstack" should yield you lots of discussion on the topic along > > with a few proof-of-concept approaches using external services or > > out-of-tree modifications to nova. > > > > > I’m new to Openstack, but I’ve already implemented cloud projects > > > with Microsoft Azure and have solid programming skills. Can you > > > give me a hint where and how I can start? > > > > As hinted above, this is likely to be a very difficult project given > > the fraught history of the idea. I don't want to dissuade you from > > this work but you should be aware of what you're getting into from > > the start. If you're serious about pursuing this, I suggest you > > first do some research on prior art. As noted above, there is lots > > of information on the internet about this. With this research done, > > you'll need to decide whether this is something you want to approach > > within nova itself, via out-of-tree extensions or via a third party > > project. If you're opting for integration with nova, then you'll > > need to think long and hard about how you would design such a system > > and start working on a spec (a design document) outlining your > > proposed solution. Details on how to write a spec are discussed at > > [1]. The only extension points nova offers today are scheduler > > filters and weighers so your options for an out-of-tree extension > > approach will be limited. A third party project will arguably be the > > easiest approach but you will be restricted to talking to nova's > > REST APIs which may limit the design somewhat. This Blazar spec [2] > > could give you some ideas on this approach (assuming it was never > > actually implemented, though it may well have been). > > > > > My university gave me three compute hosts and one control host to > > > implement this solution for the bachelor thesis. I’m currently > > > setting up Openstack and all the services on the control host all > > > by myself to understand all the functionality (sorry for not using > > > Packstack) 😉. All my hosts have CentOS 7 and the minimum > > > deployment which I configure is Train. > > > > > > My idea is to work with nova schedulers, because they seem to be > > > interesting for my case. I've found a whole infrastructure > > > description of the provisioning of an instance in Openstack > > > https://docs.openstack.org/operations-guide/de/_images/provision-a > > > n-instance.png > > > . > > > > > > The nova scheduler > > > https://docs.openstack.org/operations-guide/ops-customize-compute. > > > html > > > is the first component, where it is possible to implement > > > functions via Python and the Compute API > > > https://docs.openstack.org/api-ref/compute/?expanded=show-details- > > > of-specific-api-version-detail,list-servers-detail > > > to check for active VMs and probably delete them if needed before > > > a successful request for an instantiation can be made. > > > > > > What do you guys think about it? Does it seem like a good starting > > > point for you or is it the wrong approach? 
> > > > This could potentially work, but I suspect there will be serious > > performance implications with this, particularly at scale. Scheduler > > filters are historically used for simple things like "find me a > > group of hosts that have this metadata attribute I set on my image". > > Making API calls sounds like something that would take significant > > time and therefore slow down the schedule process. You'd also have > > to decide what your heuristic for deciding which VM(s) to delete > > would be, since there's nothing obvious in nova that you could use. > > You could use something as simple as filter extra specs or something > > as complicated as an external service. > > > > This should be lots to get you started. Once again, do make sure > > you're aware of what you're getting yourself into before you start. > > This could get complicated very quickly :) > > > > Cheers, > > Stephen > > > > > I'm very happy to have found you!!! > > > > > > Thank you really much for your time! > > > > > > [1] https://specs.openstack.org/openstack/nova-specs/readme.html > > [2] > > https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blaz > > ar-preemptible-instances.html > > > > > Best regards > > > Levon > > > > > > -----Ursprüngliche Nachricht----- > > > Von: Stephen Finucane > > > > Gesendet: Montag, 31. Mai 2021 12:34 > > > An: Levon Melikbekjan >; > > > openstack at lists.openstack.org > > > Betreff: Re: Customization of nova-scheduler > > > > > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > > > Hello Openstack team, > > > > > > > > is it possible to customize the nova-scheduler via Python? If > > > > yes, how? > > > > > > Yes, you can provide your own filters and weighers. This is > > > documented at [1]. > > > > > > Hope this helps, > > > Stephen > > > > > > [1] > > > https://docs.openstack.org/nova/latest/user/filter-scheduler#writi > > > ng-y > > > our-own-filter > > > > > > > > > > > Best regards > > > > Levon > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Jun 16 20:26:16 2021 From: melwittt at gmail.com (melanie witt) Date: Wed, 16 Jun 2021 13:26:16 -0700 Subject: AW: AW: AW: Customization of nova-scheduler In-Reply-To: <000001d762dc$8fa76b40$aef641c0$@yahoo.de> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> <000601d75e0c$586ce8f0$0946bad0$@yahoo.de> <000101d761f7$0a0bf090$1e23d1b0$@yahoo.de> <000001d762dc$8fa76b40$aef641c0$@yahoo.de> Message-ID: On 6/16/21 11:22, levonmelikbekjan at yahoo.de wrote: > I would like to explain you that, but now I am facing another issue. The > quota info message appeared „Quota exceeded for cores: Requested 4, but > already used 59 of 60 cores“. > > In my version Train is a path called > /usr/lib/python2.7/site-packages/nova/openstack with a file called > wsgi.py. To be more precise it is the __exit__ function in the > ResourceExceptionHandler, which outputs this information message. > > My question is, where does this function get the information from that > the limit of the VCPUs and possibly other resources has been exceeded? I > mean the scheduler doesn’t even run. I can’t get it until now. Maybe I > need to manipulate the nova-api rather then manipulating the > nova-scheduler. 
I assume you mean you get this message when you try to create a server? The quota check first occurs in nova-api and it uses the --flavor requested to determine the requested cores and ram, then it counts non-deleted instances in the database's flavors cores and ram for the project and compares the totals against the quota limit set in the /os-quota-sets API [1] and if one hasn't been set there it uses the config option values under [quota] in nova.conf. Hope that helps, -melwitt [1] https://docs.openstack.org/api-ref/compute/#show-a-quota From gmann at ghanshyammann.com Wed Jun 16 23:03:11 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 16 Jun 2021 18:03:11 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 17th at 1500 UTC In-Reply-To: <17a0c35752f.fad2fd90669617.8187659333285607760@ghanshyammann.com> References: <17a0c35752f.fad2fd90669617.8187659333285607760@ghanshyammann.com> Message-ID: <17a171072ff.f9524dfc89528.3390241747999629507@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on June 17th at 1500 UTC in #openstack-tc IRC OFTC channel. -https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Migration from 'Freenode' to 'OFTC' (gmann) ** https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc * Recommendation on moving the meeting channel to project channel ** https://review.opendev.org/c/openstack/project-team-guide/+/794839 * Governance nono-active repos retirement & cleanup ** https://etherpad.opendev.org/p/governance-repos-cleanup * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Mon, 14 Jun 2021 15:27:47 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > NOTE: TC MEETINGS WILL BE HELD IN #openstack-tc CHANNEL ON OFTC NETWORK (NOT FREENODE) > > Technical Committee's next weekly meeting is scheduled for June 17th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, June 16th , at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > > -gmann > > From peiyong.zhang at salesforce.com Wed Jun 16 23:50:22 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Wed, 16 Jun 2021 16:50:22 -0700 Subject: ImportError: cannot import name greenpool Message-ID: We hit this error during "glance-manage db_sync": *ImportError: cannot import name greenpool.* Any idea what the root cause is and how to fix it? We have the following rpms installed (thought related). 
python2-greenlet-0.4.9-1.el7.x86_64.rpm python2-eventlet-0.18.4-2.el7.noarch.rpm python2-gevent-1.1.2-2.el7.x86_64.rpm Debug: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Sleeping for 5 seconds between tries Debug: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Exec try 10/10 Debug: Exec[neutron-db-sync](provider=posix): Executing 'neutron-db-manage upgrade heads' Debug: Executing: 'neutron-db-manage upgrade heads' Debug: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Sleeping for 5 seconds between tries Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Traceback (most recent call last): Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/bin/neutron-db-manage", line 10, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: sys.exit(main()) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 657, in main Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: return_val |= bool(CONF.command.func(config, CONF.command.name)) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 179, in do_upgrade Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: run_sanity_checks(config, revision) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 641, in run_sanity_checks Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: script_dir.run_env() Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 425, in run_env Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: util.load_python_file(self.dir, 'env.py') Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in load_python_file Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: module = load_module_py(module_id, path) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 141, in load_module_py Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: mod = imp.load_source(module_id, path, fp) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py", line 24, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from neutron.db.migration.models import head # noqa Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/models/head.py", line 28, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from neutron.common import utils Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 35, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: import eventlet Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/eventlet/__init__.py", line 10, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from eventlet import convenience Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line 4, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from eventlet import greenpool Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: ImportError: cannot import name greenpool Error: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Failed to call refresh: 'neutron-db-manage upgrade heads' returned 1 instead of one of [0] Error: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: 'neutron-db-manage upgrade heads' returned 1 instead of one of [0] Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Dependency Exec[neutron-db-sync] has failures: true -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Thu Jun 17 03:41:55 2021 From: adivya1.singh at gmail.com (Adivya Singh) Date: Thu, 17 Jun 2021 09:11:55 +0530 Subject: Sql query for generating list of router associated with IP and there L3 agent Message-ID: Hi, Does anybody know of this sql query , or any reference to Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Jun 17 06:01:54 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 17 Jun 2021 11:31:54 +0530 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: Found nova-conductor and nova-scheduler logs [1],[2]. But I have no idea about what caused this issue. [1] http://paste.openstack.org/show/806720/ [2] http://paste.openstack.org/show/806721/ On Wed, Jun 16, 2021 at 8:08 PM Braden, Albert wrote: > Someone will need to dig through the starlingx documentation and figure > out where the log files are located. Do you want to do that? > > > > https://docs.starlingx.io/ > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 9:12 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > I can see sm-scheduler.log not sure if its the correct log file you > mentioned, but I can't see conductor log. > > This environment have been deployed using starlingx. > > > > On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Can you check the scheduler and conductor logs on the controllers? There > should be entries describing why the instance failed to schedule. You may > need to set “debug=true” in nova.conf to get more details. > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 1:36 AM > *To:* openstack-discuss > *Subject:* [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Hi > > > > After setting up OpenStack environment using STX R5 (two controllers, two > storage nodes and one worker), I have deployed a VM. VM ended up with ERROR > status. > > > > I highly appreciate if someone can guide to dig further (what logs to > check ) or to fix this issue. 
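A quick first check that usually narrows this kind of failure down, assuming the admin openrc is loaded on the controller (this is a generic OpenStack CLI sketch, not StarlingX-specific), is to ask nova itself why the boot failed:

  openstack server show <server-uuid> -c status -c fault

For a server stuck in ERROR state the "fault" field normally carries the same message the conductor records ("No valid host was found", over quota, port binding failed, and so on), which tells you whether to look next at the scheduler, neutron or cinder. The <server-uuid> placeholder is the ID of the failed instance.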
> > > > http://paste.openstack.org/show/806626/ > > > > Regards > > Danishka > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Thu Jun 17 08:13:39 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Thu, 17 Jun 2021 16:13:39 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi Mark, I made some time to test this again today with Victoria on a different ACH. During host configure, it fails not finding python: TASK [Verify that a command can be executed] ********************************************************************************************************************************** fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 192.168.29.235 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python3: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} PLAY RECAP ******************************************************************************************************************************************************************** juc-ucsb-5-p : ok=4 changed=1 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0 The task you mentioned previously, was ran but was not run against the host because no hosts matched: PLAY [Ensure python is installed] ********************************************************************************************************************************************* skipping: no hosts matched I looked at `venvs/kayobe/share/kayobe/ansible/kayobe-ansible-user.yml` and a comment in there says it's only run if the kayobe user account is inaccessible. In my deployment I have "#kayobe_ansible_user:" which is not defined by me. Previously, I defined it as my management user and it caused an issue with the password. So I'm unsure why this is an issue. To work around, I manually installed python and the host configure was successful this time around. I tried this twice and same experience both times. Then later, during service deploy it fails here: RUNNING HANDLER [common : Restart fluentd container] ************************************************************************************************************************** fatal: [juc-ucsb-5-p]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", line 259, in _raise_for_status\\n response.raise_for_status()\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/requests/models.py\", line 941, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/fluentd/start\\n\\nDuring handling of the above exception, another exception occurred:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", line 1131, in main\\n File \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", line 785, in recreate_or_restart_container\\n File \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", line 817, in start_container\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/utils/decorators.py\", line 19, in wrapped\\n return f(self, resource_id, *args, **kwargs)\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/container.py\", line 1108, in start\\n self._raise_for_status(res)\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", line 261, in _raise_for_status\\n raise create_api_error_from_http_exception(e)\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/errors.py\", line 31, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation)\\ndocker.errors.APIError: 500 Server Error: Internal Server Error (\"error while creating mount source path \\'/etc/localtime\\': mkdir /etc/lo The error says that the file exists. So the first time I just renamed the symlink file and then this was successful in terms of allowing the deploy process to proceed past this point of failure. The 2nd time around, the rename was not good enough because there's a check to make sure that the file is present there. So the 2nd time around I issued "touch /etc/localtime" after renaming the existing and then this passed. Lastly, the deploy fails with a blocking action that I cannot resolve myself: TASK [openvswitch : Ensuring OVS bridge is properly setup] ******************************************************************************************************************** changed: [juc-ucsb-5-p] => (item=['enp6s0-ovs', 'enp6s0']) This step breaks networking on the host. Looking at the openvswitchdb, I think this could be something similar to the issue seen before with Wallaby. The first time I tried this was with enp6s0 configured as a bond0 as desired. I then tried without a bond0 and both times got the same result. If I reboot the host then I can get successful ping replies for a short while before they stop again. Same experience as previous. I believe the pings stop when the bridge config is applied from the container shortly after host boot up. Ovs-vsctl show output: [1] I took a look at the logs [2] but to me I dont see anything alarming or that could point to the issue. I've previously tried turning off IPv6 and this did not have success in this part, although the log message about IPv6 went away. I tried removing the physical interface from the bridge "ovs-vsctl del-port..." 
and as soon as I do this, I can ping the host once again. Once I re-add the port back to the bridge, I can no longer connect to the host. There's no errors from ovs-vsctl at this point, either. [1] ovs-vsctl output screenshot [2] ovs logs screenshot BTW I am trimming the rest of the mail off because it exceeds 40kb size for the group. Kind regards, Tony Pearce > ... > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Jun 17 12:56:53 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 17 Jun 2021 14:56:53 +0200 Subject: Current market share of OpenStack implementations? In-Reply-To: References: Message-ID: <82fd8919-5ab8-521c-a6be-a7698c7d4e1b@debian.org> On 6/16/21 4:16 PM, Jiri Podivin wrote: > Answer to this question would interest me as well. > I suppose the marketing, or equivalent, departments might have a good > idea about their own customers. > But I'm not sure how they would feel about sharing it, especially in > detail. If you think in terms of "customers" yes, but these are free software, so they can be installed without even asking... Cheers, Thomas Goirand (zigo) From skaplons at redhat.com Thu Jun 17 13:01:49 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 17 Jun 2021 15:01:49 +0200 Subject: [neutron] Drivers meeting agenda for 18.06.2021 Message-ID: <5345361.cRlJu7B4Gb@p1> Hi, Agenda for tomorrow's drivers meeting is at [1]. We have one new RFE to discuss [2] - It's another approach to do OpenFlow based DVR :) We have also one new RFE [3] which I would like You to check and help triaging before we will discuss it in the drivers meeting. [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda [2] https://bugs.launchpad.net/neutron/+bug/1931953 [3] https://bugs.launchpad.net/neutron/+bug/1932154 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From C-Albert.Braden at charter.com Thu Jun 17 13:25:04 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 17 Jun 2021 13:25:04 +0000 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: <3599ddf2a37d4069b6fc1e3bb9d9efa1@ncwmexgp009.CORP.CHARTERCOM.com> Both links show the conductor log. It tells us that no valid host was found. The scheduler log should show why no valid host was found. From: open infra Sent: Thursday, June 17, 2021 2:02 AM To: Braden, Albert Cc: openstack-discuss Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Found nova-conductor and nova-scheduler logs [1],[2]. But I have no idea about what caused this issue. [1] http://paste.openstack.org/show/806720/ [2] http://paste.openstack.org/show/806721/ On Wed, Jun 16, 2021 at 8:08 PM Braden, Albert > wrote: Someone will need to dig through the starlingx documentation and figure out where the log files are located. Do you want to do that? https://docs.starlingx.io/ From: open infra > Sent: Wednesday, June 16, 2021 9:12 AM To: Braden, Albert > Cc: openstack-discuss > Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. 
Please exercise caution before opening attachments, clicking links, or following guidance. I can see sm-scheduler.log not sure if its the correct log file you mentioned, but I can't see conductor log. This environment have been deployed using starlingx. On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert > wrote: Can you check the scheduler and conductor logs on the controllers? There should be entries describing why the instance failed to schedule. You may need to set “debug=true” in nova.conf to get more details. From: open infra > Sent: Wednesday, June 16, 2021 1:36 AM To: openstack-discuss > Subject: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi After setting up OpenStack environment using STX R5 (two controllers, two storage nodes and one worker), I have deployed a VM. VM ended up with ERROR status. I highly appreciate if someone can guide to dig further (what logs to check ) or to fix this issue. http://paste.openstack.org/show/806626/ Regards Danishka The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Jun 17 13:57:30 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 17 Jun 2021 19:27:30 +0530 Subject: [EXTERNAL] Error creating VMs In-Reply-To: <3599ddf2a37d4069b6fc1e3bb9d9efa1@ncwmexgp009.CORP.CHARTERCOM.com> References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> <3599ddf2a37d4069b6fc1e3bb9d9efa1@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: Hi Albert, Sorry for the inconvenience. Please note that I have recreated both the data network at starlingx (physical network of openstack) and the network of openstack. But I still have the same issue. Please find scheduler and conductor logs. 
# Scheduler Logs http://paste.openstack.org/show/806729/ ## Conductor logs http://paste.openstack.org/show/806727/ On Thu, Jun 17, 2021 at 6:55 PM Braden, Albert wrote: > Both links show the conductor log. It tells us that no valid host was > found. The scheduler log should show why no valid host was found. > > > > *From:* open infra > *Sent:* Thursday, June 17, 2021 2:02 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Found nova-conductor and nova-scheduler logs [1],[2]. > > But I have no idea about what caused this issue. > > > > [1] http://paste.openstack.org/show/806720/ > > [2] http://paste.openstack.org/show/806721/ > > > > > > On Wed, Jun 16, 2021 at 8:08 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Someone will need to dig through the starlingx documentation and figure > out where the log files are located. Do you want to do that? > > > > https://docs.starlingx.io/ > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 9:12 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > I can see sm-scheduler.log not sure if its the correct log file you > mentioned, but I can't see conductor log. > > This environment have been deployed using starlingx. > > > > On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Can you check the scheduler and conductor logs on the controllers? There > should be entries describing why the instance failed to schedule. You may > need to set “debug=true” in nova.conf to get more details. > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 1:36 AM > *To:* openstack-discuss > *Subject:* [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Hi > > > > After setting up OpenStack environment using STX R5 (two controllers, two > storage nodes and one worker), I have deployed a VM. VM ended up with ERROR > status. > > > > I highly appreciate if someone can guide to dig further (what logs to > check ) or to fix this issue. > > > > http://paste.openstack.org/show/806626/ > > > > Regards > > Danishka > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. 
If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Thu Jun 17 14:02:47 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 17 Jun 2021 14:02:47 +0000 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> <3599ddf2a37d4069b6fc1e3bb9d9efa1@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: <51ccfe2090bb444380dd3c09ed6e0de5@ncwmexgp009.CORP.CHARTERCOM.com> It looks like your RMQ is broken. What do you get from “rabbitmqctl cluster_status”? From: open infra Sent: Thursday, June 17, 2021 9:58 AM To: Braden, Albert Cc: openstack-discuss Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi Albert, Sorry for the inconvenience. Please note that I have recreated both the data network at starlingx (physical network of openstack) and the network of openstack. But I still have the same issue. Please find scheduler and conductor logs. # Scheduler Logs http://paste.openstack.org/show/806729/ ## Conductor logs http://paste.openstack.org/show/806727/ On Thu, Jun 17, 2021 at 6:55 PM Braden, Albert > wrote: Both links show the conductor log. It tells us that no valid host was found. The scheduler log should show why no valid host was found. From: open infra > Sent: Thursday, June 17, 2021 2:02 AM To: Braden, Albert > Cc: openstack-discuss > Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Found nova-conductor and nova-scheduler logs [1],[2]. But I have no idea about what caused this issue. [1] http://paste.openstack.org/show/806720/ [2] http://paste.openstack.org/show/806721/ On Wed, Jun 16, 2021 at 8:08 PM Braden, Albert > wrote: Someone will need to dig through the starlingx documentation and figure out where the log files are located. Do you want to do that? https://docs.starlingx.io/ From: open infra > Sent: Wednesday, June 16, 2021 9:12 AM To: Braden, Albert > Cc: openstack-discuss > Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. I can see sm-scheduler.log not sure if its the correct log file you mentioned, but I can't see conductor log. 
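(For locating those logs on an stx-openstack deployment: the OpenStack services run as Kubernetes pods, so a rough sketch, assuming kubectl access on the active controller, is:

  kubectl -n openstack get pods | grep -E 'nova-(conductor|scheduler)'
  kubectl -n openstack logs <nova-conductor-pod-name>

The same container output also ends up on disk under /var/log/pods/ on the controller. The "openstack" namespace and the pod name pattern are assumptions based on a default stx-openstack install.)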
This environment have been deployed using starlingx. On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert > wrote: Can you check the scheduler and conductor logs on the controllers? There should be entries describing why the instance failed to schedule. You may need to set “debug=true” in nova.conf to get more details. From: open infra > Sent: Wednesday, June 16, 2021 1:36 AM To: openstack-discuss > Subject: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi After setting up OpenStack environment using STX R5 (two controllers, two storage nodes and one worker), I have deployed a VM. VM ended up with ERROR status. I highly appreciate if someone can guide to dig further (what logs to check ) or to fix this issue. http://paste.openstack.org/show/806626/ Regards Danishka The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openinfradn at gmail.com Thu Jun 17 14:06:14 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 17 Jun 2021 19:36:14 +0530 Subject: [EXTERNAL] Error creating VMs In-Reply-To: <51ccfe2090bb444380dd3c09ed6e0de5@ncwmexgp009.CORP.CHARTERCOM.com> References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> <3599ddf2a37d4069b6fc1e3bb9d9efa1@ncwmexgp009.CORP.CHARTERCOM.com> <51ccfe2090bb444380dd3c09ed6e0de5@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: controller-0:/var/log/pods$ sudo rabbitmqctl cluster_status Password: Cluster status of node rabbit at localhost ... [{nodes,[{disc,[rabbit at localhost]}]}, {running_nodes,[rabbit at localhost]}, {cluster_name,<<"rabbit at controller-0">>}, {partitions,[]}, {alarms,[{rabbit at localhost,[]}]}] On Thu, Jun 17, 2021 at 7:32 PM Braden, Albert wrote: > It looks like your RMQ is broken. What do you get from “rabbitmqctl > cluster_status”? > > > > *From:* open infra > *Sent:* Thursday, June 17, 2021 9:58 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Hi Albert, > > Sorry for the inconvenience. > > Please note that I have recreated both the data network at starlingx (physical network of openstack) and the network of openstack. > > But I still have the same issue. Please find scheduler and conductor logs. > > # Scheduler Logs > > http://paste.openstack.org/show/806729/ > > > > ## Conductor logs > > http://paste.openstack.org/show/806727/ > > > > On Thu, Jun 17, 2021 at 6:55 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Both links show the conductor log. It tells us that no valid host was > found. The scheduler log should show why no valid host was found. > > > > *From:* open infra > *Sent:* Thursday, June 17, 2021 2:02 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Found nova-conductor and nova-scheduler logs [1],[2]. > > But I have no idea about what caused this issue. > > > > [1] http://paste.openstack.org/show/806720/ > > [2] http://paste.openstack.org/show/806721/ > > > > > > On Wed, Jun 16, 2021 at 8:08 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Someone will need to dig through the starlingx documentation and figure > out where the log files are located. Do you want to do that? > > > > https://docs.starlingx.io/ > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 9:12 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > I can see sm-scheduler.log not sure if its the correct log file you > mentioned, but I can't see conductor log. > > This environment have been deployed using starlingx. > > > > On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Can you check the scheduler and conductor logs on the controllers? There > should be entries describing why the instance failed to schedule. You may > need to set “debug=true” in nova.conf to get more details. 
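(For reference, the debug switch mentioned above is a standard oslo.config option, so the change itself is just the following in nova.conf:

  [DEFAULT]
  debug = True

On StarlingX the containerized services normally pick this up through helm overrides for the stx-openstack application rather than a hand-edited file, so treat the snippet as a sketch of the end result. With debug enabled, the scheduler log should show per-filter results such as "Filter ... returned 0 hosts", which usually points at why no valid host was found.)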
> > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 1:36 AM > *To:* openstack-discuss > *Subject:* [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Hi > > > > After setting up OpenStack environment using STX R5 (two controllers, two > storage nodes and one worker), I have deployed a VM. VM ended up with ERROR > status. > > > > I highly appreciate if someone can guide to dig further (what logs to > check ) or to fix this issue. > > > > http://paste.openstack.org/show/806626/ > > > > Regards > > Danishka > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Jun 17 14:52:29 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 17 Jun 2021 20:22:29 +0530 Subject: [glance] nominating Cyril Roelandt for glance core Message-ID: Hi All, I am nominating Cyril Roelandt (cyril-roelandt LP and Steap on IRC) to be a Glance core. Cyril has been around the Glance community for a long time and is familiar with the architecture and design patterns used in Glance and its related projects. 
He's contributed code, triaged bugs, provided bug fixes, and did quality reviews for Glance. He is also helping me in reducing our bug backlogs. Considering the current situation with the project, however, it would be an enormous help to have someone as knowledgeable about Glance as Cyril to have +2 abilities. I discussed this with cyril, he's agreed to be a core reviewer. In any case, I'd like to put Cyril to work as soon as possible! So please reply to this message with comments or concerns before 23:59 UTC on Monday 21 June. I'd like to confirm Cyril as a core on Tuesday 22 June. Thanks and Regards, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Thu Jun 17 15:32:59 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 17 Jun 2021 11:32:59 -0400 Subject: [glance] nominating Cyril Roelandt for glance core In-Reply-To: References: Message-ID: <241998ba-52c3-13e3-042d-1c18a9c5833b@gmail.com> On 6/17/21 10:52 AM, Abhishek Kekane wrote: > Hi All, > > I am nominating Cyril Roelandt (cyril-roelandt LP and Steap on IRC) to > be a Glance core. Cyril has been around the Glance community for > a long time and is familiar with the architecture and design patterns > used in Glance and its related projects.  He's contributed code, > triaged bugs, provided bug fixes, and did quality reviews for Glance. He > is also helping me in reducing our bug backlogs. > > Considering the current situation with the project, however, it would be > an enormous help to have someone as knowledgeable about Glance as Cyril > to have +2 abilities. I discussed this with cyril, he's agreed to be a > core reviewer. Cyril's participation in the project definitely increased in wallaby and has continued into xena, and I'm happy to see that he's willing to take on this responsibility. > In any case, I'd like to put Cyril to work as soon as possible!  So > please reply to this message with comments or concerns before 23:59 > UTC on Monday 21 June. I'd like to confirm Cyril as a core on Tuesday 22 > June. I also would like to see Cyril put to work as soon as possible! +2 from me. > > Thanks and Regards, > > Abhishek From thomas at goirand.fr Thu Jun 17 12:55:51 2021 From: thomas at goirand.fr (Thomas Goirand) Date: Thu, 17 Jun 2021 14:55:51 +0200 Subject: Current market share of OpenStack implementations? In-Reply-To: References: Message-ID: On 6/16/21 3:58 PM, Linebarger, John wrote: > This is more of a marketing than a technical question, but the online > sources seem to be either a few years old or the links have gone stale. > What is the collective assessment of the OpenStack Hive Mind about the > current worldwide market share of OpenStack implementations, in > descending order? And which ones are growing and which ones are leveling > off or declining? Candidates would seem to be: > >   > > 1. TripleO OpenStack > > 2. Canonical’s Charmed OpenStack > > 3. RedHat OpenStack Platform > > 4. Mirantis Cloud Platform > > 5. Packstack > > 6. Devstack (possibly; but that’s just build from source, right?) 
Thanks to not forget Debian: https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer Cheers, Thomas Goirand (zigo) From C-Albert.Braden at charter.com Thu Jun 17 14:21:03 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 17 Jun 2021 14:21:03 +0000 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> <3599ddf2a37d4069b6fc1e3bb9d9efa1@ncwmexgp009.CORP.CHARTERCOM.com> <51ccfe2090bb444380dd3c09ed6e0de5@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: It looks like RMQ is up but services can’t connect to it. What do you see in the RMQ web interface? This might be a clue: ERROR oslo.messaging._drivers.impl_rabbit [req-b30bb3d6-0ab1-4dbb-aed9-1d9cece3eac7 d9f7048c1cd947cfa8ecef128a6cee89 e8813293073545f99658adbec2f80c1d - default default] Connection failed: failed to resolve broker hostname (retrying in 0 seconds): OSError: failed to resolve broker hostname Check for a typo in your config that points services to an incorrect RMQ hostname, or a networking issue that prevents them from connecting. From: open infra Sent: Thursday, June 17, 2021 10:06 AM To: Braden, Albert Cc: openstack-discuss Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. controller-0:/var/log/pods$ sudo rabbitmqctl cluster_status Password: Cluster status of node rabbit at localhost ... [{nodes,[{disc,[rabbit at localhost]}]}, {running_nodes,[rabbit at localhost]}, {cluster_name,<<"rabbit at controller-0">>}, {partitions,[]}, {alarms,[{rabbit at localhost,[]}]}] On Thu, Jun 17, 2021 at 7:32 PM Braden, Albert > wrote: It looks like your RMQ is broken. What do you get from “rabbitmqctl cluster_status”? From: open infra > Sent: Thursday, June 17, 2021 9:58 AM To: Braden, Albert > Cc: openstack-discuss > Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi Albert, Sorry for the inconvenience. Please note that I have recreated both the data network at starlingx (physical network of openstack) and the network of openstack. But I still have the same issue. Please find scheduler and conductor logs. # Scheduler Logs http://paste.openstack.org/show/806729/ ## Conductor logs http://paste.openstack.org/show/806727/ On Thu, Jun 17, 2021 at 6:55 PM Braden, Albert > wrote: Both links show the conductor log. It tells us that no valid host was found. The scheduler log should show why no valid host was found. From: open infra > Sent: Thursday, June 17, 2021 2:02 AM To: Braden, Albert > Cc: openstack-discuss > Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Found nova-conductor and nova-scheduler logs [1],[2]. But I have no idea about what caused this issue. [1] http://paste.openstack.org/show/806720/ [2] http://paste.openstack.org/show/806721/ On Wed, Jun 16, 2021 at 8:08 PM Braden, Albert > wrote: Someone will need to dig through the starlingx documentation and figure out where the log files are located. Do you want to do that? 
https://docs.starlingx.io/ From: open infra > Sent: Wednesday, June 16, 2021 9:12 AM To: Braden, Albert > Cc: openstack-discuss > Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. I can see sm-scheduler.log not sure if its the correct log file you mentioned, but I can't see conductor log. This environment have been deployed using starlingx. On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert > wrote: Can you check the scheduler and conductor logs on the controllers? There should be entries describing why the instance failed to schedule. You may need to set “debug=true” in nova.conf to get more details. From: open infra > Sent: Wednesday, June 16, 2021 1:36 AM To: openstack-discuss > Subject: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi After setting up OpenStack environment using STX R5 (two controllers, two storage nodes and one worker), I have deployed a VM. VM ended up with ERROR status. I highly appreciate if someone can guide to dig further (what logs to check ) or to fix this issue. http://paste.openstack.org/show/806626/ Regards Danishka The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. 
E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramerama at tataelxsi.co.in Thu Jun 17 14:24:30 2021 From: ramerama at tataelxsi.co.in (Ramesh Ramanathan B) Date: Thu, 17 Jun 2021 14:24:30 +0000 Subject: Nova shows incorrect VM status when compute is down. Message-ID: Dear All, One observation we have while using Open Stack Rocky is, when a compute node goes down, the VM status still shows active (the servers running in the compute node that went down). Is this the expected behavior? Any configurations required to get the right status. In the attached image the compute is down, but the VM status still shows active. We are running a data center so it is not practical to run nova reset-state for all the servers. Is there an API to force update Nova to show the correct status? Or any configurations missing that is causing this? Thanks Regards, Ramesh ________________________________ Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the intended recipient of this message , or if this message has been addressed to you in error, please immediately alert the sender by reply email and then delete this message and any attachments. If you are not the intended recipient, you are hereby notified that any use, dissemination, copying, or storage of this message or its attachments is strictly prohibited. Email transmission cannot be guaranteed to be secure or error-free, as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender, therefore, does not accept liability for any errors, omissions or contaminations in the contents of this message which might have occurred as a result of email transmission. If verification is required, please request for a hard-copy version. ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: MicrosoftTeams-image (12).png Type: image/png Size: 76305 bytes Desc: MicrosoftTeams-image (12).png URL: From whayutin at redhat.com Thu Jun 17 16:26:36 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 17 Jun 2021 10:26:36 -0600 Subject: [TripleO] moving stable/stein and stable/queens to End of Life In-Reply-To: References: Message-ID: +1 thanks for the clear communication and work! On Wed, Jun 16, 2021 at 9:21 AM Marios Andreou wrote: > Hello TripleO, > > I am proposing to move to End of Life for stable/stein and > stable/queens branches across all TripleO repos (stein [1] and queens > [2]). > > The proposal was introduced and discussed in recent TripleO irc > meetings [3] and there have been no objections so far. 
> > The main reason is to allow us to focus our resources on the more > active branches, train ussuri victoria wallaby and xena (!). We moved > the rocky branch to EOL last cycle with [4] for the same reason. > > I think this move is well justified as these branches are getting > relatively few commits lately (so calling into question the resources > we are dedicating to running and maintaining the check, gate and 3rd > party periodic/promotion lines). Looking at tripleo-common, > python-tripleoclient and tripleo-heat-templates I counted 9 commits to > stable/stein and 23 commits to stable/queens since November 2020 > (details on those numbers at [5][6]). > > Please speak up if you have any concerns or questions about the > proposal. If there are none then the next step is to post a review > against the releases repo to make it official. > > regards, marios > > > [1] https://releases.openstack.org/teams/tripleo.html#stein > [2] https://releases.openstack.org/teams/tripleo.html#queens > [3] > https://meetings.opendev.org/meetings/tripleo/2021/tripleo.2021-06-08-14.00.html > [4] https://review.opendev.org/c/openstack/releases/+/774244 > [5] > https://gist.github.com/marios/b3155fe3b1318cc26bfa4bc15c764a26#gistcomment-3752102 > [6] > https://gist.github.com/marios/b3155fe3b1318cc26bfa4bc15c764a26#gistcomment-3755127 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Jun 17 16:36:19 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 17 Jun 2021 19:36:19 +0300 Subject: [TripleO] next irc meeting Tuesday 22 June @ 1400 UTC in OFTC #tripleo Message-ID: Reminder the next TripleO irc meeting is this coming Tuesday 22 June 1400 UTC in OFTC irc channel #tripleo. agenda: https://wiki.openstack.org/wiki/Meetings/TripleO one-off items: https://etherpad.opendev.org/p/tripleo-meeting-items (feel free to add any tripleo status/ongoing work etc to the etherpad). Our last meeting was on Jun 8th - you can find logs @ http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-06-08-14.00.html Hope you can make it on Tuesday, regards, marios From openinfradn at gmail.com Thu Jun 17 16:38:19 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 17 Jun 2021 22:08:19 +0530 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> <3599ddf2a37d4069b6fc1e3bb9d9efa1@ncwmexgp009.CORP.CHARTERCOM.com> <51ccfe2090bb444380dd3c09ed6e0de5@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: Another instance created at 2021-06-17T16:10:06Z, and it also failed. I can't see rabbitmq errors around that timestamp. scheduler log http://paste.openstack.org/show/806737/ conducter log http://paste.openstack.org/show/806738/ At 2021-06-17 15:52:38.810, system was able to connect to AMQP http://paste.openstack.org/show/806739/ controller-0:/var/log/pods$ sudo rabbitmqctl list_queues Password: Listing queues ... 
sysinv.ceph_manager.192.168.204.1 0 sysinv.ceph_manager 0 barbican.workers 0 sysinv.fpga_agent_manager.controller-0 0 barbican.workers.barbican.queue 0 sysinv.conductor_manager 0 sysinv.agent_manager_fanout_fed76e414eb04da084ab35a1c27e1bf1 0 sysinv.agent_manager 0 sysinv.ceph_manager_fanout_d585fb522f46431da60741573a7f8575 0 notifications.info 0 sysinv.conductor_manager_fanout_cba8926fa47f4780a9e17f3d9b889500 0 sysinv.fpga_agent_manager 0 sysinv-keystone-listener-workers 0 sysinv.fpga_agent_manager_fanout_f00a01bfd3f54a64860f8d7454e9a78e 0 sysinv.agent_manager.controller-0 0 barbican.workers_fanout_2c97c319faa943e88eaed4c101e530c7 0 sysinv.conductor_manager.controller-0 0 On Thu, Jun 17, 2021 at 7:51 PM Braden, Albert wrote: > It looks like RMQ is up but services can’t connect to it. What do you see > in the RMQ web interface? > > > > This might be a clue: > > > > ERROR oslo.messaging._drivers.impl_rabbit > [req-b30bb3d6-0ab1-4dbb-aed9-1d9cece3eac7 d9f7048c1cd947cfa8ecef128a6cee89 > e8813293073545f99658adbec2f80c1d - default default] Connection failed: > failed to resolve broker hostname (retrying in 0 seconds): OSError: failed > to resolve broker hostname > > > > Check for a typo in your config that points services to an incorrect RMQ > hostname, or a networking issue that prevents them from connecting. > > > > *From:* open infra > *Sent:* Thursday, June 17, 2021 10:06 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > controller-0:/var/log/pods$ sudo rabbitmqctl cluster_status > Password: > Cluster status of node rabbit at localhost ... > [{nodes,[{disc,[rabbit at localhost]}]}, > {running_nodes,[rabbit at localhost]}, > {cluster_name,<<"rabbit at controller-0">>}, > {partitions,[]}, > {alarms,[{rabbit at localhost,[]}]}] > > > > On Thu, Jun 17, 2021 at 7:32 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > It looks like your RMQ is broken. What do you get from “rabbitmqctl > cluster_status”? > > > > *From:* open infra > *Sent:* Thursday, June 17, 2021 9:58 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Hi Albert, > > Sorry for the inconvenience. > > Please note that I have recreated both the data network at starlingx (physical network of openstack) and the network of openstack. > > But I still have the same issue. Please find scheduler and conductor logs. > > # Scheduler Logs > > http://paste.openstack.org/show/806729/ > > > > ## Conductor logs > > http://paste.openstack.org/show/806727/ > > > > On Thu, Jun 17, 2021 at 6:55 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Both links show the conductor log. It tells us that no valid host was > found. The scheduler log should show why no valid host was found. > > > > *From:* open infra > *Sent:* Thursday, June 17, 2021 2:02 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Found nova-conductor and nova-scheduler logs [1],[2]. > > But I have no idea about what caused this issue. 
> > > > [1] http://paste.openstack.org/show/806720/ > > [2] http://paste.openstack.org/show/806721/ > > > > > > On Wed, Jun 16, 2021 at 8:08 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Someone will need to dig through the starlingx documentation and figure > out where the log files are located. Do you want to do that? > > > > https://docs.starlingx.io/ > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 9:12 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > I can see sm-scheduler.log not sure if its the correct log file you > mentioned, but I can't see conductor log. > > This environment have been deployed using starlingx. > > > > On Wed, Jun 16, 2021 at 6:16 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > Can you check the scheduler and conductor logs on the controllers? There > should be entries describing why the instance failed to schedule. You may > need to set “debug=true” in nova.conf to get more details. > > > > *From:* open infra > *Sent:* Wednesday, June 16, 2021 1:36 AM > *To:* openstack-discuss > *Subject:* [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Hi > > > > After setting up OpenStack environment using STX R5 (two controllers, two > storage nodes and one worker), I have deployed a VM. VM ended up with ERROR > status. > > > > I highly appreciate if someone can guide to dig further (what logs to > check ) or to fix this issue. > > > > http://paste.openstack.org/show/806626/ > > > > Regards > > Danishka > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. 
If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From peiyong.zhang at salesforce.com Thu Jun 17 17:03:40 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Thu, 17 Jun 2021 10:03:40 -0700 Subject: ImportError: cannot import name greenpool Message-ID: Not sure if my previous email went through or not. Just resend it. We hit this error during "glance-manage db_sync": *ImportError: cannot import name greenpool.* Any idea what the root cause is and how to fix it? We have the following rpms installed (thought related).
python2-greenlet-0.4.9-1.el7.x86_64.rpm python2-eventlet-0.18.4-2.el7.noarch.rpm python2-gevent-1.1.2-2.el7.x86_64.rpm Debug: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Sleeping for 5 seconds between tries Debug: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Exec try 10/10 Debug: Exec[neutron-db-sync](provider=posix): Executing 'neutron-db-manage upgrade heads' Debug: Executing: 'neutron-db-manage upgrade heads' Debug: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Sleeping for 5 seconds between tries Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Traceback (most recent call last): Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/bin/neutron-db-manage", line 10, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: sys.exit(main()) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 657, in main Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: return_val |= bool(CONF.command.func(config, CONF.command.name )) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 179, in do_upgrade Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: run_sanity_checks(config, revision) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 641, in run_sanity_checks Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: script_dir.run_env() Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 425, in run_env Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: util.load_python_file(self.dir, 'env.py') Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in load_python_file Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: module = load_module_py(module_id, path) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 141, in load_module_py Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: mod = imp.load_source(module_id, path, fp) Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py", line 24, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from neutron.db.migration.models import head # noqa Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/db/migration/models/head.py", line 28, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from neutron.common import utils Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 35, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: import eventlet Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/eventlet/__init__.py", line 10, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from eventlet import convenience Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line 4, in Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from eventlet import greenpool Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: ImportError: cannot import name greenpool Error: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Failed to call refresh: 'neutron-db-manage upgrade heads' returned 1 instead of one of [0] Error: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: 'neutron-db-manage upgrade heads' returned 1 instead of one of [0] Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Dependency Exec[neutron-db-sync] has failures: true -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Jun 17 17:11:53 2021 From: melwittt at gmail.com (melanie witt) Date: Thu, 17 Jun 2021 10:11:53 -0700 Subject: Nova shows incorrect VM status when compute is down. In-Reply-To: References: Message-ID: On 6/17/21 07:24, Ramesh Ramanathan B wrote: > Dear All, > > One observation we have while using Open Stack Rocky is, when a compute > node goes down, the VM status still shows active (the servers running in > the compute node that went down). Is this the expected behavior? Any > configurations required to get the right status. > > In the attached image the compute is down, but the VM status still shows > active. We are running a data center so it is not practical to run nova > reset-state for all the servers. Is there an API to force update Nova to > show the correct status? Or any configurations missing that is causing this? This is expected behavior. The way we expose this condition is via the host_status field on the VM. By default, this is exposed only via the admin API but can be changed as desired via policy [1][2]. For example (this is admin): > $ nova list --fields id,name,status,task_state,power_state,host_status,networks > +--------------------------------------+-------+---------+------------+-------------+-------------+--------------------------------------+ > | ID | Name | Status | Task State | Power State | Host Status | Networks | > +--------------------------------------+-------+---------+------------+-------------+-------------+--------------------------------------+ > | f1aece4e-e7f7-436b-9faf-799129503dcc | eight | SHUTOFF | None | Shutdown | UP | public=2001:db8::3aa, 192.168.33.189 | > +--------------------------------------+-------+---------+------------+-------------+-------------+--------------------------------------+ The initial proposal was to make the VM status UNKNOWN when the host is down but the consensus was to stick with the host_status and keep it separate from the VM status. See the abandoned spec for details [3]. HTH, -melwitt [1] os_compute_api:servers:show:host_status:unknown-only in https://docs.openstack.org/nova/latest/configuration/policy.html [2] https://blueprints.launchpad.net/nova/+spec/policy-rule-for-host-status-unknown [3] https://review.opendev.org/c/openstack/nova-specs/+/666181 From smooney at redhat.com Thu Jun 17 17:18:56 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 17 Jun 2021 18:18:56 +0100 Subject: Nova shows incorrect VM status when compute is down. 
In-Reply-To: References: Message-ID: <2805adacfd746e0356ba61e3dedbe75b7034055d.camel@redhat.com> On Thu, 2021-06-17 at 14:24 +0000, Ramesh Ramanathan B wrote:
> Dear All,
> One observation we have while using Open Stack Rocky is, when a compute node goes down, the VM status still shows active (the servers running in the compute node that went down). Is this the expected behavior? Any configurations required to get the right status.
Yes, this is expected behavior. When the compute agent heartbeat is missed we do not know the status of the VMs, so we continue to report them in the last state we knew of.
We discussed adding an "unknown" state to the API at one point; I'm not sure if that has been added yet. Melanie, I think you reviewed or worked on that? There was concern about exposing this, as it reveals information about the backend hosts. For example, if a cell DB connection goes down but the VM is still active, it would be incorrect to report the VM state as down, because it is actually unknown and in this case the VM is still running. In the case where the compute agent was stopped for maintenance we also do not want to set the VM state to down, since stopping the agent does not prevent the VMs from working.
In either case, a temporarily disrupted cell connection or a stopped compute agent, reporting the VM as down in the API could lead to data corruption if you evacuated the VM, or if a user deleted it and tried to reuse its data volumes for a new VM. So in general it is incorrect to assume that the VM status in the DB reflects the state of the VM on the host when the compute agent is down, and it is not correct to update the status in the DB to down. Marking it as unknown could be valid, but some operators objected to that because it leaks information about their data center (such as that they are currently doing an upgrade or maintenance and have stopped the agent) to customers, which they see as a security issue.
> In the attached image the compute is down, but the VM status still shows active. We are running a data center so it is not practical to run nova reset-state for all the servers.
reset-state is not intended to be used for this. In fact, reset-state should almost never be used: treat every invocation of reset-state as running an arbitrary SQL update query and avoid it unless absolutely necessary.
> Is there an API to force update Nova to show the correct status? Or any configurations missing that is causing this?
> Thanks
> Regards,
> Ramesh
> ________________________________
> Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the intended recipient of this message, or if this message has been addressed to you in error, please immediately alert the sender by reply email and then delete this message and any attachments. If you are not the intended recipient, you are hereby notified that any use, dissemination, copying, or storage of this message or its attachments is strictly prohibited. Email transmission cannot be guaranteed to be secure or error-free, as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender, therefore, does not accept liability for any errors, omissions or contaminations in the contents of this message which might have occurred as a result of email transmission. If verification is required, please request for a hard-copy version.
> ________________________________ From fungi at yuggoth.org Thu Jun 17 17:28:40 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 17 Jun 2021 17:28:40 +0000 Subject: [dev] Mass patch filing for skipdist->skipsdist in tox.ini Message-ID: <20210617172840.zqa7em2vsoledf4q@yuggoth.org> I just pushed changes for the master branches of 26 projects under topic:tox-skipsdist to correct a frequent typographical error found in many projects' tox.ini files, which seems to get copied around a lot. I'm hoping by getting rid of the mistake everywhere we can stamp out future cargo-cult copies. The skipsdist option is set by many projects to avoid unilaterally building an sdist of the local tree every time tox is invoked, which can sometimes be quite time-consuming. Unfortunately it's all too easy to miss that 's' in the middle of the option name, and since tox ignores options it doesn't recognize, this ends up not having the intended effect. If you would rather keep the current behavior in your project, feel free to amend the change to one which removes the invalid "skipdist" line from your tox.ini instead. If you want to backport this to your stable branches, you can do that as well (it may even save some seconds on average tox-based job runtime); my main concern was more for people copying the mistake around, and it's far more likely they'll be copying from master branches so I didn't bother proposing any backports. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Thu Jun 17 17:37:46 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 17 Jun 2021 10:37:46 -0700 Subject: ImportError: cannot import name greenpool In-Reply-To: References: Message-ID: On Thu, Jun 17, 2021, at 10:03 AM, Pete Zhang wrote: > Not sure if my previous email went through or not. Just resend it. > > We hit this error during "glance-manage db_sync": > *ImportError: cannot import name greenpool.* > Any idea what the root cause is and how to fix it? > We have the following rpms installed (thought related). > > `python2-greenlet-0.4.9-1.el7.x86_64.rpm` > `python2-eventlet-0.18.4-2.el7.noarch.rpm` > `python2-gevent-1.1.2-2.el7.x86_64.rpm` > > snip > Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: > from eventlet import greenpool > > Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: > ImportError: cannot import name greenpool I'm not familiar with the CentOS7 packaging, but as a sanity check I ran `pip install greenlet==0.4.9 eventlet==0.18.4` in a python2 virtualenv then in a python2 interpreter `from eventlet import greenpool` runs successfully. I would try running this import by hand on your system to see if you can get any more information. Could be a packaging issue or potentially some sort of name collision between script names? 
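A minimal sketch of that sanity check, assuming virtualenv is available on the CentOS 7 host (the /tmp path and the final print are only illustrative):

    # create a throwaway python2 virtualenv and install the exact versions mentioned above
    virtualenv /tmp/greenpool-check
    /tmp/greenpool-check/bin/pip install greenlet==0.4.9 eventlet==0.18.4
    # if this import also fails here, the problem is in the packages themselves rather than in glance/neutron
    /tmp/greenpool-check/bin/python -c "from eventlet import greenpool; print(greenpool.GreenPool)"

If the import works in the clean virtualenv but still fails against the system site-packages, that points at the distro packages or something else on the system path shadowing the module.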
Clark From C-Albert.Braden at charter.com Thu Jun 17 17:44:45 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 17 Jun 2021 17:44:45 +0000 Subject: [EXTERNAL] Error creating VMs In-Reply-To: References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> <3599ddf2a37d4069b6fc1e3bb9d9efa1@ncwmexgp009.CORP.CHARTERCOM.com> <51ccfe2090bb444380dd3c09ed6e0de5@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: <598a0aa92ca243e4bb40f1576ade3788@ncwmexgp009.CORP.CHARTERCOM.com> I see “no valid host” at 10:48:39 in the conductor log: 2021-06-17T10:48:39.65107847Z stdout F nova.exception.NoValidHost: No valid host was found. In the scheduler log at 10:37:43 we see the scheduler starting and the RMQ error, followed by the scheduling failure at 10:48:39. It looks like the scheduler can’t connect to RMQ. 2021-06-17T10:37:43.038362995Z stdout F 2021-06-17 10:37:43.038 1 INFO nova.service [-] Starting scheduler node (version 21.2.1) 2021-06-17T10:37:43.08082925Z stdout F 2021-06-17 10:37:43.079 1 ERROR oslo.messaging._drivers.impl_rabbit [req-775b2b25-b9ef-4642-a6fd-7574eab7cc37 - - - - -] Connection failed: failed to resolve broker hostname (retrying in 0 seconds): OSError: failed to resolve broker hostname 2021-06-17T10:48:39.197318418Z stdout F 2021-06-17 10:48:39.196 1 INFO nova.scheduler.manager [req-b30bb3d6-0ab1-4dbb-aed9-1d9cece3eac7 d9f7048c1cd947cfa8ecef128a6cee89 e8813293073545f99658adbec2f80c1d - default default] Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up. From: open infra Sent: Thursday, June 17, 2021 12:38 PM To: Braden, Albert Cc: openstack-discuss Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Another instance created at 2021-06-17T16:10:06Z, and it also failed. I can't see rabbitmq errors around that timestamp. scheduler log http://paste.openstack.org/show/806737/ conducter log http://paste.openstack.org/show/806738/ At 2021-06-17 15:52:38.810, system was able to connect to AMQP http://paste.openstack.org/show/806739/ controller-0:/var/log/pods$ sudo rabbitmqctl list_queues Password: Listing queues ... sysinv.ceph_manager.192.168.204.1 0 sysinv.ceph_manager 0 barbican.workers 0 sysinv.fpga_agent_manager.controller-0 0 barbican.workers.barbican.queue 0 sysinv.conductor_manager 0 sysinv.agent_manager_fanout_fed76e414eb04da084ab35a1c27e1bf1 0 sysinv.agent_manager 0 sysinv.ceph_manager_fanout_d585fb522f46431da60741573a7f8575 0 notifications.info 0 sysinv.conductor_manager_fanout_cba8926fa47f4780a9e17f3d9b889500 0 sysinv.fpga_agent_manager 0 sysinv-keystone-listener-workers 0 sysinv.fpga_agent_manager_fanout_f00a01bfd3f54a64860f8d7454e9a78e 0 sysinv.agent_manager.controller-0 0 barbican.workers_fanout_2c97c319faa943e88eaed4c101e530c7 0 sysinv.conductor_manager.controller-0 0 On Thu, Jun 17, 2021 at 7:51 PM Braden, Albert > wrote: It looks like RMQ is up but services can’t connect to it. What do you see in the RMQ web interface? 
This might be a clue: ERROR oslo.messaging._drivers.impl_rabbit [req-b30bb3d6-0ab1-4dbb-aed9-1d9cece3eac7 d9f7048c1cd947cfa8ecef128a6cee89 e8813293073545f99658adbec2f80c1d - default default] Connection failed: failed to resolve broker hostname (retrying in 0 seconds): OSError: failed to resolve broker hostname Check for a typo in your config that points services to an incorrect RMQ hostname, or a networking issue that prevents them from connecting. From: open infra > Sent: Thursday, June 17, 2021 10:06 AM To: Braden, Albert > Cc: openstack-discuss > Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. controller-0:/var/log/pods$ sudo rabbitmqctl cluster_status Password: Cluster status of node rabbit at localhost ... [{nodes,[{disc,[rabbit at localhost]}]}, {running_nodes,[rabbit at localhost]}, {cluster_name,<<"rabbit at controller-0">>}, {partitions,[]}, {alarms,[{rabbit at localhost,[]}]}] On Thu, Jun 17, 2021 at 7:32 PM Braden, Albert > wrote: It looks like your RMQ is broken. What do you get from “rabbitmqctl cluster_status”? From: open infra > Sent: Thursday, June 17, 2021 9:58 AM To: Braden, Albert > Cc: openstack-discuss > Subject: Re: [EXTERNAL] Error creating VMs CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Hi Albert, Sorry for the inconvenience. Please note that I have recreated both the data network at starlingx (physical network of openstack) and the network of openstack. But I still have the same issue. Please find scheduler and conductor logs. # Scheduler Logs E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at garloff.de Thu Jun 17 17:54:40 2021 From: openstack at garloff.de (Kurt Garloff) Date: Thu, 17 Jun 2021 19:54:40 +0200 Subject: [nova] SCS standardized flavor naming Message-ID: <5730c898-d979-f95c-5905-425dddab5e65@garloff.de> Hi, we (SCS) are working on defining a fully open source cloud and container stack as part of the Gaia-X[1] project. The intention is to provide a common well-standardized way to deploy, manage, configure, and operate the needed software. The vision is to have a network of federated clouds that can be used as one, which requires IAM federation and a high level of compatibility and uniformity. Our project is called Sovereign Cloud Stack (SCS)[2]. Obviously, we are using existing open source projects from the OIF, the CNCF and others and are seeking alignment with these communities. Some experts well-known in the OpenStack universe are participating in our effort. On the OpenStack side, we are using OSISM[3] which leverages kolla-ansible. 
We would like to seek your input and feedback into our attempt of defining a standardized naming scheme for flavors and a list of standard flavors available in all clouds that deliver SCS-compliant IaaS. Find the draft proposal at https://github.com/SovereignCloudStack/Operational-Docs/blob/main/flavor-naming-draft.MD We prefer feedback as github issues and/or PRs. Knowing that the OpenStack community prefers gerrit, we'll of course also incorporate any comment we get via this mailing list into our thinking. We hope you can accept us pasting content from mails into github issues, so we create a track record of the taken decisions. (Please indicate if this is not OK for you and we'll refrain from doing so.) Before you ask why we believe the proposal is useful: We are perfectly aware that it is possible to discover flavor properties; however, in many automation recipes (playbooks, terraform vars etc) we find flavor names encoded and it is one pain point having to adapt them on every cloud that you use. So we want to have some uniformity on this across SCS clouds. (With similar reasoning, expect a image naming and image metadata proposal next.) We are looking for feedback in two directions: (1) If you are aware of similar efforts to standardize flavor naming,     please point us to it, so we can seek contact and align. (2) Please look at the proposal itself. When looking into the details     how we specify how to optionally(!) encode a number of details into     flavor names, please keep in mind that this is indeed optional. We     expect most flavor names to be as simple as SCS-4V:8:20 or even     SCS-4V:8, even though complicated SCS-8C:32:2x200S-bms-i2-GNa:64-ib     [4] is possible for clouds that provide that level of differentiation     and want/need to expose this via the flavor name. Of course, input from existing providers of OpenStack infrastructure is particularly valuable. Feedback welcome! [1] https://gaia-x.eu/ [2] https://scs.community/ [3] https://osism.de/ [4] In case you wonder: 8 dedicated cores, 32GiB RAM, 2x200GB SSD disks     on bare metal sys, intel Cascade Lake, nVidia GPU with 64 Ampere SMs     and InfiniBand. PS: Cc'ing some folks who have contributed to this. -- Kurt Garloff CTO Sovereign Cloud Stack OSB Alliance e.V. From Albert.Shih at obspm.fr Thu Jun 17 18:16:50 2021 From: Albert.Shih at obspm.fr (Albert Shih) Date: Thu, 17 Jun 2021 20:16:50 +0200 Subject: [victoria][cinder ?] Dell Unity + Iscsi In-Reply-To: <20210607085409.5heiwmvt67nv4kwa@localhost> References: <20210607085409.5heiwmvt67nv4kwa@localhost> Message-ID: Le 07/06/2021 à 10:54:09+0200, Gorka Eguileor a écrit > > Hi everyone > > > > > > I've a small openstack configuration with 4 computes nodes, a Dell Unity 480F for the storage. > > > > I'm using cinder with iscsi. > > > > Everything work when I create a instance. But some instance after few time > > are not reponsive. When I check on the hypervisor I can see > > > > [888240.310461] sd 14:0:0:2: [sdb] tag#120 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE > > [888240.310493] sd 14:0:0:2: [sdb] tag#120 Sense Key : Illegal Request [current] > > [888240.310502] sd 14:0:0:2: [sdb] tag#120 Add. 
Sense: Logical unit not supported > > [888240.310510] sd 14:0:0:2: [sdb] tag#120 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 > > [888240.310519] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 > > [888240.311045] sd 14:0:0:2: [sdb] tag#121 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE > > [888240.311050] sd 14:0:0:2: [sdb] tag#121 Sense Key : Illegal Request [current] > > [888240.311065] sd 14:0:0:2: [sdb] tag#121 Add. Sense: Logical unit not supported > > [888240.311070] sd 14:0:0:2: [sdb] tag#121 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 > > [888240.311074] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0 > > [888240.342482] sd 14:0:0:2: [sdb] tag#70 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE > > [888240.342490] sd 14:0:0:2: [sdb] tag#70 Sense Key : Illegal Request [current] > > [888240.342496] sd 14:0:0:2: [sdb] tag#70 Add. Sense: Logical unit not supported > > > > I check on the hypervisor, no error at all on the ethernet interface. > > > > I check on the switch, no error at all on the interface on the switch. > > > > No sure but it's seem the problem appear more often when the instance are > > doing nothing during some time. > > > > Hi, > > You should first check if the volume is still exported and mapped to the > host in Unity's web console. > > If it is still properly mapped, you should configure mutlipathing to > make it more resilient. > > If it isn't you probably should confirm that all nodes have different > initiator name (/etc/iscsi/initiatorname.iscsi) and different hostname > (if configured in nova's conf file under "host" or at the Linux level if > not). Yes...it's this f***ing install who put same initiatorname for two of my compute who make a big mess. Thanks a lot for your help Regards. -- Albert SHIH Observatoire de Paris xmpp: jas at obspm.fr Heure local/Local time: Thu Jun 17 08:15:39 PM CEST 2021 From peiyong.zhang at salesforce.com Thu Jun 17 18:58:41 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Thu, 17 Jun 2021 11:58:41 -0700 Subject: ImportError: cannot import name greenpool In-Reply-To: References: Message-ID: Clark, I adjusted the version of greenlet/eventlet as required by other modules. here is the output: ImportError: No module named dnskeybase sh-4.2# yum list installed | grep "greenlet\|eventlet\|gevent" python2-eventlet.noarch 0.25.1-1.el7 @local_openstack-tnrp python2-gevent.x86_64 1.1.2-2.el7 @local_openstack-tnrp python2-greenlet.x86_64 0.4.12-1.el7 @local_openstack-tnrp sh-4.2# python2 Python 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from eventlet import greenpool Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.7/site-packages/eventlet/__init__.py", line 10, in from eventlet import convenience File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line 7, in from eventlet.green import socket File "/usr/lib/python2.7/site-packages/eventlet/green/socket.py", line 21, in from eventlet.support import greendns File "/usr/lib/python2.7/site-packages/eventlet/support/greendns.py", line 67, in setattr(dns.rdtypes, pkg, import_patched('dns.rdtypes.' 
+ pkg)) File "/usr/lib/python2.7/site-packages/eventlet/support/greendns.py", line 59, in import_patched return patcher.import_patched(module_name, **modules) File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line 126, in import_patched *additional_modules + tuple(kw_additional_modules.items())) File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line 100, in inject module = __import__(module_name, {}, {}, module_name.split('.')[:-1]) ImportError: No module named dnskeybase >>> On Thu, Jun 17, 2021 at 10:44 AM Clark Boylan wrote: > On Thu, Jun 17, 2021, at 10:03 AM, Pete Zhang wrote: > > Not sure if my previous email went through or not. Just resend it. > > > > We hit this error during "glance-manage db_sync": > > *ImportError: cannot import name greenpool.* > > Any idea what the root cause is and how to fix it? > > We have the following rpms installed (thought related). > > > > `python2-greenlet-0.4.9-1.el7.x86_64.rpm` > > `python2-eventlet-0.18.4-2.el7.noarch.rpm` > > `python2-gevent-1.1.2-2.el7.x86_64.rpm` > > > > > > snip > > > Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: > > from eventlet import greenpool > > > > Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: > > ImportError: cannot import name greenpool > > I'm not familiar with the CentOS7 packaging, but as a sanity check I ran > `pip install greenlet==0.4.9 eventlet==0.18.4` in a python2 virtualenv then > in a python2 interpreter `from eventlet import greenpool` runs > successfully. I would try running this import by hand on your system to see > if you can get any more information. Could be a packaging issue or > potentially some sort of name collision between script names? > > Clark > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Jun 17 19:18:53 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 17 Jun 2021 12:18:53 -0700 Subject: ImportError: cannot import name greenpool In-Reply-To: References: Message-ID: <2fd58158-e96c-4b21-a4e7-e219044bdb22@www.fastmail.com> On Thu, Jun 17, 2021, at 11:58 AM, Pete Zhang wrote: > Clark, > > I adjusted the version of greenlet/eventlet as required by other > modules. here is the output: > > ImportError: No module named dnskeybase > > sh-4.2# yum list installed | grep "greenlet\|eventlet\|gevent" > > python2-eventlet.noarch 0.25.1-1.el7 @local_openstack-tnrp > > python2-gevent.x86_64 1.1.2-2.el7 @local_openstack-tnrp > > python2-greenlet.x86_64 0.4.12-1.el7 @local_openstack-tnrp > > sh-4.2# python2 > > Python 2.7.5 (default, Oct 30 2018, 23:45:53) > > [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> from eventlet import greenpool > > Traceback (most recent call last): > > File "", line 1, in > > File "/usr/lib/python2.7/site-packages/eventlet/__init__.py", line > 10, in > > from eventlet import convenience > > File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line > 7, in > > from eventlet.green import socket > > File "/usr/lib/python2.7/site-packages/eventlet/green/socket.py", > line 21, in > > from eventlet.support import greendns > > File "/usr/lib/python2.7/site-packages/eventlet/support/greendns.py", > line 67, in > > setattr(dns.rdtypes, pkg, import_patched('dns.rdtypes.' 
+ pkg)) > > File "/usr/lib/python2.7/site-packages/eventlet/support/greendns.py", > line 59, in import_patched > > return patcher.import_patched(module_name, **modules) > > File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line > 126, in import_patched > > *additional_modules + tuple(kw_additional_modules.items())) > > File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line > 100, in inject > > module = __import__(module_name, {}, {}, module_name.split('.')[:-1]) > > ImportError: No module named dnskeybase > > >>> The internet indicates [0] this is a problem with your dnspython installation. That post uses pip, but you are using distro packages so you may need to map things a bit to do further debugging. Hopefully, that helps get things sorted though. [0] https://stackoverflow.com/questions/55152733/eventlet-importerror-no-module-named-dnskeybase > > > On Thu, Jun 17, 2021 at 10:44 AM Clark Boylan wrote: > > On Thu, Jun 17, 2021, at 10:03 AM, Pete Zhang wrote: > > > Not sure if my previous email went through or not. Just resend it. > > > > > > We hit this error during "glance-manage db_sync": > > > *ImportError: cannot import name greenpool.* > > > Any idea what the root cause is and how to fix it? > > > We have the following rpms installed (thought related). > > > > > > `python2-greenlet-0.4.9-1.el7.x86_64.rpm` > > > `python2-eventlet-0.18.4-2.el7.noarch.rpm` > > > `python2-gevent-1.1.2-2.el7.x86_64.rpm` > > > > > > > > > > snip > > > > > Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: > > > from eventlet import greenpool > > > > > > Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: > > > ImportError: cannot import name greenpool > > > > I'm not familiar with the CentOS7 packaging, but as a sanity check I ran `pip install greenlet==0.4.9 eventlet==0.18.4` in a python2 virtualenv then in a python2 interpreter `from eventlet import greenpool` runs successfully. I would try running this import by hand on your system to see if you can get any more information. Could be a packaging issue or potentially some sort of name collision between script names? > > > > Clark From peiyong.zhang at salesforce.com Thu Jun 17 20:04:08 2021 From: peiyong.zhang at salesforce.com (Pete Zhang) Date: Thu, 17 Jun 2021 13:04:08 -0700 Subject: ImportError: cannot import name greenpool In-Reply-To: <2fd58158-e96c-4b21-a4e7-e219044bdb22@www.fastmail.com> References: <2fd58158-e96c-4b21-a4e7-e219044bdb22@www.fastmail.com> Message-ID: >From the [0], it appears dnspython is NOT installed properly. I installed python-dnspython (not sure if its the same as dnspython) and still don't see *dns* or "base" under /lib/python2.7/site-packages/dns/rdtypes/ANY as [0] mentioned. sh-4.2# pwd /lib/python2.7/site-packages/dns/rdtypes/ANY sh-4.2# ls -ail *dns* ls: cannot access *dns*: No such file or directory sh-4.2# ls -ail *base* ls: cannot access *base*: No such file or directory sh-4.2# ls -ail __init* 540159 -rw-r--r--. 1 root root 1169 Jun 13 2015 __init__.py 540248 -rw-r--r--. 2 root root 602 Aug 3 2017 __init__.pyc 540248 -rw-r--r--. 
2 root root 602 Aug 3 2017 __init__.pyo sh-4.2# yum list | grep dnspython python-dnspython.noarch 1:1.10.0-1 @ORB-extras sfdc-python27-dnspython.noarch 1.15.0-2019.10.311854.7.el7 strata_sfdc-python sfdc-python35-dnspython.noarch 1.15.0-2019.04.081624.7.el7 strata_sfdc-python sfdc-python36-dnspython.noarch 1.15.0-2021.05.122008.34.el7 sh-4.2# yum list installed | grep dnspython python-dnspython.noarch 1:1.10.0-1 @ORB-extras sh-4.2# On Thu, Jun 17, 2021 at 12:19 PM Clark Boylan wrote: > On Thu, Jun 17, 2021, at 11:58 AM, Pete Zhang wrote: > > Clark, > > > > I adjusted the version of greenlet/eventlet as required by other > > modules. here is the output: > > > > ImportError: No module named dnskeybase > > > > sh-4.2# yum list installed | grep "greenlet\|eventlet\|gevent" > > > > python2-eventlet.noarch 0.25.1-1.el7 > @local_openstack-tnrp > > > > python2-gevent.x86_64 1.1.2-2.el7 > @local_openstack-tnrp > > > > python2-greenlet.x86_64 0.4.12-1.el7 > @local_openstack-tnrp > > > > sh-4.2# python2 > > > > Python 2.7.5 (default, Oct 30 2018, 23:45:53) > > > > [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2 > > > > Type "help", "copyright", "credits" or "license" for more information. > > > > >>> from eventlet import greenpool > > > > Traceback (most recent call last): > > > > File "", line 1, in > > > > File "/usr/lib/python2.7/site-packages/eventlet/__init__.py", line > > 10, in > > > > from eventlet import convenience > > > > File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line > > 7, in > > > > from eventlet.green import socket > > > > File "/usr/lib/python2.7/site-packages/eventlet/green/socket.py", > > line 21, in > > > > from eventlet.support import greendns > > > > File "/usr/lib/python2.7/site-packages/eventlet/support/greendns.py", > > line 67, in > > > > setattr(dns.rdtypes, pkg, import_patched('dns.rdtypes.' + pkg)) > > > > File "/usr/lib/python2.7/site-packages/eventlet/support/greendns.py", > > line 59, in import_patched > > > > return patcher.import_patched(module_name, **modules) > > > > File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line > > 126, in import_patched > > > > *additional_modules + tuple(kw_additional_modules.items())) > > > > File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line > > 100, in inject > > > > module = __import__(module_name, {}, {}, module_name.split('.')[:-1]) > > > > ImportError: No module named dnskeybase > > > > >>> > > The internet indicates [0] this is a problem with your dnspython > installation. That post uses pip, but you are using distro packages so you > may need to map things a bit to do further debugging. Hopefully, that helps > get things sorted though. > > [0] > https://stackoverflow.com/questions/55152733/eventlet-importerror-no-module-named-dnskeybase > > > > > > > On Thu, Jun 17, 2021 at 10:44 AM Clark Boylan > wrote: > > > On Thu, Jun 17, 2021, at 10:03 AM, Pete Zhang wrote: > > > > Not sure if my previous email went through or not. Just resend it. > > > > > > > > We hit this error during "glance-manage db_sync": > > > > *ImportError: cannot import name greenpool.* > > > > Any idea what the root cause is and how to fix it? > > > > We have the following rpms installed (thought related). 
> > > > > > > > `python2-greenlet-0.4.9-1.el7.x86_64.rpm` > > > > `python2-eventlet-0.18.4-2.el7.noarch.rpm` > > > > `python2-gevent-1.1.2-2.el7.x86_64.rpm` > > > > > > > > > > > > > > snip > > > > > > > Notice: > /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: > > > > from eventlet import greenpool > > > > > > > > Notice: > /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: > > > > ImportError: cannot import name greenpool > > > > > > I'm not familiar with the CentOS7 packaging, but as a sanity check I > ran `pip install greenlet==0.4.9 eventlet==0.18.4` in a python2 virtualenv > then in a python2 interpreter `from eventlet import greenpool` runs > successfully. I would try running this import by hand on your system to see > if you can get any more information. Could be a packaging issue or > potentially some sort of name collision between script names? > > > > > > Clark > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Jun 17 22:40:30 2021 From: zigo at debian.org (Thomas Goirand) Date: Fri, 18 Jun 2021 00:40:30 +0200 Subject: [nova] SCS standardized flavor naming In-Reply-To: <5730c898-d979-f95c-5905-425dddab5e65@garloff.de> References: <5730c898-d979-f95c-5905-425dddab5e65@garloff.de> Message-ID: On 6/17/21 7:54 PM, Kurt Garloff wrote: > Hi, > > we (SCS) are working on defining a fully open source cloud and container > stack as part of the Gaia-X[1] project. The intention is to provide a > common well-standardized way to deploy, manage, configure, and operate > the needed software. The vision is to have a network of federated clouds > that can be used as one, which requires IAM federation and a high level > of compatibility and uniformity. Our project is called Sovereign Cloud > Stack (SCS)[2]. > Obviously, we are using existing open source projects from the OIF, the > CNCF and others and are seeking alignment with these communities. Some > experts well-known in the OpenStack universe are participating in our > effort. On the OpenStack side, we are using OSISM[3] which leverages > kolla-ansible. > > We would like to seek your input and feedback into our attempt of > defining a standardized naming scheme for flavors and a list of > standard flavors available in all clouds that deliver SCS-compliant IaaS. > > Find the draft proposal at > https://github.com/SovereignCloudStack/Operational-Docs/blob/main/flavor-naming-draft.MD > > We prefer feedback as github issues and/or PRs. > Knowing that the OpenStack community prefers gerrit, we'll of course > also incorporate any comment we get via this mailing list into our > thinking. We hope you can accept us pasting content from mails into > github issues, so we create a track record of the taken decisions. > (Please indicate if this is not OK for you and we'll refrain from doing > so.) > > Before you ask why we believe the proposal is useful: > We are perfectly aware that it is possible to discover flavor properties; > however, in many automation recipes (playbooks, terraform vars etc) we > find flavor names encoded and it is one pain point having to adapt them > on every cloud that you use. So we want to have some uniformity on this > across SCS clouds. (With similar reasoning, expect a image naming and > image metadata proposal next.) > > We are looking for feedback in two directions: > > (1) If you are aware of similar efforts to standardize flavor naming, >     please point us to it, so we can seek contact and align. 
> > (2) Please look at the proposal itself. When looking into the details >     how we specify how to optionally(!) encode a number of details into >     flavor names, please keep in mind that this is indeed optional. We >     expect most flavor names to be as simple as SCS-4V:8:20 or even >     SCS-4V:8, even though complicated SCS-8C:32:2x200S-bms-i2-GNa:64-ib >     [4] is possible for clouds that provide that level of differentiation >     and want/need to expose this via the flavor name. > > Of course, input from existing providers of OpenStack infrastructure is > particularly valuable. > > Feedback welcome! > > [1] https://gaia-x.eu/ > [2] https://scs.community/ > [3] https://osism.de/ > [4] In case you wonder: 8 dedicated cores, 32GiB RAM, 2x200GB SSD disks >     on bare metal sys, intel Cascade Lake, nVidia GPU with 64 Ampere SMs >     and InfiniBand. > > PS: Cc'ing some folks who have contributed to this. Hi, While I do like the idea of a standard for flavor naming, I don't like at all what I've seen as your example. IMO, it's best if more explicit. I wouldn't be able to read one of your flavor names, and immediately know what it is made of. To the contrary... We're using naming scheme like this: nvt4-a8-ram24-disk50-perf2 This means: - nvt4: nvidia T4 GPU - a8: AMD VCPU 8 (we also have i4 for example, for Intel) - ram24: 24 GB of RAM - disk50: 50 GB of local system disk - perf2: level 2 of IOps / IO bandwidth Having explicit and full "ram", "disk" and "perf" in the name helps a lot to understand. I think it's much nicer than then cryptic: "SCS-16T:64:200s-GNa:64-ib" which I would never be able to decode without a document on the side. I understand that you're attempting to make the flavor name smaller, but IMO that's a bad idea. I don't see any problem with an explicit and longer flavor name. Cheers, Thomas Goirand (zigo) From yangyi01 at inspur.com Fri Jun 18 01:16:14 2021 From: yangyi01 at inspur.com (=?utf-8?B?WWkgWWFuZyAo5p2o54eaKS3kupHmnI3liqHpm4blm6I=?=) Date: Fri, 18 Jun 2021 01:16:14 +0000 Subject: =?utf-8?B?562U5aSNOiBbbmV1dHJvbl0gRHJpdmVycyBtZWV0aW5nIGFnZW5kYSBmb3Ig?= =?utf-8?Q?18.06.2021?= In-Reply-To: <5345361.cRlJu7B4Gb@p1> References: <5345361.cRlJu7B4Gb@p1> Message-ID: <76fa6f8f35b84ac7b2a30ddde031413c@inspur.com> Anybody can help confirm this meeting is at 14:00 UTC Fri, Beijing Time 22:00 UTC+8 Fri? -----邮件原件----- 发件人: Slawek Kaplonski [mailto:skaplons at redhat.com] 发送时间: 2021年6月17日 21:02 收件人: miguel at mlavalle.com; amotoki at gmail.com; YAMAMOTO Takashi ; ralonsoh at redhat.com; Nate Johnson ; Haley, Brian 抄送: openstack-discuss at lists.openstack.org 主题: [neutron] Drivers meeting agenda for 18.06.2021 Hi, Agenda for tomorrow's drivers meeting is at [1]. We have one new RFE to discuss [2] - It's another approach to do OpenFlow based DVR :) We have also one new RFE [3] which I would like You to check and help triaging before we will discuss it in the drivers meeting. [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda [2] https://bugs.launchpad.net/neutron/+bug/1931953 [3] https://bugs.launchpad.net/neutron/+bug/1932154 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3600 bytes Desc: not available URL: From amotoki at gmail.com Fri Jun 18 01:43:40 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 18 Jun 2021 10:43:40 +0900 Subject: [neutron] Drivers meeting agenda for 18.06.2021 In-Reply-To: <76fa6f8f35b84ac7b2a30ddde031413c@inspur.com> References: <5345361.cRlJu7B4Gb@p1> <76fa6f8f35b84ac7b2a30ddde031413c@inspur.com> Message-ID: You can check the time in your local time by clicking the link of the meeting time (1400 UTC) in https://meetings.opendev.org/#Neutron_drivers_Meeting (the neutron drivers meeting info). The URL https://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=00&sec=0 says 2200 UTC at Beijing. Akihiro Motoki (irc: amotoki) On Fri, Jun 18, 2021 at 10:17 AM Yi Yang (杨燚)-云服务集团 wrote: > > Anybody can help confirm this meeting is at 14:00 UTC Fri, Beijing Time 22:00 UTC+8 Fri? > > -----邮件原件----- > 发件人: Slawek Kaplonski [mailto:skaplons at redhat.com] > 发送时间: 2021年6月17日 21:02 > 收件人: miguel at mlavalle.com; amotoki at gmail.com; YAMAMOTO Takashi ; ralonsoh at redhat.com; Nate Johnson ; Haley, Brian > 抄送: openstack-discuss at lists.openstack.org > 主题: [neutron] Drivers meeting agenda for 18.06.2021 > > Hi, > > Agenda for tomorrow's drivers meeting is at [1]. > We have one new RFE to discuss [2] - It's another approach to do OpenFlow based DVR :) We have also one new RFE [3] which I would like You to check and help triaging before we will discuss it in the drivers meeting. > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda > [2] https://bugs.launchpad.net/neutron/+bug/1931953 > [3] https://bugs.launchpad.net/neutron/+bug/1932154 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From haleyb.dev at gmail.com Fri Jun 18 02:16:08 2021 From: haleyb.dev at gmail.com (Brian Haley) Date: Thu, 17 Jun 2021 22:16:08 -0400 Subject: [neutron] Drivers meeting agenda for 18.06.2021 In-Reply-To: <5345361.cRlJu7B4Gb@p1> References: <5345361.cRlJu7B4Gb@p1> Message-ID: <4b08ecde-581d-2699-9569-b6734a33b033@gmail.com> Hi Slawek, I will not be able to attend the meeting tomorrow, have a conflicting appointment, notes on the two RFEs below. On 6/17/21 9:01 AM, Slawek Kaplonski wrote: > Hi, > > Agenda for tomorrow's drivers meeting is at [1]. > We have one new RFE to discuss [2] - It's another approach to do OpenFlow > based DVR :) > We have also one new RFE [3] which I would like You to check and help triaging > before we will discuss it in the drivers meeting. > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda > [2] https://bugs.launchpad.net/neutron/+bug/1931953 (Openflow-based DVR L3) As Liu mentioned in the bug, there was a spec/blueprint/almost-implementation before we decided to adopt OVN as the flow-based DVR option. While I can understand the desire to want to have less hops in ML2/OVS/DVR, we made the decision to invest in OVN going forward, so I don't think we should take on this work. > [3] https://bugs.launchpad.net/neutron/+bug/1932154 (Off-path SmartNIC Port Binding) From a high-level I'm fine with this, I'm sure Rodolfo will have more questions :) After reading the Nova spec Sean asked a good question - is there a requirement on the core OVN code to support this? I'll add that to the bug (done). 
-Brian From yangyi01 at inspur.com Fri Jun 18 02:18:17 2021 From: yangyi01 at inspur.com (=?utf-8?B?WWkgWWFuZyAo5p2o54eaKS3kupHmnI3liqHpm4blm6I=?=) Date: Fri, 18 Jun 2021 02:18:17 +0000 Subject: =?utf-8?B?562U5aSNOiBbbmV1dHJvbl0gRHJpdmVycyBtZWV0aW5nIGFnZW5kYSBmb3Ig?= =?utf-8?Q?18.06.2021?= In-Reply-To: References: <473bfa6f98a1bbab3a34c7c45fe9fdf0@sslemail.net> Message-ID: <191556d139f347ce8527fdb0ea90ec5a@inspur.com> Akihiro, got it, thank you so much. I also used timeanddate.com to check this. -----邮件原件----- 发件人: Akihiro Motoki [mailto:amotoki at gmail.com] 发送时间: 2021年6月18日 9:44 收件人: Yi Yang (杨燚)-云服务集团 抄送: skaplons at redhat.com; miguel at mlavalle.com; yamamoto at midokura.com; ralonsoh at redhat.com; njohnson at redhat.com; bhaley at redhat.com; openstack-discuss at lists.openstack.org 主题: Re: [neutron] Drivers meeting agenda for 18.06.2021 You can check the time in your local time by clicking the link of the meeting time (1400 UTC) in https://meetings.opendev.org/#Neutron_drivers_Meeting (the neutron drivers meeting info). The URL https://www.timeanddate.com/worldclock/fixedtime.html?hour=14&min=00&sec=0 says 2200 UTC at Beijing. Akihiro Motoki (irc: amotoki) On Fri, Jun 18, 2021 at 10:17 AM Yi Yang (杨燚)-云服务集团 wrote: > > Anybody can help confirm this meeting is at 14:00 UTC Fri, Beijing Time 22:00 UTC+8 Fri? > > -----邮件原件----- > 发件人: Slawek Kaplonski [mailto:skaplons at redhat.com] > 发送时间: 2021年6月17日 21:02 > 收件人: miguel at mlavalle.com; amotoki at gmail.com; YAMAMOTO Takashi > ; ralonsoh at redhat.com; Nate Johnson > ; Haley, Brian > 抄送: openstack-discuss at lists.openstack.org > 主题: [neutron] Drivers meeting agenda for 18.06.2021 > > Hi, > > Agenda for tomorrow's drivers meeting is at [1]. > We have one new RFE to discuss [2] - It's another approach to do OpenFlow based DVR :) We have also one new RFE [3] which I would like You to check and help triaging before we will discuss it in the drivers meeting. > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda > [2] https://bugs.launchpad.net/neutron/+bug/1931953 > [3] https://bugs.launchpad.net/neutron/+bug/1932154 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3600 bytes Desc: not available URL: From yangyi01 at inspur.com Fri Jun 18 03:27:38 2021 From: yangyi01 at inspur.com (=?utf-8?B?WWkgWWFuZyAo5p2o54eaKS3kupHmnI3liqHpm4blm6I=?=) Date: Fri, 18 Jun 2021 03:27:38 +0000 Subject: =?utf-8?B?562U5aSNOiBbbGlzdHMub3BlbnN0YWNrLm9yZ+S7o+WPkV1SZTogW25ldXRy?= =?utf-8?Q?on]_Drivers_meeting_agenda_for_18.06.2021?= In-Reply-To: <4b08ecde-581d-2699-9569-b6734a33b033@gmail.com> References: <766165e72dabae55e4ca7642efa01b69@sslemail.net> <4b08ecde-581d-2699-9569-b6734a33b033@gmail.com> Message-ID: <453fd330a1274a8a9d718ba5be767df6@inspur.com> Hi, Brian Is OVN only one option for Neutron now? The old blueprint is obsoleted because nobody will do this, I don't think it has been almost done. -----邮件原件----- 发件人: Brian Haley [mailto:haleyb.dev at gmail.com] 发送时间: 2021年6月18日 10:16 收件人: Slawek Kaplonski ; miguel at mlavalle.com; amotoki at gmail.com; YAMAMOTO Takashi ; ralonsoh at redhat.com; Nate Johnson 抄送: openstack-discuss at lists.openstack.org 主题: [lists.openstack.org代发]Re: [neutron] Drivers meeting agenda for 18.06.2021 Hi Slawek, I will not be able to attend the meeting tomorrow, have a conflicting appointment, notes on the two RFEs below. 
On 6/17/21 9:01 AM, Slawek Kaplonski wrote: > Hi, > > Agenda for tomorrow's drivers meeting is at [1]. > We have one new RFE to discuss [2] - It's another approach to do > OpenFlow based DVR :) We have also one new RFE [3] which I would like > You to check and help triaging before we will discuss it in the > drivers meeting. > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda > [2] https://bugs.launchpad.net/neutron/+bug/1931953 (Openflow-based DVR L3) As Liu mentioned in the bug, there was a spec/blueprint/almost-implementation before we decided to adopt OVN as the flow-based DVR option. While I can understand the desire to want to have less hops in ML2/OVS/DVR, we made the decision to invest in OVN going forward, so I don't think we should take on this work. > [3] https://bugs.launchpad.net/neutron/+bug/1932154 (Off-path SmartNIC Port Binding) From a high-level I'm fine with this, I'm sure Rodolfo will have more questions :) After reading the Nova spec Sean asked a good question - is there a requirement on the core OVN code to support this? I'll add that to the bug (done). -Brian -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3600 bytes Desc: not available URL: From haleyb.dev at gmail.com Fri Jun 18 03:56:00 2021 From: haleyb.dev at gmail.com (Brian Haley) Date: Thu, 17 Jun 2021 23:56:00 -0400 Subject: =?UTF-8?B?UmU6IOetlOWkjTogW2xpc3RzLm9wZW5zdGFjay5vcmfku6Plj5FdUmU6?= =?UTF-8?Q?_=5bneutron=5d_Drivers_meeting_agenda_for_18=2e06=2e2021?= In-Reply-To: <453fd330a1274a8a9d718ba5be767df6@inspur.com> References: <766165e72dabae55e4ca7642efa01b69@sslemail.net> <4b08ecde-581d-2699-9569-b6734a33b033@gmail.com> <453fd330a1274a8a9d718ba5be767df6@inspur.com> Message-ID: Hi Yi, On 6/17/21 11:27 PM, Yi Yang (杨燚)-云服务集团 wrote: > Hi, Brian > > Is OVN only one option for Neutron now? The old blueprint is obsoleted because nobody will do this, I don't think it has been almost done. I would encourage you to discuss your RFE at the drivers meeting tomorrow, I was just stating my opinion since I will not be there. Part of this is based on the previous spec that was implementing this, https://review.opendev.org/c/openstack/neutron-specs/+/629761 - which was abandoned with this note: "With our continues development of DVR – Openflow solution and the evolution of OVN, we have done some analysis. The feature set in OVN is richer than the current DVR solution & is gaining momentum in the community. So we are intending to transition our DVR efforts to OVN and stop further development in DVR. We believe this will better help community and networking project. Intel will be glad to help customers transition from DVR to OVN." There were a number of patches that were pretty far along when this was done according to what's listed on https://blueprints.launchpad.net/neutron/+spec/openflow-based-dvr -Brian > -----邮件原件----- > 发件人: Brian Haley [mailto:haleyb.dev at gmail.com] > 发送时间: 2021年6月18日 10:16 > 收件人: Slawek Kaplonski ; miguel at mlavalle.com; amotoki at gmail.com; YAMAMOTO Takashi ; ralonsoh at redhat.com; Nate Johnson > 抄送: openstack-discuss at lists.openstack.org > 主题: [lists.openstack.org代发]Re: [neutron] Drivers meeting agenda for 18.06.2021 > > Hi Slawek, > > I will not be able to attend the meeting tomorrow, have a conflicting appointment, notes on the two RFEs below. > > On 6/17/21 9:01 AM, Slawek Kaplonski wrote: >> Hi, >> >> Agenda for tomorrow's drivers meeting is at [1]. 
>> We have one new RFE to discuss [2] - It's another approach to do >> OpenFlow based DVR :) We have also one new RFE [3] which I would like >> You to check and help triaging before we will discuss it in the >> drivers meeting. >> >> [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda >> [2] https://bugs.launchpad.net/neutron/+bug/1931953 > > (Openflow-based DVR L3) > > As Liu mentioned in the bug, there was a spec/blueprint/almost-implementation before we decided to adopt OVN as the flow-based DVR option. While I can understand the desire to want to have less hops in ML2/OVS/DVR, we made the decision to invest in OVN going forward, so I don't think we should take on this work. > >> [3] https://bugs.launchpad.net/neutron/+bug/1932154 > > (Off-path SmartNIC Port Binding) > > From a high-level I'm fine with this, I'm sure Rodolfo will have more questions :) After reading the Nova spec Sean asked a good question - is there a requirement on the core OVN code to support this? I'll add that to the bug (done). > > -Brian > From yangyi01 at inspur.com Fri Jun 18 05:52:24 2021 From: yangyi01 at inspur.com (=?utf-8?B?WWkgWWFuZyAo5p2o54eaKS3kupHmnI3liqHpm4blm6I=?=) Date: Fri, 18 Jun 2021 05:52:24 +0000 Subject: =?utf-8?B?562U5aSNOiDnrZTlpI06IFtsaXN0cy5vcGVuc3RhY2sub3Jn5Luj5Y+RXVJl?= =?utf-8?B?OiBbbmV1dHJvbl0gRHJpdmVycyBtZWV0aW5nIGFnZW5kYSBmb3IgMTguMDYu?= =?utf-8?Q?2021?= In-Reply-To: References: <766165e72dabae55e4ca7642efa01b69@sslemail.net> <4b08ecde-581d-2699-9569-b6734a33b033@gmail.com> <453fd330a1274a8a9d718ba5be767df6@inspur.com> Message-ID: <264d952cf8a046a3af35caf7bad69414@inspur.com> Thanks Brian, this is comments from my previous colleague at Intel, I know that, it isn't Openstack community's decision, per my understanding, it is acceptable if someone can continue to do that, that is to say, this is Intel Openstack team's decision, not openstack community's conclusion. "With our continues development of DVR – Openflow solution and the evolution of OVN, we have done some analysis. The feature set in OVN is richer than the current DVR solution & is gaining momentum in the community. So we are intending to transition our DVR efforts to OVN and stop further development in DVR. We believe this will better help community and networking project. Intel will be glad to help customers transition from DVR to OVN." OVN is an option, but I don't think it is best option unless OVN team can use etcd to replace OVSDB, OVN team is struggling to fix its chicken rib by using DDlog, so far I don't see that Openstack users has big passion to switch to OVN. -----邮件原件----- 发件人: Brian Haley [mailto:haleyb.dev at gmail.com] 发送时间: 2021年6月18日 11:56 收件人: Yi Yang (杨燚)-云服务集团 ; skaplons at redhat.com; miguel at mlavalle.com; amotoki at gmail.com; yamamoto at midokura.com; ralonsoh at redhat.com; njohnson at redhat.com 抄送: openstack-discuss at lists.openstack.org 主题: Re: 答复: [lists.openstack.org代发]Re: [neutron] Drivers meeting agenda for 18.06.2021 Hi Yi, On 6/17/21 11:27 PM, Yi Yang (杨燚)-云服务集团 wrote: > Hi, Brian > > Is OVN only one option for Neutron now? The old blueprint is obsoleted because nobody will do this, I don't think it has been almost done. I would encourage you to discuss your RFE at the drivers meeting tomorrow, I was just stating my opinion since I will not be there. 
Part of this is based on the previous spec that was implementing this, https://review.opendev.org/c/openstack/neutron-specs/+/629761 - which was abandoned with this note: "With our continues development of DVR – Openflow solution and the evolution of OVN, we have done some analysis. The feature set in OVN is richer than the current DVR solution & is gaining momentum in the community. So we are intending to transition our DVR efforts to OVN and stop further development in DVR. We believe this will better help community and networking project. Intel will be glad to help customers transition from DVR to OVN." There were a number of patches that were pretty far along when this was done according to what's listed on https://blueprints.launchpad.net/neutron/+spec/openflow-based-dvr -Brian > -----邮件原件----- > 发件人: Brian Haley [mailto:haleyb.dev at gmail.com] > 发送时间: 2021年6月18日 10:16 > 收件人: Slawek Kaplonski ; miguel at mlavalle.com; > amotoki at gmail.com; YAMAMOTO Takashi ; > ralonsoh at redhat.com; Nate Johnson > 抄送: openstack-discuss at lists.openstack.org > 主题: [lists.openstack.org代发]Re: [neutron] Drivers meeting agenda for > 18.06.2021 > > Hi Slawek, > > I will not be able to attend the meeting tomorrow, have a conflicting appointment, notes on the two RFEs below. > > On 6/17/21 9:01 AM, Slawek Kaplonski wrote: >> Hi, >> >> Agenda for tomorrow's drivers meeting is at [1]. >> We have one new RFE to discuss [2] - It's another approach to do >> OpenFlow based DVR :) We have also one new RFE [3] which I would like >> You to check and help triaging before we will discuss it in the >> drivers meeting. >> >> [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda >> [2] https://bugs.launchpad.net/neutron/+bug/1931953 > > (Openflow-based DVR L3) > > As Liu mentioned in the bug, there was a spec/blueprint/almost-implementation before we decided to adopt OVN as the flow-based DVR option. While I can understand the desire to want to have less hops in ML2/OVS/DVR, we made the decision to invest in OVN going forward, so I don't think we should take on this work. > >> [3] https://bugs.launchpad.net/neutron/+bug/1932154 > > (Off-path SmartNIC Port Binding) > > From a high-level I'm fine with this, I'm sure Rodolfo will have more questions :) After reading the Nova spec Sean asked a good question - is there a requirement on the core OVN code to support this? I'll add that to the bug (done). > > -Brian > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3600 bytes Desc: not available URL: From ekuvaja at redhat.com Fri Jun 18 08:24:03 2021 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Fri, 18 Jun 2021 09:24:03 +0100 Subject: ImportError: cannot import name greenpool In-Reply-To: References: <2fd58158-e96c-4b21-a4e7-e219044bdb22@www.fastmail.com> Message-ID: On Thu, Jun 17, 2021 at 9:08 PM Pete Zhang wrote: > From the [0], it appears dnspython is NOT installed properly. > I installed python-dnspython (not sure if its the same as dnspython) and > still don't see *dns* or "base" under > /lib/python2.7/site-packages/dns/rdtypes/ANY as [0] mentioned. > > sh-4.2# pwd > > /lib/python2.7/site-packages/dns/rdtypes/ANY > > sh-4.2# ls -ail *dns* > > ls: cannot access *dns*: No such file or directory > > sh-4.2# ls -ail *base* > > ls: cannot access *base*: No such file or directory > > sh-4.2# ls -ail __init* > > 540159 -rw-r--r--. 1 root root 1169 Jun 13 2015 __init__.py > > 540248 -rw-r--r--. 
2 root root 602 Aug 3 2017 __init__.pyc > > 540248 -rw-r--r--. 2 root root 602 Aug 3 2017 __init__.pyo > > sh-4.2# yum list | grep dnspython > > python-dnspython.noarch 1:1.10.0-1 > @ORB-extras > > sfdc-python27-dnspython.noarch 1.15.0-2019.10.311854.7.el7 > strata_sfdc-python > > sfdc-python35-dnspython.noarch 1.15.0-2019.04.081624.7.el7 > strata_sfdc-python > > sfdc-python36-dnspython.noarch 1.15.0-2021.05.122008.34.el7 > > sh-4.2# yum list installed | grep dnspython > > python-dnspython.noarch 1:1.10.0-1 @ORB-extras > > > sh-4.2# > > Hi Pete, Out of curiosity, what version of Glance are you trying to run? Just wanting to make sure this is not PY27/PY3 thing as I recall us having issues with those dependencies at some point. So there might also be fixes merged for those issues if it's an older release, but obviously recently we have not supported PY27 anymore. - jokke > On Thu, Jun 17, 2021 at 12:19 PM Clark Boylan wrote: > >> On Thu, Jun 17, 2021, at 11:58 AM, Pete Zhang wrote: >> > Clark, >> > >> > I adjusted the version of greenlet/eventlet as required by other >> > modules. here is the output: >> > >> > ImportError: No module named dnskeybase >> > >> > sh-4.2# yum list installed | grep "greenlet\|eventlet\|gevent" >> > >> > python2-eventlet.noarch 0.25.1-1.el7 >> @local_openstack-tnrp >> > >> > python2-gevent.x86_64 1.1.2-2.el7 >> @local_openstack-tnrp >> > >> > python2-greenlet.x86_64 0.4.12-1.el7 >> @local_openstack-tnrp >> > >> > sh-4.2# python2 >> > >> > Python 2.7.5 (default, Oct 30 2018, 23:45:53) >> > >> > [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2 >> > >> > Type "help", "copyright", "credits" or "license" for more information. >> > >> > >>> from eventlet import greenpool >> > >> > Traceback (most recent call last): >> > >> > File "", line 1, in >> > >> > File "/usr/lib/python2.7/site-packages/eventlet/__init__.py", line >> > 10, in >> > >> > from eventlet import convenience >> > >> > File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line >> > 7, in >> > >> > from eventlet.green import socket >> > >> > File "/usr/lib/python2.7/site-packages/eventlet/green/socket.py", >> > line 21, in >> > >> > from eventlet.support import greendns >> > >> > File "/usr/lib/python2.7/site-packages/eventlet/support/greendns.py", >> > line 67, in >> > >> > setattr(dns.rdtypes, pkg, import_patched('dns.rdtypes.' + pkg)) >> > >> > File "/usr/lib/python2.7/site-packages/eventlet/support/greendns.py", >> > line 59, in import_patched >> > >> > return patcher.import_patched(module_name, **modules) >> > >> > File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line >> > 126, in import_patched >> > >> > *additional_modules + tuple(kw_additional_modules.items())) >> > >> > File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line >> > 100, in inject >> > >> > module = __import__(module_name, {}, {}, >> module_name.split('.')[:-1]) >> > >> > ImportError: No module named dnskeybase >> > >> > >>> >> >> The internet indicates [0] this is a problem with your dnspython >> installation. That post uses pip, but you are using distro packages so you >> may need to map things a bit to do further debugging. Hopefully, that helps >> get things sorted though. >> >> [0] >> https://stackoverflow.com/questions/55152733/eventlet-importerror-no-module-named-dnskeybase >> >> > >> > >> > On Thu, Jun 17, 2021 at 10:44 AM Clark Boylan >> wrote: >> > > On Thu, Jun 17, 2021, at 10:03 AM, Pete Zhang wrote: >> > > > Not sure if my previous email went through or not. Just resend it. 
>> > > > >> > > > We hit this error during "glance-manage db_sync": >> > > > *ImportError: cannot import name greenpool.* >> > > > Any idea what the root cause is and how to fix it? >> > > > We have the following rpms installed (thought related). >> > > > >> > > > `python2-greenlet-0.4.9-1.el7.x86_64.rpm` >> > > > `python2-eventlet-0.18.4-2.el7.noarch.rpm` >> > > > `python2-gevent-1.1.2-2.el7.x86_64.rpm` >> > > > >> > > > >> > > >> > > snip >> > > >> > > > Notice: >> /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: >> > > > from eventlet import greenpool >> > > > >> > > > Notice: >> /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: >> > > > ImportError: cannot import name greenpool >> > > >> > > I'm not familiar with the CentOS7 packaging, but as a sanity check I >> ran `pip install greenlet==0.4.9 eventlet==0.18.4` in a python2 virtualenv >> then in a python2 interpreter `from eventlet import greenpool` runs >> successfully. I would try running this import by hand on your system to see >> if you can get any more information. Could be a packaging issue or >> potentially some sort of name collision between script names? >> > > >> > > Clark >> > > > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at garloff.de Fri Jun 18 10:05:41 2021 From: openstack at garloff.de (Kurt Garloff) Date: Fri, 18 Jun 2021 12:05:41 +0200 Subject: [nova] SCS standardized flavor naming In-Reply-To: References: <5730c898-d979-f95c-5905-425dddab5e65@garloff.de> Message-ID: <71c81fb2-822a-2e4a-5ccb-92fdae0e1853@garloff.de> Hi Thomas, > On 6/17/21 7:54 PM, Kurt Garloff wrote: [...] >> We would like to seek your input and feedback into our attempt of >> defining a standardized naming scheme for flavors and a list of >> standard flavors available in all clouds that deliver SCS-compliant IaaS. >> >> Find the draft proposal at >> https://github.com/SovereignCloudStack/Operational-Docs/blob/main/flavor-naming-draft.MD [...] >> (2) Please look at the proposal itself. When looking into the details >>     how we specify how to optionally(!) encode a number of details into >>     flavor names, please keep in mind that this is indeed optional. We >>     expect most flavor names to be as simple as SCS-4V:8:20 or even >>     SCS-4V:8, even though complicated SCS-8C:32:2x200S-bms-i2-GNa:64-ib >>     [4] is possible for clouds that provide that level of differentiation >>     and want/need to expose this via the flavor name. >> >> Of course, input from existing providers of OpenStack infrastructure is >> particularly valuable. >> >> Feedback welcome! >> >> [1] https://gaia-x.eu/ >> [2] https://scs.community/ >> [3] https://osism.de/ >> [4] In case you wonder: 8 dedicated cores, 32GiB RAM, 2x200GB SSD disks >>     on bare metal sys, intel Cascade Lake, nVidia GPU with 64 Ampere SMs >>     and InfiniBand. >> >> PS: Cc'ing some folks who have contributed to this. On 18/06/2021 00:40, Thomas Goirand wrote: > Hi, > > While I do like the idea of a standard for flavor naming, I don't like > at all what I've seen as your example. IMO, it's best if more explicit. > I wouldn't be able to read one of your flavor names, and immediately > know what it is made of. To the contrary... 
> > We're using naming scheme like this: > nvt4-a8-ram24-disk50-perf2 > This means: > - nvt4: nvidia T4 GPU > - a8: AMD VCPU 8 (we also have i4 for example, for Intel) > - ram24: 24 GB of RAM > - disk50: 50 GB of local system disk > - perf2: level 2 of IOps / IO bandwidth We could argue whether the ram and disk prefixes should be used to improve human parsing. I have seen many naming schemes that always had CPU:RAM[:DISK] in this order with both RAM and DISK in GB. This is contained at the beginning of every single name, so you get used to it very quickly. So we compressed this piece more strongly than the optional pieces which you don't see so often, such as -xen/kvm/bms/vmw/hyv or -ib. The rest gets a tiny bit of time to get used to, agreed. Most SCS flavors in our scheme would just be SCS-8V:24:50, your flavor could be SCS-8C:24:50S-a-GNt:40 or so. (We have not yet written down a way to count Tensor Cores, something that we should add; reporting the 40 SMs here is not so relevant as the 320 tensor cores.) This assumes that the 8 vCPUs are real cores (otherwise 8V instead of 8C), that your perf2 is an SSD type of performance and that the GPU is a pass-through device (and not a virtualized vGPU). What I dislike about your scheme is that you don't put the non-standard pieces (the GPU) at the end -- I think it's easier to keep an overview over your flavors if they always start the same way. Also, you can't see whether the 8 vCPUs are dedicated cores, HTs or oversubscribed things; you have not specified the CPU generation (which in my opinion makes the amd vs intel spec somewhat useless -- some customers really want to know whether they have Cascade Lake or Ice Lake or Zen1 vs Zen2) nor a way to express generic x86 (x?). Is your scheme used by a lot of providers? > Having explicit and full "ram", "disk" and "perf" in the name helps a > lot to understand. I think it's much nicer than the cryptic: > > "SCS-16T:64:200s-GNa:64-ib" > > which I would never be able to decode without a document on the side. I > understand that you're attempting to make the flavor name smaller, but > IMO that's a bad idea. I don't see any problem with an explicit and > longer flavor name. If I take your scheme, the flavor would become "scs-x16thr-ram64-disk200-perf2-nva30-ib" or so. This assumes I stick to my ordering, use x for generic x86, thr for dedicated hyperthreads, perf2=ssd and we're using an Nvidia A30 (which has 56SMs not 64 so the match is not perfect) and -ib is verbose enough for infiniband ... It's a bit longish, but no real roadblock. I would argue that readability is not better once you have spent some minutes with the spec. Again, most flavors would be just SCS-16T:64:200, which does not take a lot of training. Thanks for your feedback -- let's have this discussion! Happy to see others weighing in as well. -- Kurt Garloff CTO Sovereign Cloud Stack OSB Alliance e.V.
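PS: To make the comparison a little more concrete, here is a rough, purely illustrative parser for the mandatory first part of the draft names. The two example names are taken from this thread; everything else (the regex, the helper name) is only my sketch and not part of the draft itself:

    # Illustrative only: parse the mandatory "SCS-<#><V|T|C>:<RAM>[:<disk>]" prefix.
    # Optional extensions (-bms, CPU generation, GPU, -ib, ...) are not handled here.
    import re

    PATTERN = re.compile(r"^SCS-(?P<cpus>\d+)(?P<cpu_type>[VTC])"
                         r":(?P<ram_gib>\d+)"
                         r"(?::(?P<disk_gb>\d+)(?P<disk_ssd>S?))?$")

    def parse_scs_name(name):
        # V = vCPU, T = dedicated hyperthread, C = dedicated core; S marks SSD-class disk.
        match = PATTERN.match(name)
        return match.groupdict() if match else None

    for example in ("SCS-4V:8:20", "SCS-16T:64:200"):
        print(example, "->", parse_scs_name(example))

Run as-is this prints the CPU/RAM/disk fields for the two simple examples; the more elaborate names above would need the optional-extension grammar on top.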
From xxxcloudlearner at gmail.com Fri Jun 18 10:50:33 2021 From: xxxcloudlearner at gmail.com (cloud learner) Date: Fri, 18 Jun 2021 16:20:33 +0530 Subject: packstack stuck at Testing if puppet apply is finished: x.x.x.x_compute.pp Message-ID: Dear Experts, I have installed the single node all in one using victoria, i have to add the new node in current setup, as documented at RDO site, we have to made changes in EXCLUDE_SERVER=controllerip CONFIG_COMPUTE_HOST=newcompute ip, i edited the answer file and run on the controller node but it stuck at Testing if puppet apply is finished: X.X.X.X_compute.pp And it does not show any error, kindly suggest. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From darumaseye at protonmail.com Fri Jun 18 11:09:28 2021 From: darumaseye at protonmail.com (darumaseye) Date: Fri, 18 Jun 2021 11:09:28 +0000 Subject: [ patrole ] Understanding RBAC Permission Exceptions In-Reply-To: <0FgKqtoBQoUfjFo7ENOMDaUJasoBgCoZEdkeuxNxIkpyASy8o9I49Y3QrZfGY89NLn2aiFwC6x3Bkh_5lOvxzN3snERP4k8tgUrT35No9PI=@protonmail.com> References: <0FgKqtoBQoUfjFo7ENOMDaUJasoBgCoZEdkeuxNxIkpyASy8o9I49Y3QrZfGY89NLn2aiFwC6x3Bkh_5lOvxzN3snERP4k8tgUrT35No9PI=@protonmail.com> Message-ID: Hello, I am a researcher studying security of the Openstack's Policies. I'm very interested in the Patrole project and I would like to ask you some questions. In the documentation it is stated that Patrole compares two different types of results: - an expected one ( derived from oslo.policy ) - an actual one ( derived from an actual request to the API ). Given that, Patrole's tests can return 3 different values: 1) "Success" if actual result and expected result are both True or both False; 2) "RbacOverPermissionException" if actual result is True but expected result is False; 3) "RbacUnderPermissionException" in the other case. I can't understand in which cases the tests can return a value different from "Success". As far as I know, an API call should always be validated internally by oslo.policy's rules, before being allowed. So, in order for an API call to be accepted, oslo.policy's rules must allow that API call. It seems to me that the "expected" result ( derived from oslo.policy ) is always included in the actual result. Hoping that everything I said is correct, I would like to ask you: What issue is allowing such a strange behavior in Openstack APIs ? Why the expected results can be different from the actual ones? Are there publicly available examples showing "Failure" values? In general, are there any publicly available test cases that i can study to understand this? I Would like to thank you very much in advance; Best Regards, Jacopo -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekokeeffe85 at yahoo.ie Fri Jun 18 13:06:57 2021 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Fri, 18 Jun 2021 13:06:57 +0000 (UTC) Subject: Allowing custom role to add users References: <1146251394.2312481.1624021617831.ref@mail.yahoo.com> Message-ID: <1146251394.2312481.1624021617831@mail.yahoo.com> Hi all, I have created a role (project_admin) and have given access to anyone with that role to be allowed delete VM's in their group/project. I am trying to allow that user to now add or remove users in the domain but I can't seem to figure it out. I have edited the /etc/keystone/policy.yaml and added role:project_admin to the create user rule. Is this the way I should be doing it? Or does anyone have any advice? 
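For reference, the kind of override I have been experimenting with looks roughly like the snippet below. The rule names are only my reading of the generated sample policy, so please treat this as a sketch rather than a known-good configuration:

    # /etc/keystone/policy.yaml (sketch only)
    "identity:create_user": "rule:admin_required or role:project_admin"
    "identity:delete_user": "rule:admin_required or role:project_admin"
    "identity:list_users": "rule:admin_required or role:project_admin"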
Thanks in advance and happy Friday :) Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekokeeffe85 at yahoo.ie Fri Jun 18 15:10:38 2021 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Fri, 18 Jun 2021 15:10:38 +0000 (UTC) Subject: Allowing custom role to add users In-Reply-To: <1146251394.2312481.1624021617831@mail.yahoo.com> References: <1146251394.2312481.1624021617831.ref@mail.yahoo.com> <1146251394.2312481.1624021617831@mail.yahoo.com> Message-ID: <689136416.2407255.1624029038150@mail.yahoo.com> Hi again, I should add to this that I have created an admin for the domain but my issue with that is they can see all other networks, instances, etc belonging to other projects. Regards,Derek On Friday 18 June 2021, 14:13:13 IST, Derek O keeffe wrote: Hi all, I have created a role (project_admin) and have given access to anyone with that role to be allowed delete VM's in their group/project. I am trying to allow that user to now add or remove users in the domain but I can't seem to figure it out. I have edited the /etc/keystone/policy.yaml and added role:project_admin to the create user rule. Is this the way I should be doing it? Or does anyone have any advice? Thanks in advance and happy Friday :) Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Jun 18 16:12:35 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 18 Jun 2021 18:12:35 +0200 Subject: [nova][neutron][deployment-projects] Re secbug #1734320 Message-ID: Hello Folks! I am writing this because a recent patch proposed to DevStack [1] mentioned "when using ml2/ovs vif isolation should always be used to prevent cross tenant traffic during a live migration" which is related to secbug #1734320 "Eavesdropping private traffic" [2]. However, I've found that none of the publicly-available deployment projects seem to be using ``isolate_vif``. [3] [4] Should this be corrected? PS: I used the deployment-projects tag as a collective tag to avoid mentioning all the projects (as it is too long to write :-) ). I hope that relevant people see this if need be or someone passes the information to them. For now, I am curious whether this should actually be enforced by default with ML2/OVS. [1] https://review.opendev.org/c/openstack/devstack/+/796826 [2] https://bugs.launchpad.net/neutron/+bug/1734320 [3] https://codesearch.opendev.org/?q=%5Cbisolate_vif%5Cb&i=nope&files=&excludeFiles=&repos= [4] https://github.com/search?p=1&q=isolate_vif&type=Code -yoctozepto From radoslaw.piliszek at gmail.com Fri Jun 18 16:20:41 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 18 Jun 2021 18:20:41 +0200 Subject: [masakari] Deprecating process monitor - querying the community in this thread Message-ID: Hello users of Masakari, As the Masakari core team, we have discussed the deprecation/retirement of one of the monitors - the process monitor. The reason is simple - we consider its mission is better handled by other software such as Kubernetes, Docker, or systemd, depending on how you deploy your OpenStack cluster. I believe the process monitor might have made sense in the pre-containerisation, pre-systemd world but it does not seem to any longer. Please let us know, by replying to this mail, if you have a use case for the process monitor that cannot be handled by the aforementioned software. 
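To illustrate what we mean: a plain systemd unit already covers the "restart it if it dies" duty that the process monitor performs today. The unit below is only an illustrative sketch (the service name and path are made up), not something Masakari ships:

    # /etc/systemd/system/my-openstack-agent.service (illustrative sketch)
    [Unit]
    Description=Example OpenStack agent kept alive by systemd
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/my-openstack-agent
    Restart=always
    RestartSec=5

    [Install]
    WantedBy=multi-user.target

Enabled with "systemctl enable --now my-openstack-agent", systemd restarts the process on failure, which is the core of what the process monitor was meant to do.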
-yoctozepto From DHilsbos at performair.com Fri Jun 18 16:53:22 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 18 Jun 2021 16:53:22 +0000 Subject: [ops][nova][spice][victoria] Switch from VNC to SPICE Message-ID: <0670B960225633449A24709C291A5252511E0B23@COM01.performair.local> All; We have a Victoria cluster, and I'd like to switch from VNC to SPICE. Cluster is installed with packages (RDO), and configured manually. I have located https://docs.openstack.org/nova/victoria/admin/remote-console-access.html, but this doesn't tell me which services need to be installed on which servers. Looking at packages, I'm fairly certain nova-spicehtml5proxy (openstack-nova-spicehtml5proxy on CentOS 8) needs to be installed where the nova-novncproxy is currently. I also suspect that qemu-kvm-ui-spice needs to be installed on the nova-compute nodes. Is spice-server needed on the nova-compute nodes? Thank you, Dominic L. Hilsbos, MBA Vice President - Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From akanevsk at redhat.com Fri Jun 18 17:05:47 2021 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Fri, 18 Jun 2021 12:05:47 -0500 Subject: [interop] Preping up for June Board meeting Message-ID: Team, as I prepare for the June OIF board meeting we need to do 3 things: 1. Need review and land https://review.opendev.org/c/osf/interop/+/796312 to remove requirement that one of Interop WG co-chair is from the board and nominated by the board. 2. Need to review and land https://review.opendev.org/c/osf/interop/+/784622 that has been discussed several times and is needed to match https://opendev.org/osf/interop/src/branch/master/doc/source/process/2021A.rst . 3. Final comments on presentation slides for the board - https://docs.google.com/presentation/d/1-9H1cTXZxW0vCSTzfBe0aMKbd7nggd8SOHQOT987nFs/ We need to complete it by Monday June 21 so I can send it to the board 1 week before the meeting. Thanks, -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Fri Jun 18 17:33:14 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 18 Jun 2021 18:33:14 +0100 Subject: [ops][nova][spice][victoria] Switch from VNC to SPICE In-Reply-To: <0670B960225633449A24709C291A5252511E0B23@COM01.performair.local> References: <0670B960225633449A24709C291A5252511E0B23@COM01.performair.local> Message-ID: <7136e3401a5587cef034b81b971406481966fef5.camel@redhat.com> On Fri, 2021-06-18 at 16:53 +0000, DHilsbos at performair.com wrote: > All; > > We have a Victoria cluster, and I'd like to switch from VNC to SPICE. Cluster is installed with packages (RDO), and configured manually. > > I have located https://docs.openstack.org/nova/victoria/admin/remote-console-access.html, but this doesn't tell me which services need to be installed on which servers. > > Looking at packages, I'm fairly certain nova-spicehtml5proxy (openstack-nova-spicehtml5proxy on CentOS 8) needs to be installed where the nova-novncproxy is currently. > > I also suspect that qemu-kvm-ui-spice needs to be installed on the nova-compute nodes. Is spice-server needed on the nova-compute nodes? Not an answer, but I'd be very careful about building solutions based on SPICE. It has been deprecated in RHEL 8.3 and recent versions of Fedora and is slated for removal in RHEL 9, as this bug [1] points out. 
It is also receives very little attention in nova as some deployments tooling (such as Red Hat OSP) has not supported it for some time. There's a non-zero chance support for this console type will be dropped entirely in some future release. Stephen [1] https://bugzilla.redhat.com/show_bug.cgi?id=1946938 > Thank you, > > Dominic L. Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From DHilsbos at performair.com Fri Jun 18 17:53:09 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 18 Jun 2021 17:53:09 +0000 Subject: [ops][nova][spice][victoria] Switch from VNC to SPICE In-Reply-To: <7136e3401a5587cef034b81b971406481966fef5.camel@redhat.com> References: <0670B960225633449A24709C291A5252511E0B23@COM01.performair.local> <7136e3401a5587cef034b81b971406481966fef5.camel@redhat.com> Message-ID: <0670B960225633449A24709C291A5252511E0EF0@COM01.performair.local> Stephen; Thank you for the information. What's the replacement? Is VNC getting improvements to allow higher resolutions, and multi-monitor, in the guest? We've already decided to transition our OpenStack cluster away from CentOS, as RDO doesn't package some of the OpenStack projects we'd like to use, and RedHat has lost our trust. Thank you, Dominic L. Hilsbos, MBA Vice President – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Stephen Finucane [mailto:stephenfin at redhat.com] Sent: Friday, June 18, 2021 10:33 AM To: Dominic Hilsbos; openstack-discuss at lists.openstack.org Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE On Fri, 2021-06-18 at 16:53 +0000, DHilsbos at performair.com wrote: > All; > > We have a Victoria cluster, and I'd like to switch from VNC to SPICE. Cluster is installed with packages (RDO), and configured manually. > > I have located https://docs.openstack.org/nova/victoria/admin/remote-console-access.html, but this doesn't tell me which services need to be installed on which servers. > > Looking at packages, I'm fairly certain nova-spicehtml5proxy (openstack-nova-spicehtml5proxy on CentOS 8) needs to be installed where the nova-novncproxy is currently. > > I also suspect that qemu-kvm-ui-spice needs to be installed on the nova-compute nodes. Is spice-server needed on the nova-compute nodes? Not an answer, but I'd be very careful about building solutions based on SPICE. It has been deprecated in RHEL 8.3 and recent versions of Fedora and is slated for removal in RHEL 9, as this bug [1] points out. It is also receives very little attention in nova as some deployments tooling (such as Red Hat OSP) has not supported it for some time. There's a non-zero chance support for this console type will be dropped entirely in some future release. Stephen [1] https://bugzilla.redhat.com/show_bug.cgi?id=1946938 > Thank you, > > Dominic L. Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc. 
> DHilsbos at PerformAir.com > www.PerformAir.com > > > From stephenfin at redhat.com Fri Jun 18 18:02:21 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 18 Jun 2021 19:02:21 +0100 Subject: [ops][nova][spice][victoria] Switch from VNC to SPICE In-Reply-To: <0670B960225633449A24709C291A5252511E0EF0@COM01.performair.local> References: <0670B960225633449A24709C291A5252511E0B23@COM01.performair.local> <7136e3401a5587cef034b81b971406481966fef5.camel@redhat.com> <0670B960225633449A24709C291A5252511E0EF0@COM01.performair.local> Message-ID: On Fri, 2021-06-18 at 17:53 +0000, DHilsbos at performair.com wrote: > Stephen; > > Thank you for the information. > > What's the replacement? Is VNC getting improvements to allow higher resolutions, and multi-monitor, in the guest? I'm not sure, though I suspect VNC is "good enough" for most use cases and no replacement is planned. This is speculation though and this question would be best directed at your distro or the libvirt maintainers. There are no plans to introduce another video console option in nova at this time. > We've already decided to transition our OpenStack cluster away from CentOS, as RDO doesn't package some of the OpenStack projects we'd like to use, and RedHat has lost our trust. In that case, this might matter less as I don't know what other distros' plans are for SPICE support. To the best of my knowledge it's not being deprecated or removed from upstream QEMU or libvirt yet. Whether that remains the case with more limited investment in the technology remains to be seen. Stephen > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -----Original Message----- > From: Stephen Finucane [mailto:stephenfin at redhat.com] > Sent: Friday, June 18, 2021 10:33 AM > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > On Fri, 2021-06-18 at 16:53 +0000, DHilsbos at performair.com wrote: > > All; > > > > We have a Victoria cluster, and I'd like to switch from VNC to SPICE. Cluster is installed with packages (RDO), and configured manually. > > > > I have located https://docs.openstack.org/nova/victoria/admin/remote-console-access.html, but this doesn't tell me which services need to be installed on which servers. > > > > Looking at packages, I'm fairly certain nova-spicehtml5proxy (openstack-nova-spicehtml5proxy on CentOS 8) needs to be installed where the nova-novncproxy is currently. > > > > I also suspect that qemu-kvm-ui-spice needs to be installed on the nova-compute nodes. Is spice-server needed on the nova-compute nodes? > > Not an answer, but I'd be very careful about building solutions based on SPICE. > It has been deprecated in RHEL 8.3 and recent versions of Fedora and is slated > for removal in RHEL 9, as this bug [1] points out. It is also receives very > little attention in nova as some deployments tooling (such as Red Hat OSP) has > not supported it for some time. There's a non-zero chance support for this > console type will be dropped entirely in some future release. > > Stephen > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1946938 > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Vice President - Information Technology > > Perform Air International Inc. 
> > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > > > > > > From laurentfdumont at gmail.com Fri Jun 18 19:39:07 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 18 Jun 2021 15:39:07 -0400 Subject: [ops][nova][spice][victoria] Switch from VNC to SPICE In-Reply-To: <0670B960225633449A24709C291A5252511E0EF0@COM01.performair.local> References: <0670B960225633449A24709C291A5252511E0B23@COM01.performair.local> <7136e3401a5587cef034b81b971406481966fef5.camel@redhat.com> <0670B960225633449A24709C291A5252511E0EF0@COM01.performair.local> Message-ID: Are you using the Openstack console itself? You might have more flexibility with a dedicated VNC server inside the VM and connect directly to it. So you would not be tied to the Openstack VNC support which I dont think was ever designed for graphical usage. More of a "it's 2AM, server is crashed and I need a way in!". On Fri, Jun 18, 2021 at 1:58 PM wrote: > Stephen; > > Thank you for the information. > > What's the replacement? Is VNC getting improvements to allow higher > resolutions, and multi-monitor, in the guest? > > We've already decided to transition our OpenStack cluster away from > CentOS, as RDO doesn't package some of the OpenStack projects we'd like to > use, and RedHat has lost our trust. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -----Original Message----- > From: Stephen Finucane [mailto:stephenfin at redhat.com] > Sent: Friday, June 18, 2021 10:33 AM > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > On Fri, 2021-06-18 at 16:53 +0000, DHilsbos at performair.com wrote: > > All; > > > > We have a Victoria cluster, and I'd like to switch from VNC to SPICE. > Cluster is installed with packages (RDO), and configured manually. > > > > I have located > https://docs.openstack.org/nova/victoria/admin/remote-console-access.html, > but this doesn't tell me which services need to be installed on which > servers. > > > > Looking at packages, I'm fairly certain nova-spicehtml5proxy > (openstack-nova-spicehtml5proxy on CentOS 8) needs to be installed where > the nova-novncproxy is currently. > > > > I also suspect that qemu-kvm-ui-spice needs to be installed on the > nova-compute nodes. Is spice-server needed on the nova-compute nodes? > > Not an answer, but I'd be very careful about building solutions based on > SPICE. > It has been deprecated in RHEL 8.3 and recent versions of Fedora and is > slated > for removal in RHEL 9, as this bug [1] points out. It is also receives very > little attention in nova as some deployments tooling (such as Red Hat OSP) > has > not supported it for some time. There's a non-zero chance support for this > console type will be dropped entirely in some future release. > > Stephen > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1946938 > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Vice President - Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From DHilsbos at performair.com Fri Jun 18 20:44:05 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Fri, 18 Jun 2021 20:44:05 +0000 Subject: [ops][nova][spice][victoria] Switch from VNC to SPICE In-Reply-To: References: <0670B960225633449A24709C291A5252511E0B23@COM01.performair.local> <7136e3401a5587cef034b81b971406481966fef5.camel@redhat.com> <0670B960225633449A24709C291A5252511E0EF0@COM01.performair.local> Message-ID: <0670B960225633449A24709C291A5252511E1152@COM01.performair.local> We're trying to virtualize desktops for remote workers. Guests will be Windows. Remote connection method will be GoToMyPC. As such, I need the OS to believe either 1) it has 2 1920 x 1080 monitors, or 2) it has a single 3840 x 1080 monitor. Neither of which have I found a way to do in VNC. Thank you, Dominic L. Hilsbos, MBA Vice President – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com From: Laurent Dumont [mailto:laurentfdumont at gmail.com] Sent: Friday, June 18, 2021 12:39 PM To: Dominic Hilsbos Cc: stephenfin at redhat.com; openstack-discuss Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE Are you using the Openstack console itself? You might have more flexibility with a dedicated VNC server inside the VM and connect directly to it. So you would not be tied to the Openstack VNC support which I dont think was ever designed for graphical usage. More of a "it's 2AM, server is crashed and I need a way in!". On Fri, Jun 18, 2021 at 1:58 PM wrote: Stephen; Thank you for the information. What's the replacement?  Is VNC getting improvements to allow higher resolutions, and multi-monitor, in the guest? We've already decided to transition our OpenStack cluster away from CentOS, as RDO doesn't package some of the OpenStack projects we'd like to use, and RedHat has lost our trust. Thank you, Dominic L. Hilsbos, MBA Vice President – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Stephen Finucane [mailto:stephenfin at redhat.com] Sent: Friday, June 18, 2021 10:33 AM To: Dominic Hilsbos; openstack-discuss at lists.openstack.org Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE On Fri, 2021-06-18 at 16:53 +0000, DHilsbos at performair.com wrote: > All; > > We have a Victoria cluster, and I'd like to switch from VNC to SPICE.  Cluster is installed with packages (RDO), and configured manually. > > I have located https://docs.openstack.org/nova/victoria/admin/remote-console-access.html, but this doesn't tell me which services need to be installed on which servers. > > Looking at packages, I'm fairly certain nova-spicehtml5proxy (openstack-nova-spicehtml5proxy on CentOS 8) needs to be installed where the nova-novncproxy is currently. > > I also suspect that qemu-kvm-ui-spice needs to be installed on the nova-compute nodes.  Is spice-server needed on the nova-compute nodes? Not an answer, but I'd be very careful about building solutions based on SPICE. It has been deprecated in RHEL 8.3 and recent versions of Fedora and is slated for removal in RHEL 9, as this bug [1] points out. It is also receives very little attention in nova as some deployments tooling (such as Red Hat OSP) has not supported it for some time. There's a non-zero chance support for this console type will be dropped entirely in some future release. Stephen [1] https://bugzilla.redhat.com/show_bug.cgi?id=1946938 > Thank you, > > Dominic L. 
Hilsbos, MBA > Vice President - Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > From gmann at ghanshyammann.com Sat Jun 19 00:02:20 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 18 Jun 2021 19:02:20 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 18th June, 21: Reading: 5 min Message-ID: <17a21934fc5.12599574527045.3761785323156975883@ghanshyammann.com> Hello Everyone, Here is last week's summary of the Technical Committee activities. 1. What we completed this week: ========================= * Deprecated devstack-gate project[1] * Updated project-team-guide for the meeting channel preference[2] * TC voted the Y release proposed names and forwarded it for the next step of trademark checks. 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - https://meetings.opendev.org/meetings/tc/2021/tc.2021-06-17-15.00.log.html * We will have next week's meeting on June 24th, Thursday 15:00 UTC[3]. 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ TC is using the etherpad[4] for Xena cycle working item. We will be checking and updating the status biweekly in the same etherpad. Open Reviews ----------------- * One open review for ongoing activities[5]. Migration from Freenode to OFTC ----------------------------------------- * We are in 'Communicate with community' work where all projects need to update all contributor doc etc. Please finish this in your project and mark the progress in etherpad[6]. * I wrote a small blog in OpenStack blog[7]. * All the required work for this migration is tracked in this etherpad[6] 'Y' release naming process ------------------------------- * Y release naming election is closed now. As a last step, the foundation is doing trademark checks on elected ranking. Retiring panko ----------------- * panko project is in process of retirement[8] Test support for TLS default: ---------------------------------- Rico has started a separate email thread over testing with tls-proxy enabled[9], we encourage projects to participate in that testing and help to enable the tls-proxy in gate testing. Retiring governance's in-active repos ------------------------------------------- The Technical Committee retiring the governance's in-active repos which are not required in current structure[10]. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[11]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [12] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [13] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://review.opendev.org/c/openstack/governance/+/795385 [2] https://docs.openstack.org/project-team-guide/open-community.html#public-meetings-on-irc [3] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [4] https://etherpad.opendev.org/p/tc-xena-tracker [5] https://review.opendev.org/q/project:openstack/governance+status:open [6] https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc [7] https://www.openstack.org/blog/the-openstack-community-irc-network-moved-to-oftc/ [8] https://review.opendev.org/c/openstack/governance/+/796408 [9] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html [10] https://etherpad.opendev.org/p/governance-repos-cleanup [11] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [12] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [13] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From zigo at debian.org Sat Jun 19 14:41:08 2021 From: zigo at debian.org (Thomas Goirand) Date: Sat, 19 Jun 2021 16:41:08 +0200 Subject: [nova] SCS standardized flavor naming In-Reply-To: <71c81fb2-822a-2e4a-5ccb-92fdae0e1853@garloff.de> References: <5730c898-d979-f95c-5905-425dddab5e65@garloff.de> <71c81fb2-822a-2e4a-5ccb-92fdae0e1853@garloff.de> Message-ID: Hi Kurt, Thanks for this discussion, this is interesting. On 6/18/21 12:05 PM, Kurt Garloff wrote: ]> Also, you can't see whether the 8 vCPUs are dedicated cores, HTs > or oversubscribed things; you have not specified the CPU generation > (which in my opinion makes the amd vs intel spec somewhat useless -- > somecustomers really want to know whether they have Cascade Lake > or Ice Lake or Zen1 vs Zen2) nor a way to express generic x86 (x?). This probably makes sense if a cluster has many types of CPU, though you wont find that often. In our case, we only have one type of CPU per cluster: 2x AMD EPYC 7452 32-Cores on every compute for that brand new public cloud we're about to release (so 128 threads on each compute): so we use the EPYC-v2 CPU model of Qemu everywhere. IMO, there's no need to express what type of CPU if there's only one available anyways... > Is your scheme used by a lot of providers? I don't know, but it's used by us! :) By the way, in our case, perf1 vs perf2 doesn't express a change of backend. Both are using NVMe (4th gen, so REALLY fast...) on a Ceph cluster, but it express a different IOps and bandwidth limiting. So maybe we should find a better way to express I/O perfs than just a lasting number? Maybe nvmeperf1 vs ssdperf1? > If I take your scheme, the flavor would become > "scs-x16thr-ram64-disk200-perf2-nva30-ib" or so. > This assumes I stick to my ordering, use x for generic x86, thr for > dedicated hyperthreads, perf2=ssd and we're using an Nvidia A30 (which > has 56SMs not 64 so the match is not perfect) and -ib is verbose enough > for infiniband ... Why would someone want to use infiniband these days? We get Mellanox cards at 2x 25Gbits/s for a very cheap price these days... And then, same thing: why would you express the speed of your network in the flavor, when most likely, it's going to be the same on all flavors (and you cluster will most likely have a single type of NIC speed...)? > It's a bit longish, but no real roadblock. IMO, it's a way more readable this way. Long isn't a problem, really, if it adds redability. > I would argue that > readibility is not better once you have spent some minutes with the > spec. The point is, a new customer will *not* spend time reading the spec. 
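For what it's worth, pinning a single model like that is only a couple of lines of nova.conf on the computes; something along these lines (the model name below is just an example and has to exist in the local libvirt CPU map):

    # nova.conf on the compute nodes (example only)
    [libvirt]
    cpu_mode = custom
    cpu_models = EPYC-Rome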
Typically, they will want to just fire-up a VM quickly without reading too much docs... Cheers, Thomas Goirand (zigo) From ramerama at tataelxsi.co.in Mon Jun 21 01:05:36 2021 From: ramerama at tataelxsi.co.in (Ramesh Ramanathan B) Date: Mon, 21 Jun 2021 01:05:36 +0000 Subject: Nova shows incorrect VM status when compute is down. In-Reply-To: <2805adacfd746e0356ba61e3dedbe75b7034055d.camel@redhat.com> References: , <2805adacfd746e0356ba61e3dedbe75b7034055d.camel@redhat.com> Message-ID: Hi Sean, Thank you for the response. I understand the rationale you have discussed, but for us this is a problem since we are building a monitoring system and with this behavior it is impossible for us to know if a service is down or not (during a compute failure). Any suggestions here on how this situation can be handled? Thanks Regards, Ramesh ________________________________ From: Sean Mooney Sent: Thursday, June 17, 2021 10:48 PM To: Ramesh Ramanathan B ; openstack-discuss at lists.openstack.org ; Melanie Witt Subject: Re: Nova shows incorrect VM status when compute is down. ________________________________ **This is an external email. Please check the sender’s full email address (not just the sender name) and exercise caution before you respond or click any embedded link/attachment.** ________________________________ On Thu, 2021-06-17 at 14:24 +0000, Ramesh Ramanathan B wrote: > Dear All, > > One observation we have while using Open Stack Rocky is, when a > compute node goes down, the VM status still shows active (the servers > running in the compute node that went down). Is this the expected > behavior? Any configurations required to get the right status. yes this is expected behavior when the compute agent heartbeat is missed and we do not know the status of the vms we continue to report them in the last state we knew of. wedicussed adding an unknow state at onepoint to the api. im not sure if that has been added yet melanie i think you reviewd or worked on that? there was concern about exposing this as it is exposing info about the backend hosts for exampel if a cell db connection goes down but the vm is still active it woudl be incorrect to report the vm state as down because it actully unknown and in this case the vm is still active. in the case were the comptue agent was stopped for mainatnce we also do not want to set the vms state as down as again stoping the agent will not prevent the vms form working. in either case of the cell connection being tempory disrupted or the compute agent being stopped reporting the vm as downs in the api could lead to data currption if you evacuated the vm or a user deleted it and tried to resue its data voluems for a new vms so ingeneral it incorrect to assuem that the vm status in the db refect the state of the vm on the host if the compute agent is down and its not correct to udpate the status in the db to down. making it as unkonw coudl be valide but some operator objected to that as it was leaking information about there data ceneter(such as they are currently doing an upgrade/matainece and hvae stopped the agent) to custoemr that they seee as a security issue. > > In the attached image the compute is down, but the VM status still > shows active. We are running a data center so it is not practical to > run nova reset-state for all the servers. reset-state is not intended to be used for this. infact reset-state should almost never be used. 
you should treat every invocation of reset state as running an arbiraty sql update query and avoid it unless absolute nessisary. > Is there an API to force update Nova to show the correct status? Or > any configurations missing that is causing this? > > Thanks > > Regards, > Ramesh > > ________________________________ > Disclaimer: This email and any files transmitted with it are > confidential and intended solely for the use of the individual or > entity to whom they are addressed. If you are not the intended > recipient of this message , or if this message has been addressed to > you in error, please immediately alert the sender by reply email and > then delete this message and any attachments. If you are not the > intended recipient, you are hereby notified that any use, > dissemination, copying, or storage of this message or its attachments > is strictly prohibited. Email transmission cannot be guaranteed to be > secure or error-free, as information could be intercepted, corrupted, > lost, destroyed, arrive late or incomplete, or contain viruses. The > sender, therefore, does not accept liability for any errors, > omissions or contaminations in the contents of this message which > might have occurred as a result of email transmission. If > verification is required, please request for a hard-copy version. > ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From songwenping at inspur.com Mon Jun 21 02:25:51 2021 From: songwenping at inspur.com (=?gb2312?B?QWxleCBTb25nICjLzs7Exr0p?=) Date: Mon, 21 Jun 2021 02:25:51 +0000 Subject: cyborg-tempest-plugin test failed due to the zuul server has no accelerators Message-ID: Hello Everyone, Is the Zuul server env changed recently? There are no accelerators and our cyborg-tempest-plugin test failed. Please help. Thanks. Best Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3774 bytes Desc: not available URL: From tonyppe at gmail.com Mon Jun 21 05:51:07 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 21 Jun 2021 13:51:07 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: Hi, me again :) I tested this again Friday and today (Monday) using a centos Ansible Control Host as well as different installation methods of the openstack host (such as minimal OS install and "server with gui"). Essentially, the deployment of Openstack Victora fails during "kayobe overcloud service deploy" because of the: TASK [openvswitch : Ensuring OVS bridge is properly setup] . I investigated this, comparing it with a Train version. On Victoria, the host is missing: - ifcfg-p-bond0-ovs - ifcfg-p-bond0-phy And these are not visible in the bridge config as seen with "ovs-vsctl show". I tried to manually add the ifcfg and add to the bridge but I inadvertently created a bridging loop. Are you guys aware of this? I am not sure what else I can do to try and either help the kayobe/kolla-ansible teams or; resolve this to allow a successful Victoria install - please let me know? Regards, Tony Pearce On Thu, 17 Jun 2021 at 16:13, Tony Pearce wrote: > Hi Mark, > > I made some time to test this again today with Victoria on a different > ACH. 
During host configure, it fails not finding python: > > TASK [Verify that a command can be executed] > ********************************************************************************************************************************** > fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": > "Shared connection to 192.168.29.235 closed.\r\n", "module_stdout": > "/bin/sh: /usr/bin/python3: No such file or directory\r\n", "msg": "The > module failed to execute correctly, you probably need to set the > interpreter.\nSee stdout/stderr for the exact error", "rc": 127} > > PLAY RECAP > ******************************************************************************************************************************************************************** > juc-ucsb-5-p : ok=4 changed=1 unreachable=0 > failed=1 skipped=2 rescued=0 ignored=0 > > The task you mentioned previously, was ran but was not run against the > host because no hosts matched: > > PLAY [Ensure python is installed] > ********************************************************************************************************************************************* > skipping: no hosts matched > > I looked at `venvs/kayobe/share/kayobe/ansible/kayobe-ansible-user.yml` > and a comment in there says it's only run if the kayobe user account is > inaccessible. In my deployment I have "#kayobe_ansible_user:" which is not > defined by me. Previously, I defined it as my management user and it caused > an issue with the password. So I'm unsure why this is an issue. > > To work around, I manually installed python and the host configure was > successful this time around. I tried this twice and same experience both > times. > > Then later, during service deploy it fails here: > > RUNNING HANDLER [common : Restart fluentd container] > ************************************************************************************************************************** > fatal: [juc-ucsb-5-p]: FAILED! 
=> {"changed": true, "msg": "'Traceback > (most recent call last):\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", > line 259, in _raise_for_status\\n response.raise_for_status()\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/requests/models.py\", > line 941, in raise_for_status\\n raise HTTPError(http_error_msg, > response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal > Server Error for url: > http+docker://localhost/v1.41/containers/fluentd/start\\n\\nDuring handling > of the above exception, another exception occurred:\\n\\nTraceback (most > recent call last):\\n File > \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", > line 1131, in main\\n File > \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", > line 785, in recreate_or_restart_container\\n File > \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", > line 817, in start_container\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/utils/decorators.py\", > line 19, in wrapped\\n return f(self, resource_id, *args, **kwargs)\\n > File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/container.py\", > line 1108, in start\\n self._raise_for_status(res)\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", > line 261, in _raise_for_status\\n raise > create_api_error_from_http_exception(e)\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/errors.py\", > line 31, in create_api_error_from_http_exception\\n raise cls(e, > response=response, explanation=explanation)\\ndocker.errors.APIError: 500 > Server Error: Internal Server Error (\"error while creating mount source > path \\'/etc/localtime\\': mkdir /etc/lo > > The error says that the file exists. So the first time I just renamed the > symlink file and then this was successful in terms of allowing the deploy > process to proceed past this point of failure. The 2nd time around, the > rename was not good enough because there's a check to make sure that the > file is present there. So the 2nd time around I issued "touch > /etc/localtime" after renaming the existing and then this passed. > > Lastly, the deploy fails with a blocking action that I cannot resolve > myself: > > TASK [openvswitch : Ensuring OVS bridge is properly setup] > ******************************************************************************************************************** > changed: [juc-ucsb-5-p] => (item=['enp6s0-ovs', 'enp6s0']) > > This step breaks networking on the host. Looking at the openvswitchdb, I > think this could be something similar to the issue seen before with > Wallaby. The first time I tried this was with enp6s0 configured as a bond0 > as desired. I then tried without a bond0 and both times got the same > result. > If I reboot the host then I can get successful ping replies for a short > while before they stop again. Same experience as previous. I believe the > pings stop when the bridge config is applied from the container shortly > after host boot up. Ovs-vsctl show output: [1] > > I took a look at the logs [2] but to me I dont see anything alarming or > that could point to the issue. 
I've previously tried turning off IPv6 and > this did not have success in this part, although the log message about IPv6 > went away. > > I tried removing the physical interface from the bridge "ovs-vsctl > del-port..." and as soon as I do this, I can ping the host once again. Once > I re-add the port back to the bridge, I can no longer connect to the host. > There's no errors from ovs-vsctl at this point, either. > > [1] ovs-vsctl output screenshot > [2] ovs logs screenshot > > BTW I am trimming the rest of the mail off because it exceeds 40kb size > for the group. > > Kind regards, > > Tony Pearce > >> ... >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Mon Jun 21 08:17:48 2021 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 21 Jun 2021 13:47:48 +0530 Subject: [glance] nominating Cyril Roelandt for glance core In-Reply-To: References: Message-ID: I'm not a glance core but in my experience working on glance cinder store, Cyril has helped me quite a few times so +1 from my side too. On Thu, Jun 17, 2021 at 8:25 PM Abhishek Kekane wrote: > Hi All, > > I am nominating Cyril Roelandt (cyril-roelandt LP and Steap on IRC) to > be a Glance core. Cyril has been around the Glance community for > a long time and is familiar with the architecture and design patterns > used in Glance and its related projects. He's contributed code, > triaged bugs, provided bug fixes, and did quality reviews for Glance. He > is also helping me in reducing our bug backlogs. > > Considering the current situation with the project, however, it would be > an enormous help to have someone as knowledgeable about Glance as Cyril > to have +2 abilities. I discussed this with cyril, he's agreed to be a > core reviewer. > > In any case, I'd like to put Cyril to work as soon as possible! So > please reply to this message with comments or concerns before 23:59 > UTC on Monday 21 June. I'd like to confirm Cyril as a core on Tuesday 22 > June. > > Thanks and Regards, > > Abhishek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Jun 21 08:36:07 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 21 Jun 2021 09:36:07 +0100 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: On Mon, 21 Jun 2021 at 06:51, Tony Pearce wrote: > > Hi, me again :) > > I tested this again Friday and today (Monday) using a centos Ansible Control Host as well as different installation methods of the openstack host (such as minimal OS install and "server with gui"). Essentially, the deployment of Openstack Victora fails during "kayobe overcloud service deploy" because of the: TASK [openvswitch : Ensuring OVS bridge is properly setup] . > > I investigated this, comparing it with a Train version. On Victoria, the host is missing: > - ifcfg-p-bond0-ovs > - ifcfg-p-bond0-phy > > And these are not visible in the bridge config as seen with "ovs-vsctl show". I tried to manually add the ifcfg and add to the bridge but I inadvertently created a bridging loop. > > Are you guys aware of this? I am not sure what else I can do to try and either help the kayobe/kolla-ansible teams or; resolve this to allow a successful Victoria install - please let me know? One relevant thing that changed between Train and Victoria is that Kayobe supports plugging a non-bridge interface directly into OVS, without the veth pairs. 
So if your bond0 interface is not a bridge (I assume it's a bond), then you would no longer get the veth links. I'm not sure how it would have worked without a bridge previously though. Mark > > Regards, > > Tony Pearce > > > On Thu, 17 Jun 2021 at 16:13, Tony Pearce wrote: >> >> Hi Mark, >> >> I made some time to test this again today with Victoria on a different ACH. During host configure, it fails not finding python: >> >> TASK [Verify that a command can be executed] ********************************************************************************************************************************** >> fatal: [juc-ucsb-5-p]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 192.168.29.235 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python3: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127} >> >> PLAY RECAP ******************************************************************************************************************************************************************** >> juc-ucsb-5-p : ok=4 changed=1 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0 >> >> The task you mentioned previously, was ran but was not run against the host because no hosts matched: >> >> PLAY [Ensure python is installed] ********************************************************************************************************************************************* >> skipping: no hosts matched >> >> I looked at `venvs/kayobe/share/kayobe/ansible/kayobe-ansible-user.yml` and a comment in there says it's only run if the kayobe user account is inaccessible. In my deployment I have "#kayobe_ansible_user:" which is not defined by me. Previously, I defined it as my management user and it caused an issue with the password. So I'm unsure why this is an issue. >> >> To work around, I manually installed python and the host configure was successful this time around. I tried this twice and same experience both times. >> >> Then later, during service deploy it fails here: >> >> RUNNING HANDLER [common : Restart fluentd container] ************************************************************************************************************************** >> fatal: [juc-ucsb-5-p]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", line 259, in _raise_for_status\\n response.raise_for_status()\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/requests/models.py\", line 941, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/fluentd/start\\n\\nDuring handling of the above exception, another exception occurred:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", line 1131, in main\\n File \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", line 785, in recreate_or_restart_container\\n File \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", line 817, in start_container\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/utils/decorators.py\", line 19, in wrapped\\n return f(self, resource_id, *args, **kwargs)\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/container.py\", line 1108, in start\\n self._raise_for_status(res)\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", line 261, in _raise_for_status\\n raise create_api_error_from_http_exception(e)\\n File \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/errors.py\", line 31, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation)\\ndocker.errors.APIError: 500 Server Error: Internal Server Error (\"error while creating mount source path \\'/etc/localtime\\': mkdir /etc/lo >> >> The error says that the file exists. So the first time I just renamed the symlink file and then this was successful in terms of allowing the deploy process to proceed past this point of failure. The 2nd time around, the rename was not good enough because there's a check to make sure that the file is present there. So the 2nd time around I issued "touch /etc/localtime" after renaming the existing and then this passed. >> >> Lastly, the deploy fails with a blocking action that I cannot resolve myself: >> >> TASK [openvswitch : Ensuring OVS bridge is properly setup] ******************************************************************************************************************** >> changed: [juc-ucsb-5-p] => (item=['enp6s0-ovs', 'enp6s0']) >> >> This step breaks networking on the host. Looking at the openvswitchdb, I think this could be something similar to the issue seen before with Wallaby. The first time I tried this was with enp6s0 configured as a bond0 as desired. I then tried without a bond0 and both times got the same result. >> If I reboot the host then I can get successful ping replies for a short while before they stop again. Same experience as previous. I believe the pings stop when the bridge config is applied from the container shortly after host boot up. Ovs-vsctl show output: [1] >> >> I took a look at the logs [2] but to me I dont see anything alarming or that could point to the issue. I've previously tried turning off IPv6 and this did not have success in this part, although the log message about IPv6 went away. 
>> >> I tried removing the physical interface from the bridge "ovs-vsctl del-port..." and as soon as I do this, I can ping the host once again. Once I re-add the port back to the bridge, I can no longer connect to the host. There's no errors from ovs-vsctl at this point, either. >> >> [1] ovs-vsctl output screenshot >> [2] ovs logs screenshot >> >> BTW I am trimming the rest of the mail off because it exceeds 40kb size for the group. >> >> Kind regards, >> >> Tony Pearce >>> >>> ... From mark at stackhpc.com Mon Jun 21 08:42:25 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 21 Jun 2021 09:42:25 +0100 Subject: [kayobe][train] kolla_copy_ca_into_containers variable In-Reply-To: References: Message-ID: On Wed, 16 Jun 2021 at 10:11, Tony Pearce wrote: > > I have deployed Train with Kayobe. I'd like to enable SSL using a cert which is signed but NOT by a public CA. This means I need to add the CA cert to the containers. > > I came across this doc [1] and I wanted to ask / discover when this variable comes into play "kolla_copy_ca_into_containers"? > Does this variable work only from Victoria onwards or will it work in Train? The kolla_copy_ca_into_containers variable was added to Kolla Ansible in Ussuri. > Do I require to have a "seed" to build containers, to enable this cert copy into containers? (kayobe overcloud container image build). > OR if I do "kayobe overcloud container image pull" will the cert be copied at that point? The certs are copied at runtime, not when the images are built. > > [1] OpenStack Docs: TLS > > Thanks and regards, > > Tony Pearce > From tonyppe at gmail.com Mon Jun 21 08:45:23 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 21 Jun 2021 16:45:23 +0800 Subject: [kayobe][train] kolla_copy_ca_into_containers variable In-Reply-To: References: Message-ID: Thanks for the info Mark! Kind regards, Tony Pearce On Mon, 21 Jun 2021 at 16:42, Mark Goddard wrote: > On Wed, 16 Jun 2021 at 10:11, Tony Pearce wrote: > > > > I have deployed Train with Kayobe. I'd like to enable SSL using a cert > which is signed but NOT by a public CA. This means I need to add the CA > cert to the containers. > > > > I came across this doc [1] and I wanted to ask / discover when this > variable comes into play "kolla_copy_ca_into_containers"? > > Does this variable work only from Victoria onwards or will it work in > Train? > The kolla_copy_ca_into_containers variable was added to Kolla Ansible > in Ussuri. > > Do I require to have a "seed" to build containers, to enable this cert > copy into containers? (kayobe overcloud container image build). > > OR if I do "kayobe overcloud container image pull" will the cert be > copied at that point? > The certs are copied at runtime, not when the images are built. > > > > [1] OpenStack Docs: TLS > > > > Thanks and regards, > > > > Tony Pearce > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyppe at gmail.com Mon Jun 21 08:58:43 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Mon, 21 Jun 2021 16:58:43 +0800 Subject: Wallaby install via kayobe onto ubuntu 20 all in one host In-Reply-To: References: Message-ID: I see. I probably should have explained better, I tried resolving this by using a console connection to the host - this is outside of "kayobe" commands, because the host network gets broken during "kayobe service deploy" which causes the deployment to fail. I tried manually creating the interfaces and adding the ports. 
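For clarity, this is roughly what I have been doing from the console (the interface, bridge and port names are the ones this host reports in the failing task and in "ovs-vsctl show", so please treat the exact names as illustrative rather than definitive):

ip link add p-enp6s0-phy type veth peer name p-enp6s0-ovs   # attempt to recreate the veth pair that the Train host has
ovs-vsctl show                                              # bridge enp6s0-ovs has port enp6s0, but none of the p-*-ovs / p-*-phy ports seen on Train
ovs-vsctl del-port enp6s0                                   # after this, the host answers pings again
ovs-vsctl add-port enp6s0-ovs enp6s0                        # re-adding the port; connectivity to the host is lost again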
In short, the "host configure" gets completed without issue (apart from the localtime and the python3 issue already mentioned that I am able to work around). After the host is configured, I run "service deploy" which halts when the host is no longer reachable over IP, which is the task where it tries to check that the bridge is set up. I have tried over the past week or so to do enough testing to either confirm or rule out a local issue on my side, having tried multiples of systems and getting the same result. I think there may be a bug with kayobe / kolla-ansible which is causing the deployment failure. Are you aware of anything else that I could try to be certain? Kind regards, Tony Pearce On Mon, 21 Jun 2021 at 16:36, Mark Goddard wrote: > On Mon, 21 Jun 2021 at 06:51, Tony Pearce wrote: > > > > Hi, me again :) > > > > I tested this again Friday and today (Monday) using a centos Ansible > Control Host as well as different installation methods of the openstack > host (such as minimal OS install and "server with gui"). Essentially, the > deployment of Openstack Victora fails during "kayobe overcloud service > deploy" because of the: TASK [openvswitch : Ensuring OVS bridge is > properly setup] . > > > > I investigated this, comparing it with a Train version. On Victoria, the > host is missing: > > - ifcfg-p-bond0-ovs > > - ifcfg-p-bond0-phy > > > > And these are not visible in the bridge config as seen with "ovs-vsctl > show". I tried to manually add the ifcfg and add to the bridge but I > inadvertently created a bridging loop. > > > > Are you guys aware of this? I am not sure what else I can do to try and > either help the kayobe/kolla-ansible teams or; resolve this to allow a > successful Victoria install - please let me know? > > One relevant thing that changed between Train and Victoria is that > Kayobe supports plugging a non-bridge interface directly into OVS, > without the veth pairs. So if your bond0 interface is not a bridge (I > assume it's a bond), then you would no longer get the veth links. I'm > not sure how it would have worked without a bridge previously though. > Mark > > > > > Regards, > > > > Tony Pearce > > > > > > On Thu, 17 Jun 2021 at 16:13, Tony Pearce wrote: > >> > >> Hi Mark, > >> > >> I made some time to test this again today with Victoria on a different > ACH. During host configure, it fails not finding python: > >> > >> TASK [Verify that a command can be executed] > ********************************************************************************************************************************** > >> fatal: [juc-ucsb-5-p]: FAILED! 
=> {"changed": false, "module_stderr": > "Shared connection to 192.168.29.235 closed.\r\n", "module_stdout": > "/bin/sh: /usr/bin/python3: No such file or directory\r\n", "msg": "The > module failed to execute correctly, you probably need to set the > interpreter.\nSee stdout/stderr for the exact error", "rc": 127} > >> > >> PLAY RECAP > ******************************************************************************************************************************************************************** > >> juc-ucsb-5-p : ok=4 changed=1 unreachable=0 > failed=1 skipped=2 rescued=0 ignored=0 > >> > >> The task you mentioned previously, was ran but was not run against the > host because no hosts matched: > >> > >> PLAY [Ensure python is installed] > ********************************************************************************************************************************************* > >> skipping: no hosts matched > >> > >> I looked at `venvs/kayobe/share/kayobe/ansible/kayobe-ansible-user.yml` > and a comment in there says it's only run if the kayobe user account is > inaccessible. In my deployment I have "#kayobe_ansible_user:" which is not > defined by me. Previously, I defined it as my management user and it caused > an issue with the password. So I'm unsure why this is an issue. > >> > >> To work around, I manually installed python and the host configure was > successful this time around. I tried this twice and same experience both > times. > >> > >> Then later, during service deploy it fails here: > >> > >> RUNNING HANDLER [common : Restart fluentd container] > ************************************************************************************************************************** > >> fatal: [juc-ucsb-5-p]: FAILED! => {"changed": true, "msg": "'Traceback > (most recent call last):\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", > line 259, in _raise_for_status\\n response.raise_for_status()\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/requests/models.py\", > line 941, in raise_for_status\\n raise HTTPError(http_error_msg, > response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal > Server Error for url: > http+docker://localhost/v1.41/containers/fluentd/start\\n\\nDuring handling > of the above exception, another exception occurred:\\n\\nTraceback (most > recent call last):\\n File > \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", > line 1131, in main\\n File > \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", > line 785, in recreate_or_restart_container\\n File > \"/tmp/ansible_kolla_docker_payload_34omrn2y/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", > line 817, in start_container\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/utils/decorators.py\", > line 19, in wrapped\\n return f(self, resource_id, *args, **kwargs)\\n > File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/container.py\", > line 1108, in start\\n self._raise_for_status(res)\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/api/client.py\", > line 261, in _raise_for_status\\n raise > create_api_error_from_http_exception(e)\\n File > \"/opt/kayobe/venvs/kolla-ansible/lib/python3.6/site-packages/docker/errors.py\", > line 31, in create_api_error_from_http_exception\\n raise cls(e, > 
response=response, explanation=explanation)\\ndocker.errors.APIError: 500 > Server Error: Internal Server Error (\"error while creating mount source > path \\'/etc/localtime\\': mkdir /etc/lo > >> > >> The error says that the file exists. So the first time I just renamed > the symlink file and then this was successful in terms of allowing the > deploy process to proceed past this point of failure. The 2nd time around, > the rename was not good enough because there's a check to make sure that > the file is present there. So the 2nd time around I issued "touch > /etc/localtime" after renaming the existing and then this passed. > >> > >> Lastly, the deploy fails with a blocking action that I cannot resolve > myself: > >> > >> TASK [openvswitch : Ensuring OVS bridge is properly setup] > ******************************************************************************************************************** > >> changed: [juc-ucsb-5-p] => (item=['enp6s0-ovs', 'enp6s0']) > >> > >> This step breaks networking on the host. Looking at the openvswitchdb, > I think this could be something similar to the issue seen before with > Wallaby. The first time I tried this was with enp6s0 configured as a bond0 > as desired. I then tried without a bond0 and both times got the same result. > >> If I reboot the host then I can get successful ping replies for a short > while before they stop again. Same experience as previous. I believe the > pings stop when the bridge config is applied from the container shortly > after host boot up. Ovs-vsctl show output: [1] > >> > >> I took a look at the logs [2] but to me I dont see anything alarming or > that could point to the issue. I've previously tried turning off IPv6 and > this did not have success in this part, although the log message about IPv6 > went away. > >> > >> I tried removing the physical interface from the bridge "ovs-vsctl > del-port..." and as soon as I do this, I can ping the host once again. Once > I re-add the port back to the bridge, I can no longer connect to the host. > There's no errors from ovs-vsctl at this point, either. > >> > >> [1] ovs-vsctl output screenshot > >> [2] ovs logs screenshot > >> > >> BTW I am trimming the rest of the mail off because it exceeds 40kb size > for the group. > >> > >> Kind regards, > >> > >> Tony Pearce > >>> > >>> ... > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From katonalala at gmail.com Mon Jun 21 08:59:43 2021 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 21 Jun 2021 10:59:43 +0200 Subject: [neutron] Bug deputy report for week of June 14th Message-ID: Hi, I was last week's bug deputy for Neutron, Below is a short summary of last week's bugs: Critical --------- - https://bugs.launchpad.net/neutron/+bug/1932093 - "oslo_config.cfg.DuplicateOptError: duplicate option: host" using OVN Octavia provider on stable/train - https://review.opendev.org/c/openstack/networking-ovn/+/796517 - - https://bugs.launchpad.net/neutron/+bug/1932483 - CI neutron.tests.functional.services.l3_router.test_l3_dvr get failed frequently - *Unassigned* High ------- - https://bugs.launchpad.net/neutron/+bug/1932421 - ovn-neutron-db-sync deletes legitimate metadata ports - https://review.opendev.org/q/I78673b6a85f1c872e70026da82124d1ba2326562 - https://bugs.launchpad.net/neutron/+bug/1933026 - stack.sh fail with ovs_source: No such file or directory - CI of some project fails with: "/opt/stack/neutron/devstack/plugin.sh: line 23: /opt/stack/devstack/lib/neutron_plugins/ovs_source: No such file or directory" after https://review.opendev.org/c/openstack/neutron/+/793470 is merged. - *Unassigned* Medium ----------- - Low ------ - https://bugs.launchpad.net/neutron/+bug/1932016 - Quality of Service (QoS) in Neutron, error in permissions - Doc fix: https://review.opendev.org/c/openstack/neutron/+/796457 - https://bugs.launchpad.net/neutron/+bug/1932373 - DB migration is interrupted and next execution will fail - Upgrade from Rocky to Victoria fails on db migration, and traceback points to binding index ( https://review.opendev.org/c/openstack/neutron/+/692285 ) but I can't reproduce the issue RFE ------- - https://bugs.launchpad.net/neutron/+bug/1931953 - [RFE] Openflow-based DVR L3 - Agreement on Drivers meeting was that the final goal should be to replace current DVR solution to have only one driver maintained. - https://bugs.launchpad.net/neutron/+bug/1932154 - [rfe] Off-path SmartNIC Port Binding - spec: https://review.opendev.org/c/openstack/neutron-specs/+/788821 Won't fix ------------ - https://bugs.launchpad.net/neutron/+bug/1931844 - Can't ping router with packet size greater than 1476 when ovs datapath_type set to netdev - "ovs datapath_type netdev" should only be used for VM, neutron router related virtual network devices are not compatible with it -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Jun 21 09:18:52 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 21 Jun 2021 11:18:52 +0200 Subject: [interop] Preping up for June Board meeting In-Reply-To: References: Message-ID: Arkady Kanevsky wrote: > Team, > as I prepare for the June OIF board meeting we need to do 3 things: > 1. Need review and land > https://review.opendev.org/c/osf/interop/+/796312 > to remove > requirement that one of Interop WG co-chair is from the board and > nominated by the board. > 2. Need to review and land > https://review.opendev.org/c/osf/interop/+/784622 > that has been > discussed several times and is needed to match > https://opendev.org/osf/interop/src/branch/master/doc/source/process/2021A.rst > . > 3. Final comments on presentation slides for the board - > https://docs.google.com/presentation/d/1-9H1cTXZxW0vCSTzfBe0aMKbd7nggd8SOHQOT987nFs/ > > > We need to complete it by Monday June 21 so I can send it to the board 1 > week before the meeting. Approved both reviews. 
One correction required on slide 3: The part in red says: "Approval Committee consisting of Interop WG, Refstack, TC and Foundation Marketplace Manager." This is incorrect as the Foundation staff involved would be the Marketplace product owner (Wes Wilson). The bullet point under that is correct. Since details are given in the bullet points, I propose the red part to just say: "Approval Committee consisting of Interop WG, Refstack, TC and Foundation staff." -- Thierry Carrez From thierry at openstack.org Mon Jun 21 09:34:19 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 21 Jun 2021 11:34:19 +0200 Subject: [largescale-sig] Next meeting: June 23, 15utc on #openstack-operators Message-ID: <21ffcd3a-eea8-6aaa-82cf-c6dc71fe03d0@openstack.org> Hi everyone, Our next Large Scale SIG meeting will be this Wednesday in #openstack-operators on OFTC IRC, at 15UTC. Please note the new channel to be used! You can doublecheck how that time translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210623T15 A number of topics have already been added to the agenda, including discussing our next OpenInfra.Live show. Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From christian.rohmann at inovex.de Mon Jun 21 10:39:13 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Mon, 21 Jun 2021 12:39:13 +0200 Subject: Ceph RADOSGW Keystone integration - S3 bucket policies targeting not just whole projects but particular users? Message-ID: Hallo Openstack-Users, (this is somewhat of a cross-port with the ceph-users ML, I just did not know where to ask about this best) I've been wondering about the state of OpenStack Keystone Auth in RADOSGW, especially in regards to the abilities to utilize bucket policies restricting access to only those users and only those objects which are required. 1) Even though the general documentation on RADOSGW S3 bucket policies is a little "misleading" https://docs.ceph.com/en/latest/radosgw/bucketpolicy/#creation-and-removal in showing users being referred as Principal, the documentation about Keystone integration at https://docs.ceph.com/en/latest/radosgw/keystone/#integrating-with-openstack-keystone clearly states, that "A Ceph Object Gateway user is mapped into a Keystone "||. In the keystone authentication code it strictly only takes the project from the authenticating user:  * https://github.com/ceph/ceph/blob/6ce6874bae8fbac8921f0bdfc3931371fc61d4ff/src/rgw/rgw_auth_keystone.cc#L127  * https://github.com/ceph/ceph/blob/6ce6874bae8fbac8921f0bdfc3931371fc61d4ff/src/rgw/rgw_auth_keystone.cc#L515 This is rather unfortunate as this renders the usually powerful S3 bucket policies to be rather basic with granting access to all users (with a certain role) of a project or more importantly all users of another project / tenant, as in using   arn:aws:iam::$OS_REMOTE_PROJECT_ID:root as principal. Or am I just misreading anything here or is this really all that can be done if using native keystone auth? Apparently I was not the only one wondering ... 
https://lists.ceph.io/hyperkitty/list/ceph-users at ceph.io/thread/7MXUZ63DEH7EQIZNXOYGZ5QDJ36EATYO/ 2) There is a PR open implementing generic external authentication https://github.com/ceph/ceph/pull/34093 Apparently this seems to also address the lack of support for subusers for Keystone - if I understand this correctly I could then grant access to users   arn:aws:iam::$OS_REMOTE_PROJECT_ID:$user * Are there any plans on the roadmap to extend the functionality in regards to keystone as authentication backend? * Is anybody using another (custom) solution to allow a more fine-grained user and access management when utilizing Ceph for their object storage? Are you potentially not using Keystone directly and use a central database such as an LDAP and have Ceph and Keystone use that independently? Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Jun 21 11:55:31 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 21 Jun 2021 06:55:31 -0500 Subject: [glance] nominating Cyril Roelandt for glance core In-Reply-To: References: Message-ID: <20210621115531.GA59704@sm-workstation> On Thu, Jun 17, 2021 at 08:22:29PM +0530, Abhishek Kekane wrote: > Hi All, > > I am nominating Cyril Roelandt (cyril-roelandt LP and Steap on IRC) to > be a Glance core. Cyril has been around the Glance community for > a long time and is familiar with the architecture and design patterns > used in Glance and its related projects. He's contributed code, > triaged bugs, provided bug fixes, and did quality reviews for Glance. He > is also helping me in reducing our bug backlogs. > > Considering the current situation with the project, however, it would be > an enormous help to have someone as knowledgeable about Glance as Cyril > to have +2 abilities. I discussed this with cyril, he's agreed to be a > core reviewer. > > In any case, I'd like to put Cyril to work as soon as possible! So > please reply to this message with comments or concerns before 23:59 > UTC on Monday 21 June. I'd like to confirm Cyril as a core on Tuesday 22 > June. > > Thanks and Regards, > > Abhishek +2 - Cyril would be great to have! From smooney at redhat.com Mon Jun 21 11:57:08 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 21 Jun 2021 12:57:08 +0100 Subject: [ops][nova][spice][victoria] Switch from VNC to SPICE In-Reply-To: <0670B960225633449A24709C291A5252511E1152@COM01.performair.local> References: <0670B960225633449A24709C291A5252511E0B23@COM01.performair.local> <7136e3401a5587cef034b81b971406481966fef5.camel@redhat.com> <0670B960225633449A24709C291A5252511E0EF0@COM01.performair.local> <0670B960225633449A24709C291A5252511E1152@COM01.performair.local> Message-ID: <7df660943b94da07ca7ca295a3e4d74c51099c1c.camel@redhat.com> On Fri, 2021-06-18 at 20:44 +0000, DHilsbos at performair.com wrote: > We're trying to virtualize desktops for remote workers.  Guests will > be Windows.  Remote connection method will be GoToMyPC.  As such, I > need the OS to believe either 1) it has 2 1920 x 1080 monitors, or 2) > it has a single 3840 x 1080 monitor.  Neither of which have I found a > way to do in VNC. i am not sure the resoltution that rdp or whatever GoToMyPC uses will be affected by the use of vnc or spcice on the host. I think this is related to the video model used in the guets unless GoToMyPC is running on the host and connecting the the qemu process. 
thats not normally something we would recommend but if its directly conecting to the qemu process then the use of vnc or spice might be a factor have you tried changing the video model to virtio for example, that should enabel more resolutions in the guest. i know that usign RDP in a windows guest with hw_video_model=virtio in the image does allow higher resolutions and i think it enables multi monitor support too. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com  > www.PerformAir.com > > From: Laurent Dumont [mailto:laurentfdumont at gmail.com] > Sent: Friday, June 18, 2021 12:39 PM > To: Dominic Hilsbos > Cc: stephenfin at redhat.com; openstack-discuss > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > Are you using the Openstack console itself? You might have more > flexibility with a dedicated VNC server inside the VM and connect > directly to it. So you would not be tied to the Openstack VNC support > which I dont think was ever designed for graphical usage. More of a > "it's 2AM, server is crashed and I need a way in!". > > On Fri, Jun 18, 2021 at 1:58 PM wrote: > Stephen; > > Thank you for the information. > > What's the replacement?  Is VNC getting improvements to allow higher > resolutions, and multi-monitor, in the guest? > > We've already decided to transition our OpenStack cluster away from > CentOS, as RDO doesn't package some of the OpenStack projects we'd > like to use, and RedHat has lost our trust. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com  > www.PerformAir.com > > > -----Original Message----- > From: Stephen Finucane [mailto:stephenfin at redhat.com] > Sent: Friday, June 18, 2021 10:33 AM > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > On Fri, 2021-06-18 at 16:53 +0000, DHilsbos at performair.com wrote: > > All; > > > > We have a Victoria cluster, and I'd like to switch from VNC to > > SPICE.  Cluster is installed with packages (RDO), and configured > > manually. > > > > I have located > > https://docs.openstack.org/nova/victoria/admin/remote-console-access.html > > , but this doesn't tell me which services need to be installed on > > which servers. > > > > Looking at packages, I'm fairly certain nova-spicehtml5proxy > > (openstack-nova-spicehtml5proxy on CentOS 8) needs to be installed > > where the nova-novncproxy is currently. > > > > I also suspect that qemu-kvm-ui-spice needs to be installed on the > > nova-compute nodes.  Is spice-server needed on the nova-compute > > nodes? > > Not an answer, but I'd be very careful about building solutions based > on SPICE. > It has been deprecated in RHEL 8.3 and recent versions of Fedora and > is slated > for removal in RHEL 9, as this bug [1] points out. It is also > receives very > little attention in nova as some deployments tooling (such as Red Hat > OSP) has > not supported it for some time. There's a non-zero chance support for > this > console type will be dropped entirely in some future release. > > Stephen > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1946938 > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Vice President - Information Technology > > Perform Air International Inc. 
> > DHilsbos at PerformAir.com  > > www.PerformAir.com > > > > > > > > From hberaud at redhat.com Mon Jun 21 11:57:07 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 21 Jun 2021 13:57:07 +0200 Subject: [cinder][kolla][OpenstackAnsible] Wallaby Cycle-Trailing Release Deadline Message-ID: Hello teams with trailing projects, Next week is the Wallaby cycle-trailing release deadline [1], and all projects following the cycle-trailing release model must release their Wallaby deliverables by 02 July, 2021. The following trailing projects haven't been released yet for Wallaby (aside the release candidates versions). Cinder team's deliverables: - cinderlib OSA team's deliverables: - openstack-ansible-roles - openstack-ansible Kolla team's deliverables: - kayobe - kolla - kolla-ansible This is just a friendly reminder to allow you to release these projects in time. Do not hesitate to ping us if you have any questions or concerns. [1] https://releases.openstack.org/xena/schedule.html#x-cycle-trail -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Jun 21 12:49:18 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 21 Jun 2021 12:49:18 +0000 Subject: [infra][cyborg] cyborg-tempest-plugin test failed due to the zuul server has no accelerators In-Reply-To: References: Message-ID: <20210621124918.quowbojuwnnnqmh2@yuggoth.org> On 2021-06-21 02:25:51 +0000 (+0000), Alex Song (宋文平) wrote: [...] > Is the Zuul server env changed recently? There are no accelerators > and our cyborg-tempest-plugin test failed. [...] Please provide a link to an example build which is failing and was previously succeeding. There's not enough information in your message to begin trying to troubleshoot whatever situation you're running into, and I'd rather not guess. An example will tell us what node type you've configured and whether it's some sort of specialty flavor at one of our providers, for example ubuntu-bionic-gpu, since our standard labels don't guarantee the presence (nor absence) of any specialized accelerator hardware. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dms at danplanet.com Mon Jun 21 13:10:33 2021 From: dms at danplanet.com (Dan Smith) Date: Mon, 21 Jun 2021 06:10:33 -0700 Subject: [glance] nominating Cyril Roelandt for glance core In-Reply-To: (Abhishek Kekane's message of "Thu, 17 Jun 2021 20:22:29 +0530") References: Message-ID: > Hi All, > > I am nominating Cyril Roelandt (cyril-roelandt LP and Steap on IRC) to > be a Glance core. Cyril has been around the Glance community for > a long time and is familiar with the architecture and design patterns > used in Glance and its related projects. He's contributed code, > triaged bugs, provided bug fixes, and did quality reviews for Glance. He > is also helping me in reducing our bug backlogs. > > Considering the current situation with the project, however, it would be > an enormous help to have someone as knowledgeable about Glance as Cyril > to have +2 abilities. I discussed this with cyril, he's agreed to be a > core reviewer. > > In any case, I'd like to put Cyril to work as soon as possible! So > please reply to this message with comments or concerns before 23:59 > UTC on Monday 21 June. I'd like to confirm Cyril as a core on Tuesday 22 June. Very glad to have Cyril help out. +1 from me. --Dan From akanevsk at redhat.com Mon Jun 21 13:51:51 2021 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Mon, 21 Jun 2021 08:51:51 -0500 Subject: [interop] Preping up for June Board meeting In-Reply-To: References: Message-ID: Thanks Thierry. I had updated slide 3 per your suggestion. Thanks, On Mon, Jun 21, 2021 at 4:18 AM Thierry Carrez wrote: > Arkady Kanevsky wrote: > > Team, > > as I prepare for the June OIF board meeting we need to do 3 things: > > 1. Need review and land > > https://review.opendev.org/c/osf/interop/+/796312 > > to remove > > requirement that one of Interop WG co-chair is from the board and > > nominated by the board. > > 2. Need to review and land > > https://review.opendev.org/c/osf/interop/+/784622 > > that has been > > discussed several times and is needed to match > > > https://opendev.org/osf/interop/src/branch/master/doc/source/process/2021A.rst > > < > https://opendev.org/osf/interop/src/branch/master/doc/source/process/2021A.rst > >. > > 3. Final comments on presentation slides for the board - > > > https://docs.google.com/presentation/d/1-9H1cTXZxW0vCSTzfBe0aMKbd7nggd8SOHQOT987nFs/ > > < > https://docs.google.com/presentation/d/1-9H1cTXZxW0vCSTzfBe0aMKbd7nggd8SOHQOT987nFs/ > > > > > > We need to complete it by Monday June 21 so I can send it to the board 1 > > week before the meeting. > > Approved both reviews. One correction required on slide 3: > > The part in red says: "Approval Committee consisting of Interop WG, > Refstack, TC and Foundation Marketplace Manager." > > This is incorrect as the Foundation staff involved would be the > Marketplace product owner (Wes Wilson). The bullet point under that is > correct. > > Since details are given in the bullet points, I propose the red part to > just say: "Approval Committee consisting of Interop WG, Refstack, TC and > Foundation staff." > > -- > Thierry Carrez > > -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdulko at redhat.com Mon Jun 21 14:17:48 2021 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Mon, 21 Jun 2021 16:17:48 +0200 Subject: [tc][all] Test support for TLS default In-Reply-To: References: Message-ID: <53b2006e30334f8c7562a39fc12cedaa8cfb666c.camel@redhat.com> On Fri, 2021-06-11 at 01:35 +0800, Rico Lin wrote: > > Dear all > In short, > can you help to enable tls-proxy for your test jobs and fix/report > the issue in [4]? Or it makes no sense for you? > Here's all repositories contains jobs with tls-proxy disabled: >  * neutron >  * neutron-tempest-plugin >  * cinder-tempest-plugin >  * cyborg-tempest-plugin >  * ec2api-tempest-plugin >  * freezer-tempest-plugin >  * grenade >  * heat >  * js-openstack-lib >  * keystone >  * kuryr-kubernetes >  * masakari >  * murano >  * networking-odl >  * networking-sfc >  * python-brick-cinderclient-ext >  * python-neutronclient >  * python-zaqarclient >  * sahara >  * sahara-dashboard >  * sahara-tests >  * solum >  * tacker >  * telemetry-tempest-plugin >  * trove >  * trove-tempest-plugin >  * vitrage-tempest-plugin >  * watcher > As I'm looking for y-cycle potential goals, I found the tls-proxy > support is not actually ready OpenStack wide (you can find some > discussion in [3]).We have multiple projects that disable tls-proxy > in test jobs [1] (and stay that way for a long time). > For security concerns, I'm currently collecting the missing part for > this. And try to figure out if there is any infra issue for current > jobs. > After I attempt to enable tls-proxy for some projects to check the > status. > And from the test result shows ([2]), We might have bugs/test infra > issues in projects. > So I invite projects who still have not switched to TLS default. > Please do, and help to fix/report the issue you're facing. > As we definitely need some more help on figuring out the actual > situation on each project. > So I created an etherpad [4] to track actions or related information. > > Meanwhile, I will attempt to enable tls-proxy on more test jobs (and > you will be able to find it in [2]). Which gives us a good chance to > review the logs and see how we might get chances to fix it and enable > TLS by default. Hi, In kuryr-kubernetes we deliberately disable tls-proxy on multinode gate as I'm not sure how the certificates are shared between the controller and the subnode. Can you elaborate on that? > [1] > https://codesearch.opendev.org/?q=tls-proxy%3A%20false&i=nope&files=&excludeFiles=&repos= > [2]  > https://review.opendev.org/q/topic:%22exame-tls-proxy%22+(status:open%20OR%20status:merged) > [3] https://etherpad.opendev.org/p/community-goals > [4] https://etherpad.opendev.org/p/support-tls-default > > Rico LinOIF Board director, OpenStack TC, Multi-arch SIG chair, Heat > PTL,  > Senior Software Engineer at EasyStack From xin-ran.wang at intel.com Mon Jun 21 15:21:59 2021 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Mon, 21 Jun 2021 15:21:59 +0000 Subject: [infra][cyborg] cyborg-tempest-plugin test failed due to the zuul server has no accelerators In-Reply-To: <20210621124918.quowbojuwnnnqmh2@yuggoth.org> References: <20210621124918.quowbojuwnnnqmh2@yuggoth.org> Message-ID: Hi Jeremy, Thanks for your reply, I have drafted a doc[1] describing the issue we met, could you please help check it. We didn't specify any label before as I remembered, it seems we do need that now. Could you please check the document, and is there any guidance to help us to add such label for cyborg tempest plugin? 
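For example, would our job definition need a nodeset along these lines? (the label name is only my guess based on the example label you mentioned, and the parent job is just for illustration):

- job:
    name: cyborg-tempest
    parent: devstack-tempest        # illustrative parent only
    nodeset:
      nodes:
        - name: controller
          label: ubuntu-bionic-gpu  # hypothetical accelerator-capable label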
Thanks in advance. [1] https://docs.google.com/document/d/1dP3s24VugOb5ppvcO-sDkFL6Svzetxk7aiaul3ay7L0/edit?usp=sharing Thanks, Xin-Ran -----Original Message----- From: Jeremy Stanley Sent: Monday, June 21, 2021 8:49 PM To: openstack-discuss at lists.openstack.org Subject: Re: [infra][cyborg] cyborg-tempest-plugin test failed due to the zuul server has no accelerators On 2021-06-21 02:25:51 +0000 (+0000), Alex Song (宋文平) wrote: [...] > Is the Zuul server env changed recently? There are no accelerators and > our cyborg-tempest-plugin test failed. [...] Please provide a link to an example build which is failing and was previously succeeding. There's not enough information in your message to begin trying to troubleshoot whatever situation you're running into, and I'd rather not guess. An example will tell us what node type you've configured and whether it's some sort of specialty flavor at one of our providers, for example ubuntu-bionic-gpu, since our standard labels don't guarantee the presence (nor absence) of any specialized accelerator hardware. -- Jeremy Stanley From hberaud at redhat.com Mon Jun 21 15:36:54 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 21 Jun 2021 17:36:54 +0200 Subject: [oslo] TaCT SIG volunteer Message-ID: Hi everyone, We are looking for volunteers to endorse the TaCT SIG liaison role for Oslo. Our previous volunteers are no longer active. Indeed, Moisés officially announced his retirement from Oslo two weeks ago, and Sébastien did not attend our meetings for some time. Here are the details about this role: https://governance.openstack.org/sigs/tact-sig.html Let us know if you're interested in this role, by replying to this thread or by joining us during our meetings (next meeting: July 05). Thanks for your attention. -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Mon Jun 21 16:46:21 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 22 Jun 2021 01:46:21 +0900 Subject: [neutron] Bug deputy report for week of June 14th In-Reply-To: References: Message-ID: There are some updates after the bug deputy report was sent. 
On Mon, Jun 21, 2021 at 6:04 PM Lajos Katona wrote: > > Hi, > > I was last week's bug deputy for Neutron, Below is a short summary of last week's bugs: > > Critical > --------- > > https://bugs.launchpad.net/neutron/+bug/1932093 > > "oslo_config.cfg.DuplicateOptError: duplicate option: host" using OVN Octavia provider on stable/train > https://review.opendev.org/c/openstack/networking-ovn/+/796517 > > https://bugs.launchpad.net/neutron/+bug/1932483 > > CI neutron.tests.functional.services.l3_router.test_l3_dvr get failed frequently > Unassigned https://review.opendev.org/c/openstack/neutron/+/797280 from Slawek will fix it. > > High > ------- > > https://bugs.launchpad.net/neutron/+bug/1932421 > > ovn-neutron-db-sync deletes legitimate metadata ports > https://review.opendev.org/q/I78673b6a85f1c872e70026da82124d1ba2326562 > > https://bugs.launchpad.net/neutron/+bug/1933026 > > stack.sh fail with ovs_source: No such file or directory > CI of some project fails with: "/opt/stack/neutron/devstack/plugin.sh: line 23: /opt/stack/devstack/lib/neutron_plugins/ovs_source: No such file or directory" after https://review.opendev.org/c/openstack/neutron/+/793470 is merged. > Unassigned https://review.opendev.org/c/openstack/neutron/+/797128 will fix this, but we need to wait till https://review.opendev.org/c/openstack/neutron/+/797280 lands as neutron-functional-with-uwsgi is failing now. > > Medium > ----------- > - > Low > ------ > > https://bugs.launchpad.net/neutron/+bug/1932016 > > Quality of Service (QoS) in Neutron, error in permissions > Doc fix: https://review.opendev.org/c/openstack/neutron/+/796457 > > https://bugs.launchpad.net/neutron/+bug/1932373 > > DB migration is interrupted and next execution will fail > Upgrade from Rocky to Victoria fails on db migration, and traceback points to binding index (https://review.opendev.org/c/openstack/neutron/+/692285 ) but I can't reproduce the issue > > RFE > ------- > > https://bugs.launchpad.net/neutron/+bug/1931953 > > [RFE] Openflow-based DVR L3 > Agreement on Drivers meeting was that the final goal should be to replace current DVR solution to have only one driver maintained. > > https://bugs.launchpad.net/neutron/+bug/1932154 > > [rfe] Off-path SmartNIC Port Binding > spec: https://review.opendev.org/c/openstack/neutron-specs/+/788821 > > Won't fix > ------------ > > https://bugs.launchpad.net/neutron/+bug/1931844 > > Can't ping router with packet size greater than 1476 when ovs datapath_type set to netdev > "ovs datapath_type netdev" should only be used for VM, neutron router related virtual network devices are not compatible with it > > > From gmann at ghanshyammann.com Mon Jun 21 17:31:17 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 21 Jun 2021 12:31:17 -0500 Subject: [interop] Preping up for June Board meeting In-Reply-To: References: Message-ID: <17a2fa06077.d842378a112545.1798536295034269874@ghanshyammann.com> Thanks Arkday for preparing this, one slide to fix and rest all lgtm: - slide#5 "Tests are in Tempest or project’s tempest.conf" - Test are present in tempest plugins not tempest.conf, you can change this to "Tests are in Tempest or Tempest Plugins" -gmann ---- On Mon, 21 Jun 2021 08:51:51 -0500 Arkady Kanevsky wrote ---- > Thanks Thierry.I had updated slide 3 per your suggestion.Thanks, > On Mon, Jun 21, 2021 at 4:18 AM Thierry Carrez wrote: > Arkady Kanevsky wrote: > > Team, > > as I prepare for the June OIF board meeting we need to do 3 things: > > 1. 
Need review and land > > https://review.opendev.org/c/osf/interop/+/796312 > > to remove > > requirement that one of Interop WG co-chair is from the board and > > nominated by the board. > > 2. Need to review and land > > https://review.opendev.org/c/osf/interop/+/784622 > > that has been > > discussed several times and is needed to match > > https://opendev.org/osf/interop/src/branch/master/doc/source/process/2021A.rst > > . > > 3. Final comments on presentation slides for the board - > > https://docs.google.com/presentation/d/1-9H1cTXZxW0vCSTzfBe0aMKbd7nggd8SOHQOT987nFs/ > > > > > > We need to complete it by Monday June 21 so I can send it to the board 1 > > week before the meeting. > > Approved both reviews. One correction required on slide 3: > > The part in red says: "Approval Committee consisting of Interop WG, > Refstack, TC and Foundation Marketplace Manager." > > This is incorrect as the Foundation staff involved would be the > Marketplace product owner (Wes Wilson). The bullet point under that is > correct. > > Since details are given in the bullet points, I propose the red part to > just say: "Approval Committee consisting of Interop WG, Refstack, TC and > Foundation staff." > > -- > Thierry Carrez > > > > -- > Arkady Kanevsky, Ph.D.Phone: 972 707-6456Corporate Phone: 919 729-5744 ext. 8176456 > From fungi at yuggoth.org Mon Jun 21 18:19:24 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 21 Jun 2021 18:19:24 +0000 Subject: [infra][cyborg] cyborg-tempest-plugin test failed due to the zuul server has no accelerators In-Reply-To: References: <20210621124918.quowbojuwnnnqmh2@yuggoth.org> Message-ID: <20210621181923.krqu7zbyjqmp7scg@yuggoth.org> On 2021-06-21 15:21:59 +0000 (+0000), Wang, Xin-ran wrote: > Thanks for your reply, I have drafted a doc[1] describing the > issue we met, could you please help check it. We didn't specify > any label before as I remembered, it seems we do need that now. > > Could you please check the document, and is there any guidance to > help us to add such label for cyborg tempest plugin? [...] I managed to work out how to open the Google Doc, but the information there just shows a Python traceback for a specific Tempest test, which doesn't really help me to understand which Zuul job is failing to build. Can you please point to a failing build of a job so we can find out what nodeset or node label it's trying to use? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From lucioseki at gmail.com Mon Jun 21 18:20:13 2021 From: lucioseki at gmail.com (Lucio Seki) Date: Mon, 21 Jun 2021 15:20:13 -0300 Subject: [cinder] Stepping down from cinder core Message-ID: Hi, I've been totally absent from the community in the last cycle, and unfortunately I'm not able to review Cinder patches in the next few cycles either. For this reason, I'm stepping down from the core team. Thank you all for the patience and attention to teach me so many things. It was a great opportunity for me to learn and grow professionally and personally. Hopefully sometime in the future I'll come back to the community again. Kind regards and stay safe, Lucio Seki -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jungleboyj at gmail.com Mon Jun 21 18:34:42 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 21 Jun 2021 13:34:42 -0500 Subject: [cinder] Stepping down from cinder core In-Reply-To: References: Message-ID: <9463b852-45c3-166a-4660-4bcc0a1fb24c@gmail.com> Lucio, Thank you for your efforts in the past.  They were greatly appreciated! Sorry to see you go, but hope we will continue to see you in the community! Best wishes, Jay On 6/21/2021 1:20 PM, Lucio Seki wrote: > Hi, > > I've been totally absent from the community in the last cycle, and > unfortunately I'm not able to review Cinder patches in the next few > cycles either. > For this reason, I'm stepping down from the core team. > > Thank you all for the patience and attention to teach me so many things. > It was a great opportunity for me to learn and grow professionally and > personally. > > Hopefully sometime in the future I'll come back to the community again. > > Kind regards and stay safe, > Lucio Seki From ignaziocassano at gmail.com Mon Jun 21 19:06:29 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 21 Jun 2021 21:06:29 +0200 Subject: [Wallaby][kolla][masakari] hacluser issues Message-ID: Hello everyone, I wonder if this is a good place for discussing about some issues I am facing with instance HA with masakari hacluster kolla wallaby. Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From akanevsk at redhat.com Mon Jun 21 19:20:14 2021 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Mon, 21 Jun 2021 14:20:14 -0500 Subject: [interop] Preping up for June Board meeting In-Reply-To: <17a2fa06077.d842378a112545.1798536295034269874@ghanshyammann.com> References: <17a2fa06077.d842378a112545.1798536295034269874@ghanshyammann.com> Message-ID: Thanks. Updated On Mon, Jun 21, 2021 at 12:31 PM Ghanshyam Mann wrote: > Thanks Arkday for preparing this, one slide to fix and rest all lgtm: > > - slide#5 "Tests are in Tempest or project’s tempest.conf" - Test are > present in tempest plugins not tempest.conf, you can change this to > "Tests are in Tempest or Tempest Plugins" > > -gmann > > ---- On Mon, 21 Jun 2021 08:51:51 -0500 Arkady Kanevsky < > akanevsk at redhat.com> wrote ---- > > Thanks Thierry.I had updated slide 3 per your suggestion.Thanks, > > On Mon, Jun 21, 2021 at 4:18 AM Thierry Carrez > wrote: > > Arkady Kanevsky wrote: > > > Team, > > > as I prepare for the June OIF board meeting we need to do 3 things: > > > 1. Need review and land > > > https://review.opendev.org/c/osf/interop/+/796312 > > > to remove > > > requirement that one of Interop WG co-chair is from the board and > > > nominated by the board. > > > 2. Need to review and land > > > https://review.opendev.org/c/osf/interop/+/784622 > > > that has been > > > discussed several times and is needed to match > > > > https://opendev.org/osf/interop/src/branch/master/doc/source/process/2021A.rst > > > < > https://opendev.org/osf/interop/src/branch/master/doc/source/process/2021A.rst > >. > > > 3. Final comments on presentation slides for the board - > > > > https://docs.google.com/presentation/d/1-9H1cTXZxW0vCSTzfBe0aMKbd7nggd8SOHQOT987nFs/ > > > < > https://docs.google.com/presentation/d/1-9H1cTXZxW0vCSTzfBe0aMKbd7nggd8SOHQOT987nFs/ > > > > > > > > We need to complete it by Monday June 21 so I can send it to the > board 1 > > > week before the meeting. > > > > Approved both reviews. 
One correction required on slide 3: > > > > The part in red says: "Approval Committee consisting of Interop WG, > > Refstack, TC and Foundation Marketplace Manager." > > > > This is incorrect as the Foundation staff involved would be the > > Marketplace product owner (Wes Wilson). The bullet point under that is > > correct. > > > > Since details are given in the bullet points, I propose the red part to > > just say: "Approval Committee consisting of Interop WG, Refstack, TC > and > > Foundation staff." > > > > -- > > Thierry Carrez > > > > > > > > -- > > Arkady Kanevsky, Ph.D.Phone: 972 707-6456Corporate Phone: 919 729-5744 > ext. 8176456 > > > > -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Mon Jun 21 20:57:45 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Mon, 21 Jun 2021 20:57:45 +0000 Subject: [ops][victoria][nova] Add hardware to guest (was: [ops][nova][spice][victoria] Switch from VNC to SPICE) Message-ID: <0670B960225633449A24709C291A5252511E4D53@COM01.performair.local> Just realized that I sent this to Sean, but not the list... Sean; Thank you for that pointer to hw_video_model=virtio, that addressed the resolution issue! Any chance you could point me at a metadata option that would tell libvirt to provide 2 monitor devices? It looks like I need to send the max_outputs=2 to qemu, or libvirt or directly to virtio-vga (I'm not sure which) [1], though I don't see a way to pass that through in nova.virt.libvirt.driver [2]. As to how GoToMyPC works... I don't know, but I believe it captures the framebuffer. [1] https://www.kraxel.org/blog/2019/09/display-devices-in-qemu/ [2] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py Thank you, Dominic L. Hilsbos, MBA Vice President – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Sean Mooney [mailto:smooney at redhat.com] Sent: Monday, June 21, 2021 4:57 AM To: Dominic Hilsbos; laurentfdumont at gmail.com; openstack-discuss at lists.openstack.org Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE On Fri, 2021-06-18 at 20:44 +0000, DHilsbos at performair.com wrote: > We're trying to virtualize desktops for remote workers.  Guests will > be Windows.  Remote connection method will be GoToMyPC.  As such, I > need the OS to believe either 1) it has 2 1920 x 1080 monitors, or 2) > it has a single 3840 x 1080 monitor.  Neither of which have I found a > way to do in VNC. i am not sure the resoltution that rdp or whatever GoToMyPC uses will be affected by the use of vnc or spcice on the host. I think this is related to the video model used in the guets unless GoToMyPC is running on the host and connecting the the qemu process. thats not normally something we would recommend but if its directly conecting to the qemu process then the use of vnc or spice might be a factor have you tried changing the video model to virtio for example, that should enabel more resolutions in the guest. i know that usign RDP in a windows guest with hw_video_model=virtio in the image does allow higher resolutions and i think it enables multi monitor support too. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. 
> DHilsbos at PerformAir.com  > www.PerformAir.com > > From: Laurent Dumont [mailto:laurentfdumont at gmail.com] > Sent: Friday, June 18, 2021 12:39 PM > To: Dominic Hilsbos > Cc: stephenfin at redhat.com; openstack-discuss > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > Are you using the Openstack console itself? You might have more > flexibility with a dedicated VNC server inside the VM and connect > directly to it. So you would not be tied to the Openstack VNC support > which I dont think was ever designed for graphical usage. More of a > "it's 2AM, server is crashed and I need a way in!". > > On Fri, Jun 18, 2021 at 1:58 PM wrote: > Stephen; > > Thank you for the information. > > What's the replacement?  Is VNC getting improvements to allow higher > resolutions, and multi-monitor, in the guest? > > We've already decided to transition our OpenStack cluster away from > CentOS, as RDO doesn't package some of the OpenStack projects we'd > like to use, and RedHat has lost our trust. > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com  > www.PerformAir.com > > > -----Original Message----- > From: Stephen Finucane [mailto:stephenfin at redhat.com] > Sent: Friday, June 18, 2021 10:33 AM > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > On Fri, 2021-06-18 at 16:53 +0000, DHilsbos at performair.com wrote: > > All; > > > > We have a Victoria cluster, and I'd like to switch from VNC to > > SPICE.  Cluster is installed with packages (RDO), and configured > > manually. > > > > I have located > > https://docs.openstack.org/nova/victoria/admin/remote-console-access.html > > , but this doesn't tell me which services need to be installed on > > which servers. > > > > Looking at packages, I'm fairly certain nova-spicehtml5proxy > > (openstack-nova-spicehtml5proxy on CentOS 8) needs to be installed > > where the nova-novncproxy is currently. > > > > I also suspect that qemu-kvm-ui-spice needs to be installed on the > > nova-compute nodes.  Is spice-server needed on the nova-compute > > nodes? > > Not an answer, but I'd be very careful about building solutions based > on SPICE. > It has been deprecated in RHEL 8.3 and recent versions of Fedora and > is slated > for removal in RHEL 9, as this bug [1] points out. It is also > receives very > little attention in nova as some deployments tooling (such as Red Hat > OSP) has > not supported it for some time. There's a non-zero chance support for > this > console type will be dropped entirely in some future release. > > Stephen > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1946938 > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Vice President - Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com  > > www.PerformAir.com > > > > > > > > From rosmaita.fossdev at gmail.com Mon Jun 21 20:58:04 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 21 Jun 2021 16:58:04 -0400 Subject: [cinder] Stepping down from cinder core In-Reply-To: <9463b852-45c3-166a-4660-4bcc0a1fb24c@gmail.com> References: <9463b852-45c3-166a-4660-4bcc0a1fb24c@gmail.com> Message-ID: <2d64ec4d-3577-b933-1c9e-99aa51998105@gmail.com> On 6/21/21 2:34 PM, Jay Bryant wrote: > Lucio, > > Thank you for your efforts in the past.  They were greatly appreciated! > > Sorry to see you go, but hope we will continue to see you in the community! 
Agree completely with Jay in both respects. Hope to see you again in the cinder community not too far in the future! > > Best wishes, > > Jay > > > On 6/21/2021 1:20 PM, Lucio Seki wrote: >> Hi, >> >> I've been totally absent from the community in the last cycle, and >> unfortunately I'm not able to review Cinder patches in the next few >> cycles either. >> For this reason, I'm stepping down from the core team. >> >> Thank you all for the patience and attention to teach me so many things. >> It was a great opportunity for me to learn and grow professionally and >> personally. >> >> Hopefully sometime in the future I'll come back to the community again. >> >> Kind regards and stay safe, >> Lucio Seki > From zhangbailin at inspur.com Tue Jun 22 00:23:32 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Tue, 22 Jun 2021 00:23:32 +0000 Subject: =?gb2312?B?tPC4tDogW2xpc3RzLm9wZW5zdGFjay5vcme0+reiXVJlOiBbaW5mcmFdW2N5?= =?gb2312?B?Ym9yZ10gY3lib3JnLXRlbXBlc3QtcGx1Z2luIHRlc3QgZmFpbGVkIGR1ZSB0?= =?gb2312?Q?o_the_zuul_server_has_no_accelerators?= In-Reply-To: <20210621181923.krqu7zbyjqmp7scg@yuggoth.org> References: <2421b3cc85bfef36533bc72ce3e04d8a@sslemail.net> <20210621181923.krqu7zbyjqmp7scg@yuggoth.org> Message-ID: <365696d293454ca38b7abc198204c82e@inspur.com> Hi Jeremy Stanley, There is a patch you can check https://review.opendev.org/c/openstack/cyborg/+/790937 , tempest failed https://050bde8a54f119be7071-8157e9570cd7007a824b373cbf52d06c.ssl.cf2.rackcdn.com/790937/6/check/cyborg-tempest/82fd3ce/testr_results.html Recently there is always report "no valid host" when create an accelerator server, as below, that out of our control :(, """ tempest.exceptions.BuildErrorException: Server feef6015-5211-481b-813f-c5924cdf6931 failed to build and is in ERROR status Details: {'code': 500, 'created': '2021-06-21T01:13:52Z', 'message': 'No valid host was found. '} """ Thanks Jeremy. brinzhang Inspur Electronic Information Industry Co.,Ltd. -----邮件原件----- 发件人: Jeremy Stanley [mailto:fungi at yuggoth.org] 发送时间: 2021年6月22日 2:19 收件人: openstack-discuss at lists.openstack.org 主题: [lists.openstack.org代发]Re: [infra][cyborg] cyborg-tempest-plugin test failed due to the zuul server has no accelerators On 2021-06-21 15:21:59 +0000 (+0000), Wang, Xin-ran wrote: > Thanks for your reply, I have drafted a doc[1] describing the issue we > met, could you please help check it. We didn't specify any label > before as I remembered, it seems we do need that now. > > Could you please check the document, and is there any guidance to help > us to add such label for cyborg tempest plugin? [...] I managed to work out how to open the Google Doc, but the information there just shows a Python traceback for a specific Tempest test, which doesn't really help me to understand which Zuul job is failing to build. Can you please point to a failing build of a job so we can find out what nodeset or node label it's trying to use? -- Jeremy Stanley From suzhengwei at inspur.com Tue Jun 22 01:23:00 2021 From: suzhengwei at inspur.com (=?utf-8?B?U2FtIFN1ICjoi4/mraPkvJ8p?=) Date: Tue, 22 Jun 2021 01:23:00 +0000 Subject: =?utf-8?B?UmVwbHkgdG86IFtsaXN0cy5vcGVuc3RhY2sub3Jn5Luj5Y+RXVtXYWxsYWJ5?= =?utf-8?Q?][kolla][masakari]_hacluser_issues?= Message-ID: <24445872f2ae4b6994084a3bf968d6e9@inspur.com> Hi, Ignazio You can discuss about instance HA issue with others on the channel ‘openstack-masakari’in OFTC. They will have a weekly meeting this Tuesday, 4:00-5:00 UTC on the IRC channel. Welcomed. 
发件人: Ignazio Cassano [mailto:ignaziocassano at gmail.com] 发送时间: 2021年6月22日 3:06 收件人: openstack-discuss 主题: [lists.openstack.org代发][Wallaby][kolla][masakari] hacluser issues Hello everyone, I wonder if this is a good place for discussing about some issues I am facing with instance HA with masakari hacluster kolla wallaby. Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3606 bytes Desc: not available URL: From fungi at yuggoth.org Tue Jun 22 03:44:28 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 22 Jun 2021 03:44:28 +0000 Subject: [infra][cyborg] cyborg-tempest-plugin test failed due to the zuul server has no accelerators In-Reply-To: <365696d293454ca38b7abc198204c82e@inspur.com> References: <2421b3cc85bfef36533bc72ce3e04d8a@sslemail.net> <20210621181923.krqu7zbyjqmp7scg@yuggoth.org> <365696d293454ca38b7abc198204c82e@inspur.com> Message-ID: <20210622034427.zrqkocldbpqqankk@yuggoth.org> On 2021-06-22 00:23:32 +0000 (+0000), Brin Zhang(张百林) wrote: > There is a patch you can check > https://review.opendev.org/c/openstack/cyborg/+/790937 , tempest > failed > https://050bde8a54f119be7071-8157e9570cd7007a824b373cbf52d06c.ssl.cf2.rackcdn.com/790937/6/check/cyborg-tempest/82fd3ce/testr_results.html Thanks, that helps. The build history indicates that the job was succeeding for openstack/cyborg up through 2021-06-09 08:22 UTC, but was failing consistently as of 2021-06-10 09:26 UTC, so something probably changed in that 24 hour period to affect the job: https://zuul.opendev.org/t/openstack/builds?job_name=cyborg-tempest&project=openstack/cyborg Broadening that query to other projects, I can see it succeeded as recently as 2021-06-10 01:34 for an openstack/nova change in check. What's more interesting is that it's continuing to succeed consistently for stable branches, even stable/wallaby, just not master. Both the succeeding and failing builds for master ran on regular ubuntu-focal nodes in a number of different cloud providers which don't have any specialized accelerator hardware, so I have to assume what's changed has nothing to do with the underlying test environment. > Recently there is always report "no valid host" when create an > accelerator server, as below, that out of our control :(, > """ > tempest.exceptions.BuildErrorException: Server feef6015-5211-481b-813f-c5924cdf6931 failed to build and is in ERROR status > Details: {'code': 500, 'created': '2021-06-21T01:13:52Z', 'message': 'No valid host was found. '} > """ [...] This is when scheduling an accelerator within DevStack, right? Were you maybe using some sort of mock/fake accelerator for testing purposes? Because there wouldn't have been actual accelerators exposed to that environment even back when the job was still succeeding. Regardless, I suspect something merged early UTC on 2021-06-10 to the master branch of one of the services or tools with which Cyborg interacting to cause this error to begin appearing. The fact that the same job is running fine for stable/wallaby also indicates it's probably some behavior which hasn't been backported yet. Hopefully that helps narrow it down. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gagehugo at gmail.com Tue Jun 22 04:56:41 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 21 Jun 2021 23:56:41 -0500 Subject: [openstack-helm] No meeting tomorrow Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting tomorrow, the meeting is cancelled. Our next meeting will be June 29th. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Tue Jun 22 04:56:23 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 22 Jun 2021 10:26:23 +0530 Subject: [glance] nominating Cyril Roelandt for glance core In-Reply-To: References: Message-ID: Having heard only affirmative responses, I have added Cyril Roelandt to the Glance core group. Welcome to the Glance core team, Cyril! Thanks & Best Regards, Abhishek Kekane On Mon, Jun 21, 2021 at 6:40 PM Dan Smith wrote: > > Hi All, > > > > I am nominating Cyril Roelandt (cyril-roelandt LP and Steap on IRC) to > > be a Glance core. Cyril has been around the Glance community for > > a long time and is familiar with the architecture and design patterns > > used in Glance and its related projects. He's contributed code, > > triaged bugs, provided bug fixes, and did quality reviews for Glance. He > > is also helping me in reducing our bug backlogs. > > > > Considering the current situation with the project, however, it would be > > an enormous help to have someone as knowledgeable about Glance as Cyril > > to have +2 abilities. I discussed this with cyril, he's agreed to be a > > core reviewer. > > > > In any case, I'd like to put Cyril to work as soon as possible! So > > please reply to this message with comments or concerns before 23:59 > > UTC on Monday 21 June. I'd like to confirm Cyril as a core on Tuesday 22 > June. > > Very glad to have Cyril help out. +1 from me. > > --Dan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Jun 22 05:03:19 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Jun 2021 07:03:19 +0200 Subject: =?UTF-8?B?UmU6IFJlcGx5IHRvOiBbbGlzdHMub3BlbnN0YWNrLm9yZ+S7o+WPkV1bV2FsbGFieV1baw==?= =?UTF-8?B?b2xsYV1bbWFzYWthcmldIGhhY2x1c2VyIGlzc3Vlcw==?= In-Reply-To: <24445872f2ae4b6994084a3bf968d6e9@inspur.com> References: <24445872f2ae4b6994084a3bf968d6e9@inspur.com> Message-ID: Many thks, Sam Il Mar 22 Giu 2021, 03:24 Sam Su (苏正伟) ha scritto: > Hi, Ignazio > > You can discuss about instance HA issue with others on the > channel ‘openstack-masakari’in OFTC. > > They will have a weekly meeting this Tuesday, 4:00-5:00 UTC on > the IRC channel. > > Welcomed. > > *发件人:* Ignazio Cassano [mailto:ignaziocassano at gmail.com] > *发送时间:* 2021年6月22日 3:06 > *收件人:* openstack-discuss > *主题:* [lists.openstack.org代发][Wallaby][kolla][masakari] hacluser issues > > > > Hello everyone, I wonder if this is a good place for discussing about some > issues I am facing with instance HA with masakari hacluster kolla wallaby. > > Thanks > > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Tue Jun 22 06:12:34 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 22 Jun 2021 08:12:34 +0200 Subject: =?UTF-8?B?UmU6IFJlcGx5IHRvOiBbbGlzdHMub3BlbnN0YWNrLm9yZ+S7o+WPkV1bV2FsbGFieV1baw==?= =?UTF-8?B?b2xsYV1bbWFzYWthcmldIGhhY2x1c2VyIGlzc3Vlcw==?= In-Reply-To: <24445872f2ae4b6994084a3bf968d6e9@inspur.com> References: <24445872f2ae4b6994084a3bf968d6e9@inspur.com> Message-ID: Just a small correction - the meeting is 06:00-07:00 UTC. The current info can be found on the official website for meetings: https://meetings.opendev.org/#Masakari_Team_Meeting -yoctozepto On Tue, Jun 22, 2021 at 3:24 AM Sam Su (苏正伟) wrote: > > Hi, Ignazio > > You can discuss about instance HA issue with others on the channel ‘openstack-masakari’in OFTC. > > They will have a weekly meeting this Tuesday, 4:00-5:00 UTC on the IRC channel. > > Welcomed. > > 发件人: Ignazio Cassano [mailto:ignaziocassano at gmail.com] > 发送时间: 2021年6月22日 3:06 > 收件人: openstack-discuss > 主题: [lists.openstack.org代发][Wallaby][kolla][masakari] hacluser issues > > > > Hello everyone, I wonder if this is a good place for discussing about some issues I am facing with instance HA with masakari hacluster kolla wallaby. > > Thanks > > Ignazio From radoslaw.piliszek at gmail.com Tue Jun 22 06:16:27 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 22 Jun 2021 08:16:27 +0200 Subject: [Wallaby][kolla][masakari] hacluser issues In-Reply-To: References: Message-ID: Hi Ignazio, The place is perfect. We talked yesterday on IRC. It's likely better for asynchronous communication to use the discussion list for sure. I saw your question regarding logs: you should use other services and link the logs in your messages. Some people use oldschool pastebins, such as: https://paste.ubuntu.com/ others prefer gist: https://gist.github.com/ -yoctozepto On Mon, Jun 21, 2021 at 9:07 PM Ignazio Cassano wrote: > > Hello everyone, I wonder if this is a good place for discussing about some issues I am facing with instance HA with masakari hacluster kolla wallaby. > Thanks > Ignazio From ignaziocassano at gmail.com Tue Jun 22 06:22:46 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Jun 2021 08:22:46 +0200 Subject: [Wallaby][kolla][masakari] hacluser issues In-Reply-To: References: Message-ID: Hello Radoslaw, I am doing a clean installation. When it finish I will retry and I will post post logs on https://paste.ubuntu.com/ Thanks Ignazio Il giorno mar 22 giu 2021 alle ore 08:16 Radosław Piliszek < radoslaw.piliszek at gmail.com> ha scritto: > Hi Ignazio, > > The place is perfect. > We talked yesterday on IRC. > It's likely better for asynchronous communication to use the > discussion list for sure. > I saw your question regarding logs: you should use other services and > link the logs in your messages. > Some people use oldschool pastebins, such as: https://paste.ubuntu.com/ > others prefer gist: https://gist.github.com/ > > -yoctozepto > > On Mon, Jun 21, 2021 at 9:07 PM Ignazio Cassano > wrote: > > > > Hello everyone, I wonder if this is a good place for discussing about > some issues I am facing with instance HA with masakari hacluster kolla > wallaby. > > Thanks > > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ignaziocassano at gmail.com Tue Jun 22 08:12:47 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Jun 2021 10:12:47 +0200 Subject: [Wallaby][kolla][masakari] hacluser issues In-Reply-To: References: Message-ID: Hello Radoslaw, We made a clean install. At 10:02 we stopped a node and and the masakari host monitor reported some python error you can see at https://paste.ubuntu.com/p/84H7nmn7Bz/ Pacemaker and remote work fine . We can see remote online nodes with crm command and remote offline when we stopped the node. Thanks Ignazio Il giorno mar 22 giu 2021 alle ore 08:16 Radosław Piliszek < radoslaw.piliszek at gmail.com> ha scritto: > Hi Ignazio, > > The place is perfect. > We talked yesterday on IRC. > It's likely better for asynchronous communication to use the > discussion list for sure. > I saw your question regarding logs: you should use other services and > link the logs in your messages. > Some people use oldschool pastebins, such as: https://paste.ubuntu.com/ > others prefer gist: https://gist.github.com/ > > -yoctozepto > > On Mon, Jun 21, 2021 at 9:07 PM Ignazio Cassano > wrote: > > > > Hello everyone, I wonder if this is a good place for discussing about > some issues I am facing with instance HA with masakari hacluster kolla > wallaby. > > Thanks > > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Jun 22 10:07:59 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Jun 2021 12:07:59 +0200 Subject: [Kolla][nova] /var/lib/nova/instances on nfs Message-ID: Hello Stackers, is there any configuration parameter in kolla for sharing nova on nfs between compute nodes ? Or I must insert an entry in fstab ? Thanks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Jun 22 10:59:08 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 22 Jun 2021 12:59:08 +0200 Subject: [Wallaby][kolla][masakari] hacluser issues In-Reply-To: References: Message-ID: Dear Ignazio, The logs have helped. I reported two bugs for you: https://bugs.launchpad.net/kolla-ansible/+bug/1933209 https://bugs.launchpad.net/masakari-monitors/+bug/1933203 The workaround you can try is included in comments of the first one. -yoctozepto On Tue, Jun 22, 2021 at 10:12 AM Ignazio Cassano wrote: > > Hello Radoslaw, > We made a clean install. > At 10:02 we stopped a node and and the masakari host monitor reported some python error you can see at https://paste.ubuntu.com/p/84H7nmn7Bz/ > Pacemaker and remote work fine . > We can see remote online nodes with crm command and remote offline when we stopped the node. > Thanks > Ignazio > > > > Il giorno mar 22 giu 2021 alle ore 08:16 Radosław Piliszek ha scritto: >> >> Hi Ignazio, >> >> The place is perfect. >> We talked yesterday on IRC. >> It's likely better for asynchronous communication to use the >> discussion list for sure. >> I saw your question regarding logs: you should use other services and >> link the logs in your messages. >> Some people use oldschool pastebins, such as: https://paste.ubuntu.com/ >> others prefer gist: https://gist.github.com/ >> >> -yoctozepto >> >> On Mon, Jun 21, 2021 at 9:07 PM Ignazio Cassano >> wrote: >> > >> > Hello everyone, I wonder if this is a good place for discussing about some issues I am facing with instance HA with masakari hacluster kolla wallaby. 
>> > Thanks >> > Ignazio From radoslaw.piliszek at gmail.com Tue Jun 22 11:02:15 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 22 Jun 2021 13:02:15 +0200 Subject: [Kolla][nova] /var/lib/nova/instances on nfs In-Reply-To: References: Message-ID: Hello Ignazio, If you are not using Cinder NFS backend already, you need to set: enable_shared_var_lib_nova_mnt: yes And yes, you need to manage fstab yourself, mounting the shared nfs at /var/lib/nova/mnt It must happen before the containers are started (so before deploy or redeploy). -yoctozepto On Tue, Jun 22, 2021 at 12:11 PM Ignazio Cassano wrote: > > Hello Stackers, is there any configuration parameter in kolla for sharing nova on nfs between compute nodes ? Or I must insert an entry in fstab ? > Thanks > Ignazio > From smooney at redhat.com Tue Jun 22 11:14:08 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 22 Jun 2021 12:14:08 +0100 Subject: [ops][victoria][nova] Add hardware to guest (was: [ops][nova][spice][victoria] Switch from VNC to SPICE) In-Reply-To: <0670B960225633449A24709C291A5252511E4D53@COM01.performair.local> References: <0670B960225633449A24709C291A5252511E4D53@COM01.performair.local> Message-ID: <958019c0053d0e7386e2cdc5f1abb7739057391e.camel@redhat.com> On Mon, 2021-06-21 at 20:57 +0000, DHilsbos at performair.com wrote: > Just realized that I sent this to Sean, but not the list... > > Sean; > > Thank you for that pointer to hw_video_model=virtio, that addressed the > resolution issue! > > Any chance you could point me at a metadata option that would tell > libvirt to provide 2 monitor devices?  It looks like I need to send the > max_outputs=2 to qemu, or libvirt or directly to virtio-vga (I'm not > sure which) [1], though I don't see a way to pass that through in > nova.virt.libvirt.driver [2]. what you really need is https://github.com/openstack/nova/blob/master/nova/api/validation/extra_specs/os.py#L57-L68 but unfortunetly os:monitors is only supported by hyperv. we could add support for multiple monitors as a feature but that would not be backportable. really this should be in the hw: namespace and typecialy this should be done with an image property not an flavor extra spec but i guess we could support both. this likely shoudl be handeled as a small specless blueprint or short spec. i would suggest using "hw_virtual_displays=#" and "hw:virtual_displays=#" personally as the image and flavor extra specs to have something easy to understand. we could then use that to set the display heads on the the video device. https://libvirt.org/formatdomain.html#video-devices unfortunetly i dont know of ay way to do this in nova today with libvirt that you can use but if you have time to work on it i woudl be supprotive of adding this functionality. if you did end up working on this and had a spec i wouls also consider wether you wanted to enable accel3d feature to expose opengel to the guest. that requires a little more setup on the host to get working but it may or may not be useful for your vdi usecase. from a nova point of view exposing accel3d is relitivly trivial but you likely woudl want to be able to schdule to a host that supports it so we woudl want to consider if a standard trait shoudl be exposed for that or if it should be left to the operator to do manuyally with a custom trait. anyway i hope that helps at least somewhat. i guess for now you will have to use the larger resolution. > > As to how GoToMyPC works... 
I don't know, but I believe it captures the > framebuffer. > > [1] https://www.kraxel.org/blog/2019/09/display-devices-in-qemu/ > [2] > https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com  > www.PerformAir.com > > > -----Original Message----- > From: Sean Mooney [mailto:smooney at redhat.com] > Sent: Monday, June 21, 2021 4:57 AM > To: Dominic Hilsbos; laurentfdumont at gmail.com; > openstack-discuss at lists.openstack.org > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > On Fri, 2021-06-18 at 20:44 +0000, DHilsbos at performair.com wrote: > > We're trying to virtualize desktops for remote workers.  Guests will > > be Windows.  Remote connection method will be GoToMyPC.  As such, I > > need the OS to believe either 1) it has 2 1920 x 1080 monitors, or 2) > > it has a single 3840 x 1080 monitor.  Neither of which have I found a > > way to do in VNC. > i am not sure the resoltution that rdp or whatever GoToMyPC uses will > be affected by the use of vnc or spcice on the host. I think this is > related to the video model used in the guets unless GoToMyPC is running > on the host and connecting the the qemu process. thats not normally > something we would recommend but if its directly conecting to the qemu > process then the use of vnc or spice might be a factor > > have you tried changing the video model to virtio for example, that > should enabel more resolutions in the guest. i know that usign RDP in a > windows guest with hw_video_model=virtio in the image does allow higher > resolutions and i think it enables multi monitor support too. > > > > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Vice President – Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com  > > www.PerformAir.com > > > > From: Laurent Dumont [mailto:laurentfdumont at gmail.com] > > Sent: Friday, June 18, 2021 12:39 PM > > To: Dominic Hilsbos > > Cc: stephenfin at redhat.com; openstack-discuss > > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > > > Are you using the Openstack console itself? You might have more > > flexibility with a dedicated VNC server inside the VM and connect > > directly to it. So you would not be tied to the Openstack VNC support > > which I dont think was ever designed for graphical usage. More of a > > "it's 2AM, server is crashed and I need a way in!". > > > > On Fri, Jun 18, 2021 at 1:58 PM wrote: > > Stephen; > > > > Thank you for the information. > > > > What's the replacement?  Is VNC getting improvements to allow higher > > resolutions, and multi-monitor, in the guest? > > > > We've already decided to transition our OpenStack cluster away from > > CentOS, as RDO doesn't package some of the OpenStack projects we'd > > like to use, and RedHat has lost our trust. > > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Vice President – Information Technology > > Perform Air International Inc. 
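A minimal sketch of the shared-storage setup being discussed here, pulling together the kolla-ansible option Radosław mentioned and the NFS v4.2 recommendation above. The mount target /var/lib/nova/mnt and the globals.yml option come from this thread; the server address, export path and mount options are placeholder assumptions only.

    # globals.yml (kolla-ansible)
    enable_shared_var_lib_nova_mnt: yes

    # /etc/fstab on every compute node; nfs-server:/export/nova is a
    # placeholder for the real share. Mount it before (re)deploying so the
    # nova_compute container bind-mounts a directory that is already shared.
    nfs-server:/export/nova  /var/lib/nova/mnt  nfs4  vers=4.2,rw,_netdev  0 0

After adding the entry, run "mount -a" on each compute node and verify with "findmnt /var/lib/nova/mnt" before running kolla-ansible deploy or reconfigure.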
> > DHilsbos at PerformAir.com  > > www.PerformAir.com > > > > > > -----Original Message----- > > From: Stephen Finucane [mailto:stephenfin at redhat.com] > > Sent: Friday, June 18, 2021 10:33 AM > > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > > > On Fri, 2021-06-18 at 16:53 +0000, DHilsbos at performair.com wrote: > > > All; > > > > > > We have a Victoria cluster, and I'd like to switch from VNC to > > > SPICE.  Cluster is installed with packages (RDO), and configured > > > manually. > > > > > > I have located > > > https://docs.openstack.org/nova/victoria/admin/remote-console-access.html > > > , but this doesn't tell me which services need to be installed on > > > which servers. > > > > > > Looking at packages, I'm fairly certain nova-spicehtml5proxy > > > (openstack-nova-spicehtml5proxy on CentOS 8) needs to be installed > > > where the nova-novncproxy is currently. > > > > > > I also suspect that qemu-kvm-ui-spice needs to be installed on the > > > nova-compute nodes.  Is spice-server needed on the nova-compute > > > nodes? > > > > Not an answer, but I'd be very careful about building solutions based > > on SPICE. > > It has been deprecated in RHEL 8.3 and recent versions of Fedora and > > is slated > > for removal in RHEL 9, as this bug [1] points out. It is also > > receives very > > little attention in nova as some deployments tooling (such as Red Hat > > OSP) has > > not supported it for some time. There's a non-zero chance support for > > this > > console type will be dropped entirely in some future release. > > > > Stephen > > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1946938 > > > > > Thank you, > > > > > > Dominic L. Hilsbos, MBA > > > Vice President - Information Technology > > > Perform Air International Inc. > > > DHilsbos at PerformAir.com  > > > www.PerformAir.com > > > > > > > > > > > > > > > > From smooney at redhat.com Tue Jun 22 11:43:14 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 22 Jun 2021 12:43:14 +0100 Subject: [Kolla][nova] /var/lib/nova/instances on nfs In-Reply-To: References: Message-ID: <6d32e2936ac9cc88c8f6148a375446cace13b0b8.camel@redhat.com> On Tue, 2021-06-22 at 13:02 +0200, Radosław Piliszek wrote: > Hello Ignazio, > > If you are not using Cinder NFS backend already, you need to set: > >   enable_shared_var_lib_nova_mnt: yes > > And yes, you need to manage fstab yourself, mounting the shared nfs > at > /var/lib/nova/mnt > > It must happen before the containers are started (so before deploy or > redeploy). i dont think they were refering to cinder nfs. we have support for deploying novas state directory and libvirts stroage on nfs in nvoa when usign the raw/qcow image backend. in general i advise against that but it is supported. you should ensure that you use nfs v4 preferable nfs v4.2 or newer with my downstream hat on we droped supprot for nfs v3 many years ago and the last lts release we hadd that supported it was based on newton. technially we dont have a min nfs version requirement ustream but at some point i think we shoudl enforce at least nfs v4 upstream too. there are several known locking issues with nfs v3 that make it generally problematic to use at scale with nova that manifest intermietnly during move operations. the same may or may not be true with nfs via cinder but that is one of the less well tested and hardened cinder backends to use with nova. 
> > -yoctozepto > > On Tue, Jun 22, 2021 at 12:11 PM Ignazio Cassano > wrote: > > > > Hello Stackers, is there any configuration parameter in kolla for > > sharing nova on nfs between compute nodes ? Or I must insert an > > entry in fstab ? > > Thanks > > Ignazio > > > From smooney at redhat.com Tue Jun 22 11:50:56 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 22 Jun 2021 12:50:56 +0100 Subject: [ops][victoria][nova] Add hardware to guest (was: [ops][nova][spice][victoria] Switch from VNC to SPICE) In-Reply-To: <958019c0053d0e7386e2cdc5f1abb7739057391e.camel@redhat.com> References: <0670B960225633449A24709C291A5252511E4D53@COM01.performair.local> <958019c0053d0e7386e2cdc5f1abb7739057391e.camel@redhat.com> Message-ID: <26787fad63cba347474c22ce6b4f2238dddd9179.camel@redhat.com> oh by the way spcie support is not going away in upstrem nova qemu or libvirt by the way. as stephen mentioned it is going away in rhel and as a result many of the redhat peopel that used to work on it have already moved to other things but i expect it to contiue to be supported in qemu and in other distros. he is correct however that spcie is less well tested and while it can support multipel monitors you simarly can enabel multiple dispaly heads in the libvirt xml so i woudl expect the behavior to be the same as vnc today. On Tue, 2021-06-22 at 12:14 +0100, Sean Mooney wrote: > On Mon, 2021-06-21 at 20:57 +0000, DHilsbos at performair.com wrote: > > Just realized that I sent this to Sean, but not the list... > > > > Sean; > > > > Thank you for that pointer to hw_video_model=virtio, that addressed > > the > > resolution issue! > > > > Any chance you could point me at a metadata option that would tell > > libvirt to provide 2 monitor devices?  It looks like I need to send > > the > > max_outputs=2 to qemu, or libvirt or directly to virtio-vga (I'm > > not > > sure which) [1], though I don't see a way to pass that through in > > nova.virt.libvirt.driver [2]. > what you really need is > https://github.com/openstack/nova/blob/master/nova/api/validation/extra_specs/os.py#L57-L68 > but unfortunetly > > os:monitors is only supported by hyperv. > > we could add support for multiple monitors as a feature but that > would > not be backportable. > really this should be in the hw: namespace and typecialy this should > be > done with an image property not an flavor extra spec but i guess we > could support both. > > this likely shoudl be handeled as a small specless blueprint or short > spec. > > i would suggest using "hw_virtual_displays=#" and > "hw:virtual_displays=#" personally as the image and flavor extra > specs > to have something easy to understand. > > we could then use that to set the display heads on the the video > device. https://libvirt.org/formatdomain.html#video-devices > > unfortunetly i dont know of ay way to do this in nova today with > libvirt that you can use but if you have time to work on it i woudl > be > supprotive of adding this functionality. > > if you did  end up working on this and had a spec i wouls also > consider > wether you wanted to enable accel3d feature to expose opengel to the > guest. that requires a little more setup on the host to get working > but > it may or may not be useful for your vdi usecase. 
from a nova point > of > view exposing accel3d is relitivly trivial but you likely woudl want > to > be able to schdule to a host that supports it so we woudl want to > consider if a standard trait shoudl be exposed for that or if it > should > be left to the operator to do manuyally with a custom trait. > > anyway i hope that helps at least somewhat. i guess for now you will > have to use the larger resolution. > > > > > As to how GoToMyPC works... I don't know, but I believe it captures > > the > > framebuffer. > > > > [1] https://www.kraxel.org/blog/2019/09/display-devices-in-qemu/ > > [2]   > > https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py > > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Vice President – Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com  > > www.PerformAir.com > > > > > > -----Original Message----- > > From: Sean Mooney [mailto:smooney at redhat.com] > > Sent: Monday, June 21, 2021 4:57 AM > > To: Dominic Hilsbos; laurentfdumont at gmail.com;   > > openstack-discuss at lists.openstack.org > > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to SPICE > > > > On Fri, 2021-06-18 at 20:44 +0000, DHilsbos at performair.com wrote: > > > We're trying to virtualize desktops for remote workers.  Guests > > > will > > > be Windows.  Remote connection method will be GoToMyPC.  As such, > > > I > > > need the OS to believe either 1) it has 2 1920 x 1080 monitors, > > > or 2) > > > it has a single 3840 x 1080 monitor.  Neither of which have I > > > found a > > > way to do in VNC. > > i am not sure the resoltution that rdp or whatever GoToMyPC uses > > will > > be affected by the use of vnc or spcice on the host. I think this > > is > > related to the video model used in the guets unless GoToMyPC is > > running > > on the host and connecting the the qemu process. thats not normally > > something we would recommend but if its directly conecting to the > > qemu > > process then the use of vnc or spice might be a factor > > > > have you tried changing the video model to virtio for example, that > > should enabel more resolutions in the guest. i know that usign RDP > > in a > > windows guest with hw_video_model=virtio in the image does allow > > higher > > resolutions and i think it enables multi monitor support too. > > > > > > > > > > Thank you, > > > > > > Dominic L. Hilsbos, MBA > > > Vice President – Information Technology > > > Perform Air International Inc. > > > DHilsbos at PerformAir.com  > > > www.PerformAir.com > > > > > > From: Laurent Dumont [mailto:laurentfdumont at gmail.com] > > > Sent: Friday, June 18, 2021 12:39 PM > > > To: Dominic Hilsbos > > > Cc: stephenfin at redhat.com; openstack-discuss > > > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to > > > SPICE > > > > > > Are you using the Openstack console itself? You might have more > > > flexibility with a dedicated VNC server inside the VM and connect > > > directly to it. So you would not be tied to the Openstack VNC > > > support > > > which I dont think was ever designed for graphical usage. More of > > > a > > > "it's 2AM, server is crashed and I need a way in!". > > > > > > On Fri, Jun 18, 2021 at 1:58 PM wrote: > > > Stephen; > > > > > > Thank you for the information. > > > > > > What's the replacement?  Is VNC getting improvements to allow > > > higher > > > resolutions, and multi-monitor, in the guest? 
> > > > > > We've already decided to transition our OpenStack cluster away > > > from > > > CentOS, as RDO doesn't package some of the OpenStack projects > > > we'd > > > like to use, and RedHat has lost our trust. > > > > > > Thank you, > > > > > > Dominic L. Hilsbos, MBA > > > Vice President – Information Technology > > > Perform Air International Inc. > > > DHilsbos at PerformAir.com  > > > www.PerformAir.com > > > > > > > > > -----Original Message----- > > > From: Stephen Finucane [mailto:stephenfin at redhat.com] > > > Sent: Friday, June 18, 2021 10:33 AM > > > To: Dominic Hilsbos; openstack-discuss at lists.openstack.org > > > Subject: Re: [ops][nova][spice][victoria] Switch from VNC to > > > SPICE > > > > > > On Fri, 2021-06-18 at 16:53 +0000, DHilsbos at performair.com wrote: > > > > All; > > > > > > > > We have a Victoria cluster, and I'd like to switch from VNC to > > > > SPICE.  Cluster is installed with packages (RDO), and > > > > configured > > > > manually. > > > > > > > > I have located > > > > https://docs.openstack.org/nova/victoria/admin/remote-console-access.html > > > > , but this doesn't tell me which services need to be installed > > > > on > > > > which servers. > > > > > > > > Looking at packages, I'm fairly certain nova-spicehtml5proxy > > > > (openstack-nova-spicehtml5proxy on CentOS 8) needs to be > > > > installed > > > > where the nova-novncproxy is currently. > > > > > > > > I also suspect that qemu-kvm-ui-spice needs to be installed on > > > > the > > > > nova-compute nodes.  Is spice-server needed on the nova-compute > > > > nodes? > > > > > > Not an answer, but I'd be very careful about building solutions > > > based > > > on SPICE. > > > It has been deprecated in RHEL 8.3 and recent versions of Fedora > > > and > > > is slated > > > for removal in RHEL 9, as this bug [1] points out. It is also > > > receives very > > > little attention in nova as some deployments tooling (such as Red > > > Hat > > > OSP) has > > > not supported it for some time. There's a non-zero chance support > > > for > > > this > > > console type will be dropped entirely in some future release. > > > > > > Stephen > > > > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1946938 > > > > > > > Thank you, > > > > > > > > Dominic L. Hilsbos, MBA > > > > Vice President - Information Technology > > > > Perform Air International Inc. > > > > DHilsbos at PerformAir.com  > > > > www.PerformAir.com > > > > > > > > > > > > > > > > > > > > > > > > > > From ignaziocassano at gmail.com Tue Jun 22 11:57:04 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Jun 2021 13:57:04 +0200 Subject: [Kolla][nova] /var/lib/nova/instances on nfs In-Reply-To: <6d32e2936ac9cc88c8f6148a375446cace13b0b8.camel@redhat.com> References: <6d32e2936ac9cc88c8f6148a375446cace13b0b8.camel@redhat.com> Message-ID: Hello , I am using cinder with netapp driver but if I do not mount a share under /var/lib/docker/volumes/nova_compute/_data live migration does not work because an error is disblayed: shared storage in needes or something like that. 
I do not understand why is does not notice that volumes are shared Ignazio Il giorno mar 22 giu 2021 alle ore 13:43 Sean Mooney ha scritto: > On Tue, 2021-06-22 at 13:02 +0200, Radosław Piliszek wrote: > > Hello Ignazio, > > > > If you are not using Cinder NFS backend already, you need to set: > > > > enable_shared_var_lib_nova_mnt: yes > > > > And yes, you need to manage fstab yourself, mounting the shared nfs > > at > > /var/lib/nova/mnt > > > > It must happen before the containers are started (so before deploy or > > redeploy). > i dont think they were refering to cinder nfs. > we have support for deploying novas state directory and libvirts > stroage on nfs in nvoa when usign the raw/qcow image backend. > > in general i advise against that but it is supported. > you should ensure that you use nfs v4 preferable nfs v4.2 or newer > > with my downstream hat on we droped supprot for nfs v3 many years ago > and the last lts release we hadd that supported it was based on newton. > technially we dont have a min nfs version requirement ustream but at > some point i think we shoudl enforce at least nfs v4 upstream too. > there are several known locking issues with nfs v3 that make it > generally problematic to use at scale with nova that manifest > intermietnly during move operations. > > the same may or may not be true with nfs via cinder but that is one of > the less well tested and hardened cinder backends to use with nova. > > > > > > -yoctozepto > > > > On Tue, Jun 22, 2021 at 12:11 PM Ignazio Cassano > > wrote: > > > > > > Hello Stackers, is there any configuration parameter in kolla for > > > sharing nova on nfs between compute nodes ? Or I must insert an > > > entry in fstab ? > > > Thanks > > > Ignazio > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Jun 22 11:59:48 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Jun 2021 13:59:48 +0200 Subject: [Wallaby][kolla][masakari] hacluser issues In-Reply-To: References: Message-ID: I will try soon. Many thanks Il giorno mar 22 giu 2021 alle ore 12:59 Radosław Piliszek < radoslaw.piliszek at gmail.com> ha scritto: > Dear Ignazio, > > The logs have helped. > I reported two bugs for you: > https://bugs.launchpad.net/kolla-ansible/+bug/1933209 > https://bugs.launchpad.net/masakari-monitors/+bug/1933203 > > The workaround you can try is included in comments of the first one. > > -yoctozepto > > On Tue, Jun 22, 2021 at 10:12 AM Ignazio Cassano > wrote: > > > > Hello Radoslaw, > > We made a clean install. > > At 10:02 we stopped a node and and the masakari host monitor reported > some python error you can see at https://paste.ubuntu.com/p/84H7nmn7Bz/ > > Pacemaker and remote work fine . > > We can see remote online nodes with crm command and remote offline when > we stopped the node. > > Thanks > > Ignazio > > > > > > > > Il giorno mar 22 giu 2021 alle ore 08:16 Radosław Piliszek < > radoslaw.piliszek at gmail.com> ha scritto: > >> > >> Hi Ignazio, > >> > >> The place is perfect. > >> We talked yesterday on IRC. > >> It's likely better for asynchronous communication to use the > >> discussion list for sure. > >> I saw your question regarding logs: you should use other services and > >> link the logs in your messages. 
> >> Some people use oldschool pastebins, such as: https://paste.ubuntu.com/ > >> others prefer gist: https://gist.github.com/ > >> > >> -yoctozepto > >> > >> On Mon, Jun 21, 2021 at 9:07 PM Ignazio Cassano > >> wrote: > >> > > >> > Hello everyone, I wonder if this is a good place for discussing about > some issues I am facing with instance HA with masakari hacluster kolla > wallaby. > >> > Thanks > >> > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue Jun 22 12:18:19 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Jun 2021 14:18:19 +0200 Subject: [Wallaby][kolla][masakari] hacluser issues In-Reply-To: References: Message-ID: Hello Radoslaw, now the error disappeared in log file bust the instance is not restarted on the the other node. Here there is the host monitor log https://paste.ubuntu.com/p/GyP2qD2C8W/ Ignazio Il giorno mar 22 giu 2021 alle ore 12:59 Radosław Piliszek < radoslaw.piliszek at gmail.com> ha scritto: > Dear Ignazio, > > The logs have helped. > I reported two bugs for you: > https://bugs.launchpad.net/kolla-ansible/+bug/1933209 > https://bugs.launchpad.net/masakari-monitors/+bug/1933203 > > The workaround you can try is included in comments of the first one. > > -yoctozepto > > On Tue, Jun 22, 2021 at 10:12 AM Ignazio Cassano > wrote: > > > > Hello Radoslaw, > > We made a clean install. > > At 10:02 we stopped a node and and the masakari host monitor reported > some python error you can see at https://paste.ubuntu.com/p/84H7nmn7Bz/ > > Pacemaker and remote work fine . > > We can see remote online nodes with crm command and remote offline when > we stopped the node. > > Thanks > > Ignazio > > > > > > > > Il giorno mar 22 giu 2021 alle ore 08:16 Radosław Piliszek < > radoslaw.piliszek at gmail.com> ha scritto: > >> > >> Hi Ignazio, > >> > >> The place is perfect. > >> We talked yesterday on IRC. > >> It's likely better for asynchronous communication to use the > >> discussion list for sure. > >> I saw your question regarding logs: you should use other services and > >> link the logs in your messages. > >> Some people use oldschool pastebins, such as: https://paste.ubuntu.com/ > >> others prefer gist: https://gist.github.com/ > >> > >> -yoctozepto > >> > >> On Mon, Jun 21, 2021 at 9:07 PM Ignazio Cassano > >> wrote: > >> > > >> > Hello everyone, I wonder if this is a good place for discussing about > some issues I am facing with instance HA with masakari hacluster kolla > wallaby. > >> > Thanks > >> > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Jun 22 12:29:40 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 22 Jun 2021 14:29:40 +0200 Subject: [Wallaby][kolla][masakari] hacluser issues In-Reply-To: References: Message-ID: You have to let it first see the node online, then power off the node. Also, check if the node is not already in maintenance mode in Masakari (as a host in the segment). -yoctozepto On Tue, Jun 22, 2021 at 2:18 PM Ignazio Cassano wrote: > > Hello Radoslaw, now the error disappeared in log file bust the instance is not restarted on the the other node. > Here there is the host monitor log > https://paste.ubuntu.com/p/GyP2qD2C8W/ > Ignazio > > > Il giorno mar 22 giu 2021 alle ore 12:59 Radosław Piliszek ha scritto: >> >> Dear Ignazio, >> >> The logs have helped. 
>> I reported two bugs for you: >> https://bugs.launchpad.net/kolla-ansible/+bug/1933209 >> https://bugs.launchpad.net/masakari-monitors/+bug/1933203 >> >> The workaround you can try is included in comments of the first one. >> >> -yoctozepto >> >> On Tue, Jun 22, 2021 at 10:12 AM Ignazio Cassano >> wrote: >> > >> > Hello Radoslaw, >> > We made a clean install. >> > At 10:02 we stopped a node and and the masakari host monitor reported some python error you can see at https://paste.ubuntu.com/p/84H7nmn7Bz/ >> > Pacemaker and remote work fine . >> > We can see remote online nodes with crm command and remote offline when we stopped the node. >> > Thanks >> > Ignazio >> > >> > >> > >> > Il giorno mar 22 giu 2021 alle ore 08:16 Radosław Piliszek ha scritto: >> >> >> >> Hi Ignazio, >> >> >> >> The place is perfect. >> >> We talked yesterday on IRC. >> >> It's likely better for asynchronous communication to use the >> >> discussion list for sure. >> >> I saw your question regarding logs: you should use other services and >> >> link the logs in your messages. >> >> Some people use oldschool pastebins, such as: https://paste.ubuntu.com/ >> >> others prefer gist: https://gist.github.com/ >> >> >> >> -yoctozepto >> >> >> >> On Mon, Jun 21, 2021 at 9:07 PM Ignazio Cassano >> >> wrote: >> >> > >> >> > Hello everyone, I wonder if this is a good place for discussing about some issues I am facing with instance HA with masakari hacluster kolla wallaby. >> >> > Thanks >> >> > Ignazio From smooney at redhat.com Tue Jun 22 12:32:13 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 22 Jun 2021 13:32:13 +0100 Subject: [Kolla][nova] /var/lib/nova/instances on nfs In-Reply-To: References: <6d32e2936ac9cc88c8f6148a375446cace13b0b8.camel@redhat.com> Message-ID: <2f34daf733ca9a780d8cb09dd2186e2d3dea8ad2.camel@redhat.com> On Tue, 2021-06-22 at 13:57 +0200, Ignazio Cassano wrote: > Hello , I am using cinder with netapp driver but if I do not mount a > share > under /var/lib/docker/volumes/nova_compute/_data live migration does > not > work because an error is disblayed: shared storage in needes or > something > like that. > I do not understand why is does not notice that volumes are shared so these are cinder boot form volume guests? if you are usign the correct microverion it should detect that its shared storage automaticaly when you do a migration can you confirm the command you are usign to do the migration and that its a boot form volumen guest not an image backed guest with a data volumn. > > Ignazio > > Il giorno mar 22 giu 2021 alle ore 13:43 Sean Mooney < > smooney at redhat.com> > ha scritto: > > > On Tue, 2021-06-22 at 13:02 +0200, Radosław Piliszek wrote: > > > Hello Ignazio, > > > > > > If you are not using Cinder NFS backend already, you need to set: > > > > > >   enable_shared_var_lib_nova_mnt: yes > > > > > > And yes, you need to manage fstab yourself, mounting the shared > > > nfs > > > at > > > /var/lib/nova/mnt > > > > > > It must happen before the containers are started (so before > > > deploy or > > > redeploy). > > i dont think they were refering to cinder nfs. > > we have support for deploying novas state directory and libvirts > > stroage on nfs in nvoa when usign the raw/qcow image backend. > > > > in general i advise against that but it is supported. > > you should ensure that you use nfs v4 preferable nfs v4.2 or newer > > > > with my downstream hat on we droped supprot for nfs v3 many years > > ago > > and the last lts release we hadd that supported it was based on > > newton. 
> > technially we dont have a min nfs version requirement ustream but > > at > > some point i think we shoudl enforce at least nfs v4 upstream too. > > there are several known locking issues with nfs v3 that make it > > generally problematic to use at scale with nova that manifest > > intermietnly during move operations. > > > > the same may or may not be true with nfs via cinder but that is one > > of > > the less well tested and hardened cinder backends to use with nova. > > > > > > > > > > -yoctozepto > > > > > > On Tue, Jun 22, 2021 at 12:11 PM Ignazio Cassano > > > wrote: > > > > > > > > Hello Stackers, is there any configuration parameter in kolla > > > > for > > > > sharing nova on nfs between compute nodes ? Or I must insert an > > > > entry in fstab ? > > > > Thanks > > > > Ignazio > > > > > > > > > > > > > From ignaziocassano at gmail.com Tue Jun 22 12:37:43 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 22 Jun 2021 14:37:43 +0200 Subject: [Wallaby][kolla][masakari] hacluser issues In-Reply-To: References: Message-ID: Oh, you are right, one node was in maintenance mode. Thank you very much for your help. Now it works as expected Ignazio Il giorno mar 22 giu 2021 alle ore 14:29 Radosław Piliszek < radoslaw.piliszek at gmail.com> ha scritto: > You have to let it first see the node online, then power off the node. > Also, check if the node is not already in maintenance mode in Masakari > (as a host in the segment). > > -yoctozepto > > On Tue, Jun 22, 2021 at 2:18 PM Ignazio Cassano > wrote: > > > > Hello Radoslaw, now the error disappeared in log file bust the instance > is not restarted on the the other node. > > Here there is the host monitor log > > https://paste.ubuntu.com/p/GyP2qD2C8W/ > > Ignazio > > > > > > Il giorno mar 22 giu 2021 alle ore 12:59 Radosław Piliszek < > radoslaw.piliszek at gmail.com> ha scritto: > >> > >> Dear Ignazio, > >> > >> The logs have helped. > >> I reported two bugs for you: > >> https://bugs.launchpad.net/kolla-ansible/+bug/1933209 > >> https://bugs.launchpad.net/masakari-monitors/+bug/1933203 > >> > >> The workaround you can try is included in comments of the first one. > >> > >> -yoctozepto > >> > >> On Tue, Jun 22, 2021 at 10:12 AM Ignazio Cassano > >> wrote: > >> > > >> > Hello Radoslaw, > >> > We made a clean install. > >> > At 10:02 we stopped a node and and the masakari host monitor reported > some python error you can see at https://paste.ubuntu.com/p/84H7nmn7Bz/ > >> > Pacemaker and remote work fine . > >> > We can see remote online nodes with crm command and remote offline > when we stopped the node. > >> > Thanks > >> > Ignazio > >> > > >> > > >> > > >> > Il giorno mar 22 giu 2021 alle ore 08:16 Radosław Piliszek < > radoslaw.piliszek at gmail.com> ha scritto: > >> >> > >> >> Hi Ignazio, > >> >> > >> >> The place is perfect. > >> >> We talked yesterday on IRC. > >> >> It's likely better for asynchronous communication to use the > >> >> discussion list for sure. > >> >> I saw your question regarding logs: you should use other services and > >> >> link the logs in your messages. > >> >> Some people use oldschool pastebins, such as: > https://paste.ubuntu.com/ > >> >> others prefer gist: https://gist.github.com/ > >> >> > >> >> -yoctozepto > >> >> > >> >> On Mon, Jun 21, 2021 at 9:07 PM Ignazio Cassano > >> >> wrote: > >> >> > > >> >> > Hello everyone, I wonder if this is a good place for discussing > about some issues I am facing with instance HA with masakari hacluster > kolla wallaby. 
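On the maintenance-mode point above, a rough sketch of checking and clearing the flag with the masakari OSC plugin (python-masakariclient). The command and option spellings here are from memory and may differ between releases, so treat them as assumptions and confirm with --help first.

    # list segments, then the hosts in one, watching the on_maintenance column
    openstack segment list
    openstack segment host list <segment-id>

    # clear the flag on a host that is wrongly marked as under maintenance
    # (argument order and the exact option name may vary by release)
    openstack segment host update --on_maintenance False <host-id> <segment-id>

Masakari skips recovery for hosts flagged as under maintenance, which would explain an instance not being evacuated until the flag is cleared, as seen earlier in this thread.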
> >> >> > Thanks > >> >> > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyril at redhat.com Tue Jun 22 13:43:33 2021 From: cyril at redhat.com (Cyril Roelandt) Date: Tue, 22 Jun 2021 15:43:33 +0200 Subject: [glance] nominating Cyril Roelandt for glance core In-Reply-To: References: Message-ID: Hey! On 2021-06-22 10:26, Abhishek Kekane wrote: > Having heard only affirmative responses, I have added Cyril Roelandt to the > Glance core group. > > Welcome to the Glance core team, Cyril! > Thanks & Best Regards, Thanks a lot everyone! Cyril. From balazs.gibizer at est.tech Tue Jun 22 16:01:37 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 22 Jun 2021 18:01:37 +0200 Subject: [cinder] CI coverage for SPDK Message-ID: Hi Cinder Team, I'm wondering what CI coverage cinder has for SPDK driver. The code mentions that Mellanox CI[1] but I'm not able to find a run from that CI in the recent cinder patches. Is this driver still covered somewhere? Cheers, gibi [1] https://github.com/openstack/cinder/blob/393c2e4ad90c05ebf28cc3a2c65811d7e1e0bc18/cinder/volume/drivers/spdk.py#L41 From stephenfin at redhat.com Tue Jun 22 16:39:42 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 22 Jun 2021 17:39:42 +0100 Subject: [nova][osc][api-sig] How strict should our clients be? Message-ID: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> Hey, We have an interesting problem that I wanted to poll opinions on. In OSC 5.5.0, we closed most of the gaps between novaclient and openstackclient. As part of these changes, we introduced validation of a number of requests such as validating enum-style values. For example, [1][2][3]. This validation already occurs on the server side, but by adding it to the client side we prevent users sending invalid requests to the server in the first place and allow users to discover the correct API behaviour from the client rather than having to read the API docs or use trial and error. Now, an issue has been opened against OSC. Apparently someone has been relying on a bug in Nova to pass a different value to the API that what the schema should have allowed, and they are dismayed that the client no longer allows them to do this. They have asked [4][5] that we relax the client-side validation to allow them to continue relying on this bug. As you can probably tell from my comments, this seems to me to be an open and shut case: you shouldn't fork an OpenStack API and you shouldn't side-step validation. However, I wanted to see if anyone disagreed and thought there was merit in loose or no validation of API requests made via our clients. Let me know what you think, Stephen [1] https://github.com/openstack/python-openstackclient/blob/5.5.0/openstackclient/compute/v2/server.py#L1789-L1808 [2] https://github.com/openstack/python-openstackclient/blob/5.5.0/openstackclient/compute/v2/server.py#L1907-L1921 [3] https://github.com/openstack/python-openstackclient/blob/5.5.0/openstackclient/compute/v2/server_group.py#L62-L67 [4] https://storyboard.openstack.org/#!/story/2008975 [5] https://github.com/openstack/python-openstackclient/commit/ab0b1fe885ee0a210a58008b631521025be7f3eb From gmann at ghanshyammann.com Tue Jun 22 17:16:43 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 22 Jun 2021 12:16:43 -0500 Subject: [nova][osc][api-sig] How strict should our clients be? 
In-Reply-To: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> Message-ID: <17a34b967ea.cca9106561737.750501822805415926@ghanshyammann.com> ---- On Tue, 22 Jun 2021 11:39:42 -0500 Stephen Finucane wrote ---- > Hey, > > We have an interesting problem that I wanted to poll opinions on. In OSC 5.5.0, > we closed most of the gaps between novaclient and openstackclient. As part of > these changes, we introduced validation of a number of requests such as > validating enum-style values. For example, [1][2][3]. This validation already > occurs on the server side, but by adding it to the client side we prevent users > sending invalid requests to the server in the first place and allow users to > discover the correct API behaviour from the client rather than having to read > the API docs or use trial and error. I think this is the one of benefits of having Client so that we can improve the UX where user will get a clear way of right usage of our API instead of debugging the API code/error and correct the request. Protecting APIs from incorrect usage with right validation is good thing to do in Client. > > Now, an issue has been opened against OSC. Apparently someone has been relying > on a bug in Nova to pass a different value to the API that what the schema > should have allowed, and they are dismayed that the client no longer allows them > to do this. They have asked [4][5] that we relax the client-side validation to > allow them to continue relying on this bug. As you can probably tell from my > comments, this seems to me to be an open and shut case: you shouldn't fork an > OpenStack API and you shouldn't side-step validation. However, I wanted to see > if anyone disagreed and thought there was merit in loose or no validation of API > requests made via our clients. Although its modified API case but Nova bug but in case of Nova bug also raise a very good point. If Client is having such validation and protect users for such kind of API/server side bug then it help us to fix the bug without any user impact. Having more kind of such validation are even better for UX perspective. and the modified APIs case (which is the case of story/2008975) is something we want people to avoid and encourage to integrate the changes in upstream as per eligibity. That was whole point of removing the API extensions concept from Nova. IMO, this is right change in Client side and improve the overall UX. -gmann > > Let me know what you think, > Stephen > > [1] https://github.com/openstack/python-openstackclient/blob/5.5.0/openstackclient/compute/v2/server.py#L1789-L1808 > [2] https://github.com/openstack/python-openstackclient/blob/5.5.0/openstackclient/compute/v2/server.py#L1907-L1921 > [3] https://github.com/openstack/python-openstackclient/blob/5.5.0/openstackclient/compute/v2/server_group.py#L62-L67 > [4] https://storyboard.openstack.org/#!/story/2008975 > [5] https://github.com/openstack/python-openstackclient/commit/ab0b1fe885ee0a210a58008b631521025be7f3eb > > > From fungi at yuggoth.org Tue Jun 22 17:17:06 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 22 Jun 2021 17:17:06 +0000 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> Message-ID: <20210622171705.aubgvzkqamcyex4x@yuggoth.org> On 2021-06-22 17:39:42 +0100 (+0100), Stephen Finucane wrote: [...] 
> Apparently someone has been relying on a bug in Nova to pass a > different value to the API that what the schema should have > allowed, and they are dismayed that the client no longer allows > them to do this. [...] I can't find where they explained what new policy they've implemented in their fork. Perhaps if they elaborated on the use case, it could be it's something the Nova maintainers would accept a patch to officially extend the API to incorporate, allowing that deployment to un-fork? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Tue Jun 22 17:43:55 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 22 Jun 2021 18:43:55 +0100 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <20210622171705.aubgvzkqamcyex4x@yuggoth.org> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> Message-ID: On Tue, 2021-06-22 at 17:17 +0000, Jeremy Stanley wrote: > On 2021-06-22 17:39:42 +0100 (+0100), Stephen Finucane wrote: > [...] > > Apparently someone has been relying on a bug in Nova to pass a > > different value to the API that what the schema should have > > allowed, and they are dismayed that the client no longer allows > > them to do this. > [...] > > I can't find where they explained what new policy they've > implemented in their fork. Perhaps if they elaborated on the use > case, it could be it's something the Nova maintainers would accept a > patch to officially extend the API to incorporate, allowing that > deployment to un-fork? my understandign is that they are trying to model fault domains an have a fault domain aware anti affintiy policy that use host-aggreate or azs to model the fault to doamin. they reasched out to us downstream too about this and all i know so fart is they are implemetneign there own filter to do this which is valid. what is not valid ti extending a seperate api in this case the server group api to then use as an input to the out of tree filter. if they had use a schduler hint which inteionally support out of tree hints or a flaovr extra spec then it would be fine. the use fo a custom server group policy whne the server groups is not a defiend public extion point is the soucce of the confilct. the use case of an host aggrate anti affinti plicy while likely not efficent to implement is at leaset a somewhat resonable one that i could see supporting upstream. although there are many edgcases with regard to host being in mutliple host aggreates. if they are doing this based on avaiablity zone that is simler since a hsot can only be in one az. in anycase it woudl be nice if they brought there usecase upstream or even downstream so we could find a more supprotable way to enable it. From gmann at ghanshyammann.com Tue Jun 22 18:59:30 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 22 Jun 2021 13:59:30 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 24th at 1500 UTC Message-ID: <17a35177fb0.dd0c569265156.6885497306663202773@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for June 24th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, June 23rd, at 2100 UTC. 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From rosmaita.fossdev at gmail.com Tue Jun 22 19:48:43 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 22 Jun 2021 15:48:43 -0400 Subject: [cinder] topics for wednesday's cinder meeting Message-ID: Hello Cinder Team, There are two topics on the agenda for tomorrow that you may want to look at in advance. First, the specs deadline is Friday, so everyone should be looking at specs: https://review.opendev.org/q/project:openstack%252Fcinder-specs+status:open Second, Simon put a discussion of allowing `black` to be used as an auto-linter on the agenda. The question is whether this should be allowed as an optional tool for developers. There is an example patch posted: https://review.opendev.org/c/openstack/cinder/+/792462 and here is some general info about black: https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html The choice of a linter can be quite contentious, so the discussion will be timeboxed so that we'll have sufficient time to talk about specs. So I thought I'd sent out notice in advance so that we'll be able to make good use of meeting time. cheers, brian From rosmaita.fossdev at gmail.com Tue Jun 22 20:27:27 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 22 Jun 2021 16:27:27 -0400 Subject: [cinder] CI coverage for SPDK In-Reply-To: References: Message-ID: On 6/22/21 12:01 PM, Balazs Gibizer wrote: > Hi Cinder Team, > > I'm wondering what CI coverage cinder has for SPDK driver. The code > mentions that Mellanox CI[1] but I'm not able to find a run from that CI > in the recent cinder patches. Is this driver still covered somewhere? The Mellanox CI is the correct place to look for SPDK CI results. As recently as March (which, come to think of it, isn't all that recent) the Mellanox CI ran two test jobs, "Cinder-tgtadm" and "SPDK" [0]. Looks like the most recent Gerrit comments are showing only results from the "Cinder-tgtadm" job [1]. (And those are from 8 June, which aren't all that recent either.) I'll reach out to the Mellanox maintainer and maybe he can give the CI machine a kick. [0] https://review.opendev.org/c/openstack/os-brick/+/777086 [1] https://review.opendev.org/c/openstack/cinder/+/760199/ cheers, brian > > Cheers, > gibi > > [1] > https://github.com/openstack/cinder/blob/393c2e4ad90c05ebf28cc3a2c65811d7e1e0bc18/cinder/volume/drivers/spdk.py#L41 > > > > From andr.kurilin at gmail.com Tue Jun 22 20:49:47 2021 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Tue, 22 Jun 2021 23:49:47 +0300 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> Message-ID: Hi folks! As a guy who was top 1 contributor to novaclient at some point, I tried to extend a validation at the client-side as much as possible. I mean, I really like the approach when the user sees a validation error in an ms (or a second depending on the system and plugins) without passing the auth and sending any request to API, so big +1 to leave enum of possible choices there. BUT I have one concern here: does it possible that the number of official policies will be extended or it becomes pluggable(without patching of nova code itself)? In this case, it would be nice to be a bit less strict. вт, 22 июн. 2021 г. 
в 20:51, Sean Mooney : > On Tue, 2021-06-22 at 17:17 +0000, Jeremy Stanley wrote: > > On 2021-06-22 17:39:42 +0100 (+0100), Stephen Finucane wrote: > > [...] > > > Apparently someone has been relying on a bug in Nova to pass a > > > different value to the API that what the schema should have > > > allowed, and they are dismayed that the client no longer allows > > > them to do this. > > [...] > > > > I can't find where they explained what new policy they've > > implemented in their fork. Perhaps if they elaborated on the use > > case, it could be it's something the Nova maintainers would accept a > > patch to officially extend the API to incorporate, allowing that > > deployment to un-fork? > my understandign is that they are trying to model fault domains an have > a fault domain aware anti affintiy policy that use host-aggreate or azs > to model the fault to doamin. > > they reasched out to us downstream too about this and all i know so > fart is they are implemetneign there own filter to do this which is > valid. what is not valid ti extending a seperate api in this case the > server group api to then use as an input to the out of tree filter. > > if they had use a schduler hint which inteionally support out of tree > hints or a flaovr extra spec then it would be fine. the use fo a custom > server group policy whne the server groups is not a defiend public > extion point is the soucce of the confilct. > > the use case of an host aggrate anti affinti plicy while likely not > efficent to implement is at leaset a somewhat resonable one that i > could see supporting upstream. although there are many edgcases with > regard to host being in mutliple host aggreates. if they are doing this > based on avaiablity zone that is simler since a hsot can only be in one > az. > > in anycase it woudl be nice if they brought there usecase upstream or > even downstream so we could find a more supprotable way to enable it. > > > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 23 00:31:00 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 23 Jun 2021 01:31:00 +0100 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> Message-ID: <96f80a88c3f387b4f065afc7a1b7295049b38468.camel@redhat.com> On Tue, 2021-06-22 at 23:49 +0300, Andrey Kurilin wrote: > Hi folks! > > As a guy who was top 1 contributor to novaclient at some point, I > tried to > extend a validation at the client-side as much as possible. > I mean, I really like the approach when the user sees a validation > error in > an ms (or a second depending on the system and plugins) without > passing the > auth and sending any request to API, so big +1 to leave enum of > possible > choices there. i agree with keeping the validation in the clinet by the way. more details on the usecase tehy had below. > > BUT I have one concern here: does it possible that the number of > official > policies will be extended or it becomes pluggable(without patching of > nova > code itself)? no that is not possible, any extenions of the policies woudl requrie a new micro version. we have previously added new microversion when we added the soft affintiy policy. as with any new microversion we woudl naturally also extend the clinet to support that new microverison. 
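to make that concrete, the kind of check being discussed is roughly of this shape. this is only an illustrative sketch, not the actual openstackclient code, and the policy-to-microversion mapping below is an example rather than a definitive list:

    # illustrative sketch of client-side enum validation for server group
    # policies; the mapping of policy name to the microversion that
    # introduced it is an assumption for the example, not authoritative.
    KNOWN_POLICIES = {
        'affinity': '2.1',
        'anti-affinity': '2.1',
        'soft-affinity': '2.15',
        'soft-anti-affinity': '2.15',
    }

    def _parse(version):
        return tuple(int(part) for part in version.split('.'))

    def validate_policy(policy, requested_microversion):
        """Fail early, before any request is sent to the compute API."""
        if policy not in KNOWN_POLICIES:
            raise ValueError(
                'invalid policy %r, expected one of: %s'
                % (policy, ', '.join(sorted(KNOWN_POLICIES))))
        if _parse(requested_microversion) < _parse(KNOWN_POLICIES[policy]):
            raise ValueError(
                'policy %r requires compute API microversion %s or later'
                % (policy, KNOWN_POLICIES[policy]))

a policy added by a future microversion is then just a new entry in that table, shipped with the client release that supports it.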
nova has not supported api extensions for a very long time and i don't foresee us making this pluggable or reintroducing api extensions in the short to near term.

i received more information from our downstream support engineers on what the customer is actually trying to do. our support engineers discovered this old spec from jay to add aggregate affinity policies. https://review.opendev.org/c/openstack/nova-specs/+/529135/6/specs/rocky/approved/aggregate-affinity.rst

the customer use case is rack level affinity/anti-affinity to ensure that vms are scheduled to different top of rack switches. they model those tor failure domains as host aggregates and implemented a custom filter to implement that affinity and anti-affinity. they also, however, modified the server side validation, breaking microversion compatibility, by introducing new tor-affinity policies.

skimming the spec it is mainly focused on ironic, but i'm really not sure why we have not added an aggregate level affinity/anti-affinity policy already. it has applications for non-ironic hosts too and would provide a way to do generic fault domain modeling. granted, it would be nice to model this in placement somehow, but supporting it at all would be valuable. we have added affinity rules for anti-affinity in the form of max_server_per_host https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/complex-anti-affinity-policies.html to me, specifying a failure domain in the form of an affinity_scope e.g. affinity_scope=host or affinity_scope=aggregate, or a max_servers_per_aggregate rule, could be another alternative to a new policy, but the overall use case seems valid regardless of how we address it.

doing this as a nova fork/api extension however is not really a valid reason to remove validation from the client. they could also patch the client with an osc plugin or create a fork of it, presumably. they would just have to monkey patch the existing server group command or reimplement it and override the in-tree one.

> In this case, it would be nice to be a bit less strict. > > вт, 22 июн. 2021 г. в 20:51, Sean Mooney : > > > On Tue, 2021-06-22 at 17:17 +0000, Jeremy Stanley wrote: > > > On 2021-06-22 17:39:42 +0100 (+0100), Stephen Finucane wrote: > > > [...] > > > > Apparently someone has been relying on a bug in Nova to pass a > > > > different value to the API that what the schema should have > > > > allowed, and they are dismayed that the client no longer allows > > > > them to do this. > > > [...] > > > > > > I can't find where they explained what new policy they've > > > implemented in their fork. Perhaps if they elaborated on the use > > > case, it could be it's something the Nova maintainers would accept > > > a > > > patch to officially extend the API to incorporate, allowing that > > > deployment to un-fork? > > my understandign is that they are trying to model fault domains an > > have > > a fault domain aware anti affintiy policy that use host-aggreate or > > azs > > to model the fault to doamin. > > > > they reasched out to us downstream too about this and all i know so > > fart is they are implemetneign there own filter to do this which is > > valid. what is not valid ti extending a seperate api in this case the > > server group api to then use as an input to the out of tree filter. > > > > if they had use a schduler hint which inteionally support out of tree > > hints or a flaovr extra spec then it would be fine. the use fo a > > custom > > server group policy whne the server groups is not a defiend public > > extion point is the soucce of the confilct.
> > > > the use case of an host aggrate anti affinti plicy while likely not > > efficent to implement is at leaset a somewhat resonable one that i > > could see supporting upstream. although there are many edgcases with > > regard to host being in mutliple host aggreates. if they are doing > > this > > based on avaiablity zone that is simler since a hsot can only be in > > one > > az. > > > > in anycase it woudl be nice if they brought there usecase upstream or > > even downstream so we could find a more supprotable way to enable it. > > > > > > > > > From balazs.gibizer at est.tech Wed Jun 23 07:27:33 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Wed, 23 Jun 2021 09:27:33 +0200 Subject: [cinder] CI coverage for SPDK In-Reply-To: References: Message-ID: On Tue, Jun 22, 2021 at 16:27, Brian Rosmaita wrote: > On 6/22/21 12:01 PM, Balazs Gibizer wrote: >> Hi Cinder Team, >> >> I'm wondering what CI coverage cinder has for SPDK driver. The code >> mentions that Mellanox CI[1] but I'm not able to find a run from >> that CI in the recent cinder patches. Is this driver still covered >> somewhere? > > The Mellanox CI is the correct place to look for SPDK CI results. As > recently as March (which, come to think of it, isn't all that recent) > the Mellanox CI ran two test jobs, "Cinder-tgtadm" and "SPDK" [0]. > Looks like the most recent Gerrit comments are showing only results > from the "Cinder-tgtadm" job [1]. (And those are from 8 June, which > aren't all that recent either.) > > I'll reach out to the Mellanox maintainer and maybe he can give the > CI machine a kick. Thank you Brian. cheers, gibi > > [0] https://review.opendev.org/c/openstack/os-brick/+/777086 > [1] https://review.opendev.org/c/openstack/cinder/+/760199/ > > cheers, > brian > >> >> Cheers, >> gibi >> >> [1] >> https://github.com/openstack/cinder/blob/393c2e4ad90c05ebf28cc3a2c65811d7e1e0bc18/cinder/volume/drivers/spdk.py#L41 >>  >> >> >> > > From kklimonda at syntaxhighlighted.com Wed Jun 23 08:10:08 2021 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Wed, 23 Jun 2021 10:10:08 +0200 Subject: =?UTF-8?Q?[neutron]_OVS_tunnels_and_VLAN_provider_networks_on_the_same_i?= =?UTF-8?Q?nterface?= Message-ID: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> Hi All, What is the best practice for sharing same interface between OVS tunnels and VLAN-based provider networks? For provider networks to work, I must "bind" entire interface to vswitchd, so that it can handle vlan bits, but this leaves me with a question of how to plug ovs tunnel interface (and os internal used for control<->compute communication, if shared). I have two ideas: 1) I can bind entire interface to ovs-vswitchd (in ip link output it's marked with "master ovs-system") and create vlan interfaces on top of that interface *in the system*. This seems to be working correctly in my lab tests. 2) I can create internal ports in vswitchd and plug them into ovs bridge - this will make the interface show up in the system, and I can configure it afterwards. In this setup I'm concerned with how packets from VMs to other computes will flow through the system - will they leave openvswitch to host system just to go back again to be sent through a tunnel? I've tried looking for some documentation regarding that, but came up empty - are there some links I could look at to get a better understanding of packet flow and best practices? 
Best Regards, -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com From stephenfin at redhat.com Wed Jun 23 09:16:21 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 23 Jun 2021 10:16:21 +0100 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> Message-ID: <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> On Tue, 2021-06-22 at 23:49 +0300, Andrey Kurilin wrote: > Hi folks! > > As a guy who was top 1 contributor to novaclient at some point, I tried to > extend a validation at the client-side as much as possible. > I mean, I really like the approach when the user sees a validation error in an > ms (or a second depending on the system and plugins) without passing the auth > and sending any request to API, so big +1 to leave enum of possible choices > there. Cool, that's my thinking also. > BUT I have one concern here: does it possible that the number of official > policies will be extended or it becomes pluggable(without patching of nova > code itself)? In this case, it would be nice to be a bit less strict. As Sean has said elsewhere, there's no way to extend this without a microversion. I think it's fair to request that users upgrade their client if they wish to support newer microversions. Stephen > > вт, 22 июн. 2021 г. в 20:51, Sean Mooney : > > On Tue, 2021-06-22 at 17:17 +0000, Jeremy Stanley wrote: > > > On 2021-06-22 17:39:42 +0100 (+0100), Stephen Finucane wrote: > > > [...] > > > > Apparently someone has been relying on a bug in Nova to pass a > > > > different value to the API that what the schema should have > > > > allowed, and they are dismayed that the client no longer allows > > > > them to do this. > > > [...] > > > > > > I can't find where they explained what new policy they've > > > implemented in their fork. Perhaps if they elaborated on the use > > > case, it could be it's something the Nova maintainers would accept a > > > patch to officially extend the API to incorporate, allowing that > > > deployment to un-fork? > > my understandign is that they are trying to model fault domains an have > > a fault domain aware anti affintiy policy that use host-aggreate or azs > > to model the fault to doamin. > > > > they reasched out to us downstream too about this and all i know so > > fart is they are implemetneign there own filter to do this which is > > valid. what is not valid ti extending a seperate api in this case the > > server group api to then use as an input to the out of tree filter. > > > > if they had use a schduler hint which inteionally support out of tree > > hints or a flaovr extra spec then it would be fine. the use fo a custom > > server group policy whne the server groups is not a defiend public > > extion point is the soucce of the confilct. > > > > the use case of an host aggrate anti affinti plicy while likely not > > efficent to implement is at leaset a somewhat resonable one that i > > could see supporting upstream. although there are many edgcases with > > regard to host being in mutliple host aggreates. if they are doing this > > based on avaiablity zone that is simler since a hsot can only be in one > > az. > > > > in anycase it woudl be nice if they brought there usecase upstream or > > even downstream so we could find a more supprotable way to enable it. > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ekuvaja at redhat.com Wed Jun 23 09:45:55 2021 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Wed, 23 Jun 2021 10:45:55 +0100 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> Message-ID: On Tue, Jun 22, 2021 at 5:43 PM Stephen Finucane wrote: > Hey, > > We have an interesting problem that I wanted to poll opinions on. In OSC > 5.5.0, > we closed most of the gaps between novaclient and openstackclient. As part > of > these changes, we introduced validation of a number of requests such as > validating enum-style values. For example, [1][2][3]. This validation > already > occurs on the server side, but by adding it to the client side we prevent > users > sending invalid requests to the server in the first place and allow users > to > discover the correct API behaviour from the client rather than having to > read > the API docs or use trial and error. > > Now, an issue has been opened against OSC. Apparently someone has been > relying > on a bug in Nova to pass a different value to the API that what the schema > should have allowed, and they are dismayed that the client no longer > allows them > to do this. They have asked [4][5] that we relax the client-side > validation to > allow them to continue relying on this bug. As you can probably tell from > my > comments, this seems to me to be an open and shut case: you shouldn't fork > an > OpenStack API and you shouldn't side-step validation. However, I wanted to > see > if anyone disagreed and thought there was merit in loose or no validation > of API > requests made via our clients. > > Let me know what you think, > Stephen > > [1] > https://github.com/openstack/python-openstackclient/blob/5.5.0/openstackclient/compute/v2/server.py#L1789-L1808 > [2] > https://github.com/openstack/python-openstackclient/blob/5.5.0/openstackclient/compute/v2/server.py#L1907-L1921 > [3] > https://github.com/openstack/python-openstackclient/blob/5.5.0/openstackclient/compute/v2/server_group.py#L62-L67 > [4] https://storyboard.openstack.org/#!/story/2008975 > [5] > https://github.com/openstack/python-openstackclient/commit/ab0b1fe885ee0a210a58008b631521025be7f3eb > > > Hi all, My quick two cents in perspective of what we have been doing in Glance for multiple years already. Fail as early as possible. We do have checks on the API layer already way before we hit the code that would fail to recognize patterns we know would fail later on. We do extend this to the client as well. Specially as glanceclient may send multiple requests to the API for single user command we try to identify possible issues in advance. Good example of this is during image creation. If a user makes clent call that would result an active image but is missing, say either of the disk or container formats, we do know that activating said image would fail and we fail it to the user already on the client before sending a single request to the API. Makes it fast, we do not create image resources that would not get used in the case the user just reruns the same command with missing information and everyone wins. We have been advocates of extending our "Fail early" attitude to the client for a very long time and I think it's a good practise. - Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fabian.wiesel at sap.com Wed Jun 23 10:21:58 2021 From: fabian.wiesel at sap.com (Wiesel, Fabian) Date: Wed, 23 Jun 2021 10:21:58 +0000 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> Message-ID: <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> Hi, I take a different view, possibly because I am in a similar position as the requestor. I also work on a openstack installation, which we need to patch to our needs. We try to do everything upstream first, but chances are, there will be changes which are not upstreamable. We also have large user-base, and it is a great advantage to be able to point people to the official client, even if the server is not the official one. A strict client policy would require us to fork the client as well, and distribute that to our user-base. With a couple of thousand users, that is not so trivial. In my point-of-view, such a decision would tightly couple the client to the server for a limited benefit (a fraction of seconds earlier error message). As a compromise, I would suggest to make the client validation configurable as in kubectl with --validate=true. Cheers, Fabian From ralonsoh at redhat.com Wed Jun 23 10:30:16 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 23 Jun 2021 12:30:16 +0200 Subject: [neutron] OVS tunnels and VLAN provider networks on the same interface In-Reply-To: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> References: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> Message-ID: Hello Krzysztof: If I understand correctly, what you need is to share a single interface to handle VLAN and tunneled traffic. IMO, you can replicate the same scenario as with OVS-DPDK: https://docs.openvswitch.org/en/latest/howto/userspace-tunneling/ - The VLAN traffic exits the host using the physical bridge that is connected to the external interface. - The tunneled traffic is sent to br-tun. There the traffic is tagged and sent to the physical bridge and then through the physical interface. Regards. On Wed, Jun 23, 2021 at 10:14 AM Krzysztof Klimonda < kklimonda at syntaxhighlighted.com> wrote: > Hi All, > > What is the best practice for sharing same interface between OVS tunnels > and VLAN-based provider networks? For provider networks to work, I must > "bind" entire interface to vswitchd, so that it can handle vlan bits, but > this leaves me with a question of how to plug ovs tunnel interface (and os > internal used for control<->compute communication, if shared). I have two > ideas: > > 1) I can bind entire interface to ovs-vswitchd (in ip link output it's > marked with "master ovs-system") and create vlan interfaces on top of that > interface *in the system*. This seems to be working correctly in my lab > tests. > > 2) I can create internal ports in vswitchd and plug them into ovs bridge - > this will make the interface show up in the system, and I can configure it > afterwards. In this setup I'm concerned with how packets from VMs to other > computes will flow through the system - will they leave openvswitch to host > system just to go back again to be sent through a tunnel? 
> > I've tried looking for some documentation regarding that, but came up > empty - are there some links I could look at to get a better understanding > of packet flow and best practices? > > Best Regards, > > -- > Krzysztof Klimonda > kklimonda at syntaxhighlighted.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 23 10:45:29 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 23 Jun 2021 11:45:29 +0100 Subject: [neutron] OVS tunnels and VLAN provider networks on the same interface In-Reply-To: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> References: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> Message-ID: <017ee9ab21a22dc2534d0b15668173151d9bcbbf.camel@redhat.com> On Wed, 2021-06-23 at 10:10 +0200, Krzysztof Klimonda wrote: > Hi All, > > What is the best practice for sharing same interface between OVS > tunnels and VLAN-based provider networks? For provider networks to > work, I must "bind" entire interface to vswitchd, so that it can handle > vlan bits, but this leaves me with a question of how to plug ovs tunnel > interface (and os internal used for control<->compute communication, if > shared). I have two ideas: you assign the ovs tunnel interface ip to the bridge with the physical interfaces. this is standard practice when using ovs-dpdk for example as otherwise the tunnel traffic will not be dpdk acclerated. i suspect the same requirement exits for hardware offloaded ovs. the bridge local port e.g. br-ex is a interface type internal port. ovs uses a chace of the host routing table to determin what interface to send the (vxlan,gre,geneve) encapsulated packet too based on the next hop interface in the routing table. if you assgign the tunnel local endpoint ip to an ovs bride it enable an internal optimisation that usesa a spescial out_port action that encuse the encapped packet on the bridge port's recive quene then simple mac learing enables it to forward the packet via the physical interface. that is the openflow view a thte dataplant view with ovs-dpctl (or ovs- appctl for dpdk) you will see that the actual datapath flow will just encap the packet and transmit it via physical interface although for this to hapen theere must be a path between the br-tun and tbe br-ex via the br-int that is interconnected via patch ports. creating a patch port via the br-ex and br-int and another pair between the br-tun and br-int can be done automaticaly by the l2 agent wtih teh correct fconfiguration and that allows ovs to collapse the bridge into a singel datapath instnace and execut this optimisation. this has been implemented in the network-ovs-dpdk devstack plugin and then we had it prot to fuel and tripleo depending on you installer it may already support this optimisation but its perfectly valid for kernel ovs also. > > 1) I can bind entire interface to ovs-vswitchd (in ip link output it's > marked with "master ovs-system") and create vlan interfaces on top of > that interface *in the system*. This seems to be working correctly in > my lab tests. that inefficent since it required the packet to be rpcessed by ovs then sent to the kernel networking stack to finally be set via the vlan interface. > > 2) I can create internal ports in vswitchd and plug them into ovs > bridge - this will make the interface show up in the system, and I can > configure it afterwards. 
In this setup I'm concerned with how packets > from VMs to other computes will flow through the system - will they > leave openvswitch to host system just to go back again to be sent > through a tunnel? this would also work simiar t what i suggested above but its simpelr to just use the bridge local port instead. the packtes shoudl not leave ovs and renter in this case. and you can verify that by looking at the dataplane flows. > > I've tried looking for some documentation regarding that, but came up > empty - are there some links I could look at to get a better > understanding of packet flow and best practices? > > Best Regards, > From smooney at redhat.com Wed Jun 23 11:01:54 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 23 Jun 2021 12:01:54 +0100 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> Message-ID: <3b2379c94a54dfc7aedd0fbe70f4613909d39168.camel@redhat.com> On Wed, 2021-06-23 at 10:21 +0000, Wiesel, Fabian wrote: > Hi, > > I take a different view, possibly because I am in a similar position > as the requestor. > I also work on a openstack installation, which we need to patch to > our needs. > We try to do everything upstream first, but chances are, there will > be changes which are not upstreamable. > > We also have large user-base, and it is a great advantage to be able > to point people to the official client, even if the server is not the > official one. > A strict client policy would require us to fork the client as well, > and distribute that to our user-base. With a couple of thousand > users, that is not so trivial. > In my point-of-view, such a decision would tightly couple the client > to the server for a limited benefit (a fraction of seconds earlier > error message). > > As a compromise, I would suggest to make the client validation > configurable as in kubectl with --validate=true. kubernets has a very differnet approch to api stablity and extensiblity they have versioned extions and support mutlipe versions fo the same extension over tiem. they alsow have a purly plugable api where you can define new contoler to impelent new behavior allowing any depleyment ot have a complete different set of requests and featucre they develop loacl to be integrated into kubernetes which posses problems for interoperatblity between differnt k8s instalations. if we were to add a new gobal option for this we would have to also ensure it default to validating by default. what i think might be a better UX would be for operator to not ship a forked clinet persay but to ship a plugin to the client that also adds your extensions. my other concern with allowing validation to be disabled is that we likely depend on it in part of the code to ensure code is not run unless it passses the validation. it woudl be ineffiecnt to have code to chekc for our precondition to call a function in addtion to the validation so user might get tracebacks or orhter unfriendly errors if they disabled validation. the client validation we have today i belive only enforce enum for example where the value has a fixed set of values. 
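as a schematic sketch of how a deployment could widen such a fixed set from a plugin instead of forking the whole client (the class, option and policy names here are made up for illustration and are not the real openstackclient command or its wiring):

    # schematic sketch only: a downstream plugin subclassing a command to
    # extend the allowed enum; names are illustrative, not actual
    # openstackclient classes or entry points.
    import argparse

    UPSTREAM_POLICIES = (
        'affinity', 'anti-affinity', 'soft-affinity', 'soft-anti-affinity')

    class CreateServerGroupCommand:
        policies = UPSTREAM_POLICIES

        def get_parser(self, prog_name):
            parser = argparse.ArgumentParser(prog=prog_name)
            parser.add_argument('name')
            parser.add_argument('--policy', required=True,
                                choices=self.policies)
            return parser

    class VendorCreateServerGroupCommand(CreateServerGroupCommand):
        # only valid against this deployment's modified compute API
        policies = UPSTREAM_POLICIES + ('tor-anti-affinity',)

the downstream plugin would then register its subclass through the client's normal plugin/entry point mechanism so it is picked up in place of the in-tree command, keeping the upstream client untouched.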
if the field in the api is an unbounded string then the client woudl not perfrom validation of the value of the argument although if we knwo that that argument is only valid if other flags are set then we might check for those. for example if the argument rquires a minium microversion to be used we may not check the value fo the opaqu string filed but woudl validate the microverion range. if you enxtedn the supported feature set in your installation and want to enable the standrd client to work with that you can simply extend the allowed set with a plugin. > > Cheers, >   Fabian > > From artem.goncharov at gmail.com Wed Jun 23 11:03:13 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 23 Jun 2021 13:03:13 +0200 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> Message-ID: <29FD1A0B-9861-4DF6-858A-6410E8C8C9FB@gmail.com> Hi > On 23. Jun 2021, at 12:21, Wiesel, Fabian wrote: > > Hi, > > I take a different view, possibly because I am in a similar position as the requestor. > I also work on a openstack installation, which we need to patch to our needs. > We try to do everything upstream first, but chances are, there will be changes which are not upstreamable. > > We also have large user-base, and it is a great advantage to be able to point people to the official client, even if the server is not the official one. > A strict client policy would require us to fork the client as well, and distribute that to our user-base. With a couple of thousand users, that is not so trivial. > In my point-of-view, such a decision would tightly couple the client to the server for a limited benefit (a fraction of seconds earlier error message). You touch a very interesting and slippy pot. I belong also to this unlucky category and need to say: - once forked amount of differences only grows - differences in the beginning may be coverable by compromises, but most likely at some point you will reach dead end and need to consider alternative solutions - delivery of the client (here especially talking about OSC) is not that complex as you think - we have a project that adds plugins into OSC and in some cases overrides native behaviour of it. Delivery is as easy as “pip install openstackclient MY_FOKED_CLOUD_PLUGINS_PROJECT”. It is not that different from initially doing “pip install openstackclient" - fraction of seconds there, fraction in another place and suddenly users are crying: why the heck this tool is so slow (here I mean another side of the coin where simply forcing users to retry invocation with corrected set of parameters with +4s for initialisations, +1s on laggy network, + some more retries with further problems, + cleaning up after really failed attempts, etc are making users mad) - I am personally 100% belonging to "fail early” group. It just take much more efforts explaining to the user what this bloody server response without any message in it means (we all know the sad reality of usefulness of some of the responses). > As a compromise, I would suggest to make the client validation configurable as in kubectl with --validate=true. 
Sounds really like a reasonable compromise (but I would reverse the flag to allow skipping - I hate possibility to create broken resources), but as I mentioned earlier - sooner or later you will start paying for the fork. So start doing things proper from the beginning. Regards, Artem From ignaziocassano at gmail.com Wed Jun 23 11:24:07 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 23 Jun 2021 13:24:07 +0200 Subject: [Kolla][nova] /var/lib/nova/instances on nfs In-Reply-To: <2f34daf733ca9a780d8cb09dd2186e2d3dea8ad2.camel@redhat.com> References: <6d32e2936ac9cc88c8f6148a375446cace13b0b8.camel@redhat.com> <2f34daf733ca9a780d8cb09dd2186e2d3dea8ad2.camel@redhat.com> Message-ID: Hello Sean, you are right. Setting cinder nfs it works fine and we do not need do share /var/lib/nova. Probably there was some errors in my previous installation. Ignazio Il giorno mar 22 giu 2021 alle ore 14:32 Sean Mooney ha scritto: > On Tue, 2021-06-22 at 13:57 +0200, Ignazio Cassano wrote: > > Hello , I am using cinder with netapp driver but if I do not mount a > > share > > under /var/lib/docker/volumes/nova_compute/_data live migration does > > not > > work because an error is disblayed: shared storage in needes or > > something > > like that. > > I do not understand why is does not notice that volumes are shared > > so these are cinder boot form volume guests? > if you are usign the correct microverion it should detect that its > shared storage automaticaly when you do a migration can you confirm the > command you are usign to do the migration and that its a boot form > volumen guest not an image backed guest with a data volumn. > > > > Ignazio > > > > Il giorno mar 22 giu 2021 alle ore 13:43 Sean Mooney < > > smooney at redhat.com> > > ha scritto: > > > > > On Tue, 2021-06-22 at 13:02 +0200, Radosław Piliszek wrote: > > > > Hello Ignazio, > > > > > > > > If you are not using Cinder NFS backend already, you need to set: > > > > > > > > enable_shared_var_lib_nova_mnt: yes > > > > > > > > And yes, you need to manage fstab yourself, mounting the shared > > > > nfs > > > > at > > > > /var/lib/nova/mnt > > > > > > > > It must happen before the containers are started (so before > > > > deploy or > > > > redeploy). > > > i dont think they were refering to cinder nfs. > > > we have support for deploying novas state directory and libvirts > > > stroage on nfs in nvoa when usign the raw/qcow image backend. > > > > > > in general i advise against that but it is supported. > > > you should ensure that you use nfs v4 preferable nfs v4.2 or newer > > > > > > with my downstream hat on we droped supprot for nfs v3 many years > > > ago > > > and the last lts release we hadd that supported it was based on > > > newton. > > > technially we dont have a min nfs version requirement ustream but > > > at > > > some point i think we shoudl enforce at least nfs v4 upstream too. > > > there are several known locking issues with nfs v3 that make it > > > generally problematic to use at scale with nova that manifest > > > intermietnly during move operations. > > > > > > the same may or may not be true with nfs via cinder but that is one > > > of > > > the less well tested and hardened cinder backends to use with nova. > > > > > > > > > > > > > > -yoctozepto > > > > > > > > On Tue, Jun 22, 2021 at 12:11 PM Ignazio Cassano > > > > wrote: > > > > > > > > > > Hello Stackers, is there any configuration parameter in kolla > > > > > for > > > > > sharing nova on nfs between compute nodes ? 
Or I must insert an > > > > > entry in fstab ? > > > > > Thanks > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Wed Jun 23 11:31:14 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Wed, 23 Jun 2021 14:31:14 +0300 Subject: [nova] SCS standardized flavor naming Message-ID: <1508311624445726@mail.yandex.ru> Hi! > The point is, a new customer will *not* spend time reading the spec. > Typically, they will want to just fire-up a VM quickly without reading > too much docs... While I find Thomases flavor naming also not really intuitive for customers - out of nvt4-a8-ram24-disk50-perf2 I could guess only disk size and amout of ram (is it in gygabytes?) but "SCS-16T:64:200s-GNa:64-ib" doesn't make any sense to me at all (if I haven't read [1] ofc). I totally agree that no user in Public cloud would read any spec before launching their VM and it would be super hard to force them to do so. So flavor naming should be as explicit and readable as possible and assuming that person who will use cloud has no idea about specs we're making. These specs should be designed for cloud providers to comply and have same standards so users feel comfortable and secure, but don't assume regular users to have special skills in reading what engineers come up to. If regular users would find this hard to use, companies might choose hapiness of customers over some compliance. As nova doesn't have any text description for flavors, so flavor name is everything we have to expose to the customers and it should be clean and readable from the first sight. > nvt4-a8-ram24-disk50-perf2 > > This means: > - nvt4: nvidia T4 GPU > - a8: AMD VCPU 8 (we also have i4 for example, for Intel) > - ram24: 24 GB of RAM > - disk50: 50 GB of local system disk > - perf2: level 2 of IOps / IO bandwidth So what I'd suggest to cover that usecase would be smth like: 8vCPU-24576RAM-50SSD-pGPU:T4-10kIOPS-EPYC4 > SCS-8C:32:2x200S-bms-i2-GNa:64-ib > [4] In case you wonder: 8 dedicated cores, 32GiB RAM, 2x200GB SSD disks > on bare metal sys, intel Cascade Lake, nVidia GPU with 64 Ampere SMs > and InfiniBand. Would be probably smth like: 8pCPU-32768RAM-2x200SSD-2vGPU:A100-IB-Cascade [1] https://github.com/SovereignCloudStack/Operational-Docs/blob/main/flavor-naming-draft.MD -- Kind Regards, Dmitriy Rabotyagov From amy at demarco.com Wed Jun 23 12:13:02 2021 From: amy at demarco.com (Amy Marrich) Date: Wed, 23 Jun 2021 07:13:02 -0500 Subject: [TC] Open Infra Live- Open Source Governance In-Reply-To: References: <187a78ef-0e29-d7dd-5506-73515fb28dbd@gmail.com> Message-ID: Kendall, If you need a third or a back up let me know, Thanks, Amy On Tue, Jun 15, 2021 at 11:56 AM Kendall Nelson wrote: > It would be great to have both of you join! I passed your names onto Erin. > She will reach out at some point soon. > > -Kendall (diablo_rojo) > > On Mon, Jun 14, 2021 at 10:57 AM Jay Bryant wrote: > >> >> On 6/14/2021 9:45 AM, Kendall Nelson wrote: >> >> Hello TC Folks :) >> >> So I have been tasked with helping to collect a couple volunteers for our >> July 29th episode of Open Infra Live (at 14:00 UTC) on open source >> governance. >> >> I am also working on getting a couple members from the k8s steering >> committee to join us that day. >> >> If you are interested in participating, please let me know! I only need >> like two volunteers, but if we have more people than that dying to join in, >> I am sure we can work it out. 
>> >> I can help if you need another person. Let me know. >> >> Jay >> >> Thanks! >> >> -Kendall Nelson (diablo_rojo) >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jun 23 13:14:36 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 23 Jun 2021 14:14:36 +0100 Subject: [nova] SCS standardized flavor naming In-Reply-To: <1508311624445726@mail.yandex.ru> References: <1508311624445726@mail.yandex.ru> Message-ID: <47e88e68efe3dad9b7f165e4738fca7708944cde.camel@redhat.com> On Wed, 2021-06-23 at 14:31 +0300, Dmitriy Rabotyagov wrote: > Hi! > > > The point is, a new customer will *not* spend time reading the > > spec. > > Typically, they will want to just fire-up a VM quickly without > > reading > > too much docs... > > While I find Thomases flavor naming also not really intuitive for > customers - out of nvt4-a8-ram24-disk50-perf2 I could guess only disk > size and amout of ram (is it in gygabytes?) but "SCS-16T:64:200s- > GNa:64-ib" doesn't make any sense to me at all (if I haven't read [1] > ofc). > > I totally agree that no user in Public cloud would read any spec > before launching their VM and it would be super hard to force them to > do so. > > So flavor naming should be as explicit and readable as possible and > assuming that person who will use cloud has no idea about specs we're > making. These specs should be designed for cloud providers to comply > and have same standards so users feel comfortable and secure, but > don't assume regular users to have special skills in reading what > engineers come up to. If regular users would find this hard to use, > companies might choose hapiness of customers over some compliance. > > As nova doesn't have any text description for flavors, so flavor name > is everything we have to expose to the customers and it should be > clean and readable from the first sight. > > > nvt4-a8-ram24-disk50-perf2 > > > > This means: > > - nvt4: nvidia T4 GPU > > - a8: AMD VCPU 8 (we also have i4 for example, for Intel) > > - ram24: 24 GB of RAM > > - disk50: 50 GB of local system disk > > - perf2: level 2 of IOps / IO bandwidth > > So what I'd suggest to cover that usecase would be smth like: > 8vCPU-24576RAM-50SSD-pGPU:T4-10kIOPS-EPYC4 that is somewhat readable but in general i dont think we shoudl be advocationg for standarised naming of flavor across clouds in general. we might be able to encode some info but really user shoudl read the extra specs and falvor values not realy on a nameing scheme. > > > SCS-8C:32:2x200S-bms-i2-GNa:64-ib > > [4] In case you wonder: 8 dedicated cores, 32GiB RAM, 2x200GB SSD > > disks > >     on bare metal sys, intel Cascade Lake, nVidia GPU with 64 > > Ampere SMs > >     and InfiniBand. 
> > Would be probably smth like: > 8pCPU-32768RAM-2x200SSD-2vGPU:A100-IB-Cascade > > [1] > https://github.com/SovereignCloudStack/Operational-Docs/blob/main/flavor-naming-draft.MD > From kklimonda at syntaxhighlighted.com Wed Jun 23 13:50:11 2021 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Wed, 23 Jun 2021 15:50:11 +0200 Subject: =?UTF-8?Q?Re:_[neutron]_OVS_tunnels_and_VLAN_provider_networks_on_the_sa?= =?UTF-8?Q?me_interface?= In-Reply-To: <017ee9ab21a22dc2534d0b15668173151d9bcbbf.camel@redhat.com> References: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> <017ee9ab21a22dc2534d0b15668173151d9bcbbf.camel@redhat.com> Message-ID: <037fe7ca-206e-4ab8-9a1e-7f30e7bbe7f9@www.fastmail.com> Thanks, Does this assume that the ovs tunnel traffic is untagged, and there are no other tagged vlans that we want to direct to the host instead of ovs? What if I want ovs to handle only a subset of VLANs and have other directed to the host? That would probably work with with my second option (modulo possible loss of connectivity if vswitchd goes down?) but I'm not sure how to do that with ovs bridges - with normal bridge, I can make it vlan-aware but I'm not sure how this would work with ovs. Best Regards, Krzysztof On Wed, Jun 23, 2021, at 12:45, Sean Mooney wrote: > On Wed, 2021-06-23 at 10:10 +0200, Krzysztof Klimonda wrote: > > Hi All, > > > > What is the best practice for sharing same interface between OVS > > tunnels and VLAN-based provider networks? For provider networks to > > work, I must "bind" entire interface to vswitchd, so that it can handle > > vlan bits, but this leaves me with a question of how to plug ovs tunnel > > interface (and os internal used for control<->compute communication, if > > shared). I have two ideas: > > you assign the ovs tunnel interface ip to the bridge with the physical > interfaces. this is standard practice when using ovs-dpdk for example > as otherwise the tunnel traffic will not be dpdk acclerated. i suspect > the same requirement exits for hardware offloaded ovs. > > the bridge local port e.g. br-ex is a interface type internal port. > ovs uses a chace of the host routing table to determin what interface > to send the (vxlan,gre,geneve) encapsulated packet too based on the > next hop interface in the routing table. if you assgign the tunnel > local endpoint ip to an ovs bride it enable an internal optimisation > that usesa a spescial out_port action that encuse the encapped packet > on the bridge port's recive quene then simple mac learing enables it to > forward the packet via the physical interface. > > that is the openflow view a thte dataplant view with ovs-dpctl (or ovs- > appctl for dpdk) you will see that the actual datapath flow will just > encap the packet and transmit it via physical interface although for > this to hapen theere must be a path between the br-tun and tbe br-ex > via the br-int that is interconnected via patch ports. > > creating a patch port via the br-ex and br-int and another pair between > the br-tun and br-int can be done automaticaly by the l2 agent wtih teh > correct fconfiguration and that allows ovs to collapse the bridge into > a singel datapath instnace and execut this optimisation. > > this has been implemented in the network-ovs-dpdk devstack plugin and > then we had it prot to fuel and tripleo depending on you installer it > may already support this optimisation but its perfectly valid for > kernel ovs also. 
> > > > > > 1) I can bind entire interface to ovs-vswitchd (in ip link output it's > > marked with "master ovs-system") and create vlan interfaces on top of > > that interface *in the system*. This seems to be working correctly in > > my lab tests. > that inefficent since it required the packet to be rpcessed by ovs then > sent to the kernel networking stack to finally be set via the vlan > interface. > > > > 2) I can create internal ports in vswitchd and plug them into ovs > > bridge - this will make the interface show up in the system, and I can > > configure it afterwards. In this setup I'm concerned with how packets > > from VMs to other computes will flow through the system - will they > > leave openvswitch to host system just to go back again to be sent > > through a tunnel? > this would also work simiar t what i suggested above but its simpelr to > just use the bridge local port instead. the packtes shoudl not leave > ovs and renter in this case. and you can verify that by looking at the > dataplane flows. > > > > I've tried looking for some documentation regarding that, but came up > > empty - are there some links I could look at to get a better > > understanding of packet flow and best practices? > > > > Best Regards, > > > > > > -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com From juliaashleykreger at gmail.com Wed Jun 23 13:51:00 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 23 Jun 2021 06:51:00 -0700 Subject: [nova] SCS standardized flavor naming In-Reply-To: <47e88e68efe3dad9b7f165e4738fca7708944cde.camel@redhat.com> References: <1508311624445726@mail.yandex.ru> <47e88e68efe3dad9b7f165e4738fca7708944cde.camel@redhat.com> Message-ID: I'm suddenly reminded of the Sydney summit where the one thing operators seemed to be able to agree upon, was that they would never be able to agree upon standard naming for flavors. In large part because a huge commonality was that some teams ultimately needed highly tuned flavors, be it baremetal or virtual machines, to achieve their jobs. Sometimes these flavors had to have special scheduling as a result. It really sounds like a space we all want to avoid, but humans really need easy to relate to information when starting out. Easy to understand and relate to also likely solves a huge number of the cases until we get into the hyper-scaler deployments with specific needs throughout their business. On Wed, Jun 23, 2021 at 6:21 AM Sean Mooney wrote: > > On Wed, 2021-06-23 at 14:31 +0300, Dmitriy Rabotyagov wrote: > > Hi! > > > > > The point is, a new customer will *not* spend time reading the > > > spec. > > > Typically, they will want to just fire-up a VM quickly without > > > reading > > > too much docs... > > > > While I find Thomases flavor naming also not really intuitive for > > customers - out of nvt4-a8-ram24-disk50-perf2 I could guess only disk > > size and amout of ram (is it in gygabytes?) but "SCS-16T:64:200s- > > GNa:64-ib" doesn't make any sense to me at all (if I haven't read [1] > > ofc). > > > > I totally agree that no user in Public cloud would read any spec > > before launching their VM and it would be super hard to force them to > > do so. > > > > So flavor naming should be as explicit and readable as possible and > > assuming that person who will use cloud has no idea about specs we're > > making. 
These specs should be designed for cloud providers to comply > > and have same standards so users feel comfortable and secure, but > > don't assume regular users to have special skills in reading what > > engineers come up to. If regular users would find this hard to use, > > companies might choose hapiness of customers over some compliance. > > > > As nova doesn't have any text description for flavors, so flavor name > > is everything we have to expose to the customers and it should be > > clean and readable from the first sight. > > > > > nvt4-a8-ram24-disk50-perf2 > > > > > > This means: > > > - nvt4: nvidia T4 GPU > > > - a8: AMD VCPU 8 (we also have i4 for example, for Intel) > > > - ram24: 24 GB of RAM > > > - disk50: 50 GB of local system disk > > > - perf2: level 2 of IOps / IO bandwidth > > > > So what I'd suggest to cover that usecase would be smth like: > > 8vCPU-24576RAM-50SSD-pGPU:T4-10kIOPS-EPYC4 > that is somewhat readable but in general i dont think we shoudl be > advocationg for standarised naming of flavor across clouds in general. > we might be able to encode some info but really user shoudl read the > extra specs and falvor values not realy on a nameing scheme. > > > > > SCS-8C:32:2x200S-bms-i2-GNa:64-ib > > > [4] In case you wonder: 8 dedicated cores, 32GiB RAM, 2x200GB SSD > > > disks > > > on bare metal sys, intel Cascade Lake, nVidia GPU with 64 > > > Ampere SMs > > > and InfiniBand. > > > > Would be probably smth like: > > 8pCPU-32768RAM-2x200SSD-2vGPU:A100-IB-Cascade > > > > [1] > > https://github.com/SovereignCloudStack/Operational-Docs/blob/main/flavor-naming-draft.MD > > > > > From senrique at redhat.com Wed Jun 23 13:52:58 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 23 Jun 2021 10:52:58 -0300 Subject: [cinder] Bug deputy report for week of 2021-23-06 Message-ID: Hello, This is a bug report from 2021-16-06 to 2021-23-06. You're welcome to join the Cinder Bug Meeting today. Weekly on Wednesday at 1500 UTC on #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High: - CI features discussed last week: - https://bugs.launchpad.net/cinder/+bug/1932188 "lvm commands crash, causing failure" Assigned to Sofia Enriquez and Eric Harney - https://bugs.launchpad.net/os-brick/+bug/1929223 "scaleio connector disables HTTPS certificate validation". Unassigned Medium: - https://bugs.launchpad.net/cinder/+bug/1932964 "SolidFire duplicate volume name exception on migration and replication". Assigned to Fábio Oliveira Low: - https://bugs.launchpad.net/cinder/+bug/1933265 "cinder-backup ceph snapshot delete". Unassigned Wishlist: - https://bugs.launchpad.net/cinder/+bug/1933052 "cinder_internal_tenant_* is not predictable". Unassigned Cheers, Sofia -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Wed Jun 23 14:04:14 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 23 Jun 2021 15:04:14 +0100 Subject: [neutron] OVS tunnels and VLAN provider networks on the same interface In-Reply-To: <037fe7ca-206e-4ab8-9a1e-7f30e7bbe7f9@www.fastmail.com> References: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> <017ee9ab21a22dc2534d0b15668173151d9bcbbf.camel@redhat.com> <037fe7ca-206e-4ab8-9a1e-7f30e7bbe7f9@www.fastmail.com> Message-ID: <0088da84c61d4a8d5a95ccaa1787d4861b79347c.camel@redhat.com> On Wed, 2021-06-23 at 15:50 +0200, Krzysztof Klimonda wrote: > Thanks, > > Does this assume that the ovs tunnel traffic is untagged, and there > are no other tagged vlans that we want to direct to the host instead > of ovs? you can do takgin with openflow rules or by taggin the interface in ovs. the l2 agent does not manage flows on the br-ex or your phsyical bridge so you as an operator are allowed to tag them > > What if I want ovs to handle only a subset of VLANs and have other > directed to the host? you can do that with a vlan subport on the ovs port but you should ensure that its outside of the range in the ml2 driver config for the avaible vlans on the phsynet. > That would probably work with with my second option (modulo possible > loss of connectivity if vswitchd goes down?) but I'm not sure how to > do that with ovs bridges - with normal bridge, I can make it vlan- > aware but I'm not sure how this would work with ovs. > > Best Regards, > Krzysztof > > On Wed, Jun 23, 2021, at 12:45, Sean Mooney wrote: > > On Wed, 2021-06-23 at 10:10 +0200, Krzysztof Klimonda wrote: > > > Hi All, > > > > > > What is the best practice for sharing same interface between OVS > > > tunnels and VLAN-based provider networks? For provider networks > > > to > > > work, I must "bind" entire interface to vswitchd, so that it can > > > handle > > > vlan bits, but this leaves me with a question of how to plug ovs > > > tunnel > > > interface (and os internal used for control<->compute > > > communication, if > > > shared). I have two ideas: > > > > you assign the ovs tunnel interface ip to the bridge with the > > physical > > interfaces. this is standard practice when using ovs-dpdk for > > example > > as otherwise the tunnel traffic will not be dpdk acclerated. i > > suspect > > the same requirement exits for hardware offloaded ovs. > > > > the bridge local port e.g. br-ex is a interface type internal port. > > ovs uses a chace of the host routing table to determin what > > interface > > to send the (vxlan,gre,geneve) encapsulated packet too based on the > > next hop interface in the routing table. if you assgign the tunnel > > local endpoint ip to an ovs bride it enable an internal > > optimisation > > that usesa a spescial out_port action that encuse the encapped > > packet > > on the bridge port's recive quene then simple mac learing enables > > it to > > forward the packet via the physical interface. > > > > that is the openflow view a thte dataplant view with ovs-dpctl (or > > ovs- > > appctl for dpdk) you will see that the actual datapath flow will > > just > > encap the packet and transmit it via physical interface although > > for > > this to hapen theere must be a path between the br-tun and tbe br- > > ex > > via the br-int that is interconnected via patch ports. 
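To make that verification step concrete: with the stock OVS tooling the installed datapath flows can be dumped on the host, for example

  ovs-dpctl dump-flows           # kernel datapath
  ovs-appctl dpctl/dump-flows    # also covers the userspace/dpdk datapath

The exact output varies by setup, but a VM-to-remote-VM flow that is handled entirely inside OVS contains a set(tunnel(...)) action in the datapath flow itself, i.e. the encapsulation happens in OVS rather than the packet being handed to the host network stack and back. ovs-vsctl show can be used to confirm the patch ports between br-int, br-ex and br-tun are in place.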
> > > > creating a patch port via the br-ex and br-int and another pair > > between > > the br-tun and br-int can be done automaticaly by the l2 agent wtih > > teh > > correct fconfiguration and that allows ovs to collapse the bridge > > into > > a singel datapath instnace and execut this optimisation. > > > > this has been implemented in the network-ovs-dpdk devstack plugin > > and > > then we had it prot to fuel and tripleo depending on you installer > > it > > may already support this optimisation but its perfectly valid for > > kernel ovs also. > > > > > > > > > > 1) I can bind entire interface to ovs-vswitchd (in ip link output > > > it's > > > marked with "master ovs-system") and create vlan interfaces on > > > top of > > > that interface *in the system*. This seems to be working > > > correctly in > > > my lab tests. > > that inefficent since it required the packet to be rpcessed by ovs > > then > > sent to the kernel networking stack to finally be set via the  vlan > > interface. > > > > > > 2) I can create internal ports in vswitchd and plug them into ovs > > > bridge - this will make the interface show up in the system, and > > > I can > > > configure it afterwards. In this setup I'm concerned with how > > > packets > > > from VMs to other computes will flow through the system - will > > > they > > > leave openvswitch to host system just to go back again to be sent > > > through a tunnel? > > this would also work simiar t what i suggested above but its > > simpelr to > > just use the bridge local port instead. the packtes shoudl not > > leave > > ovs and renter in this case. and you can verify that by looking at > > the > > dataplane flows. > > > > > > I've tried looking for some documentation regarding that, but > > > came up > > > empty - are there some links I could look at to get a better > > > understanding of packet flow and best practices? > > > > > > Best Regards, > > > > > > > > > > > > > From sean.mcginnis at gmx.com Wed Jun 23 14:22:31 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 23 Jun 2021 09:22:31 -0500 Subject: [cinder] CI coverage for SPDK In-Reply-To: References: Message-ID: <20210623142231.GA1594121@sm-workstation> > The Mellanox CI is the correct place to look for SPDK CI results. As > recently as March (which, come to think of it, isn't all that recent) the > Mellanox CI ran two test jobs, "Cinder-tgtadm" and "SPDK" [0]. Looks like > the most recent Gerrit comments are showing only results from the > "Cinder-tgtadm" job [1]. (And those are from 8 June, which aren't all that > recent either.) > > I'll reach out to the Mellanox maintainer and maybe he can give the CI > machine a kick. 
In case folks are interested: Checking name: Mellanox CI - https://wiki.openstack.org/wiki/ThirdPartySystems/Mellanox_CI first seen: 2021-01-05 00:16:23 (169 days, 13:01:26 old) https://review.openstack.org/744069 last seen: 2021-06-09 00:00:59 (14 days, 13:16:50 old) https://review.openstack.org/733622 last success: 2021-06-08 11:38:40 (15 days, 1:39:09 old) https://review.openstack.org/760199 Job Cinder-tgtadm 52% success out of 250 comments S=131, F=119 last success: 2021-06-08 11:38:40 (15 days, 1:39:09 old) https://review.openstack.org/760199 Overall success rate: 52% of 250 comments http://cinderstats.ivehearditbothways.com/cireport.txt From kennelson11 at gmail.com Wed Jun 23 14:29:07 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 23 Jun 2021 07:29:07 -0700 Subject: [TC] Open Infra Live- Open Source Governance In-Reply-To: References: <187a78ef-0e29-d7dd-5506-73515fb28dbd@gmail.com> Message-ID: I think we are good at this point, Amy, but noted! Will let you know if we need a third/backup. -Kendall (diablo_rojo) On Wed, Jun 23, 2021 at 5:13 AM Amy Marrich wrote: > Kendall, > > If you need a third or a back up let me know, > > Thanks, > > Amy > > On Tue, Jun 15, 2021 at 11:56 AM Kendall Nelson > wrote: > >> It would be great to have both of you join! I passed your names onto >> Erin. She will reach out at some point soon. >> >> -Kendall (diablo_rojo) >> >> On Mon, Jun 14, 2021 at 10:57 AM Jay Bryant wrote: >> >>> >>> On 6/14/2021 9:45 AM, Kendall Nelson wrote: >>> >>> Hello TC Folks :) >>> >>> So I have been tasked with helping to collect a couple volunteers for >>> our July 29th episode of Open Infra Live (at 14:00 UTC) on open source >>> governance. >>> >>> I am also working on getting a couple members from the k8s steering >>> committee to join us that day. >>> >>> If you are interested in participating, please let me know! I only need >>> like two volunteers, but if we have more people than that dying to join in, >>> I am sure we can work it out. >>> >>> I can help if you need another person. Let me know. >>> >>> Jay >>> >>> Thanks! >>> >>> -Kendall Nelson (diablo_rojo) >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kklimonda at syntaxhighlighted.com Wed Jun 23 14:54:04 2021 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Wed, 23 Jun 2021 16:54:04 +0200 Subject: =?UTF-8?Q?Re:_[neutron]_OVS_tunnels_and_VLAN_provider_networks_on_the_sa?= =?UTF-8?Q?me_interface?= In-Reply-To: <0088da84c61d4a8d5a95ccaa1787d4861b79347c.camel@redhat.com> References: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> <017ee9ab21a22dc2534d0b15668173151d9bcbbf.camel@redhat.com> <037fe7ca-206e-4ab8-9a1e-7f30e7bbe7f9@www.fastmail.com> <0088da84c61d4a8d5a95ccaa1787d4861b79347c.camel@redhat.com> Message-ID: Hi, On Wed, Jun 23, 2021, at 16:04, Sean Mooney wrote: > On Wed, 2021-06-23 at 15:50 +0200, Krzysztof Klimonda wrote: > > Thanks, > > > > Does this assume that the ovs tunnel traffic is untagged, and there > > are no other tagged vlans that we want to direct to the host instead > > of ovs? > you can do takgin with openflow rules or by taggin the interface in > ovs. In this case, I'd no longer set IP on the bridge, but instead create and tag internal interfaces in vswitchd (basically my second scenario), or can the bridge be somehow tagged from ovs side? 
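For reference, tagging on the OVS side is possible (as confirmed later in this thread); the VLAN id 100, address and port names below are placeholders. The bridge-local port itself can be made an access port with

  ovs-vsctl set port br-ex tag=100

or an extra tagged internal port can be created for host traffic:

  ovs-vsctl add-port br-ex mgmt0 tag=100 -- set interface mgmt0 type=internal
  ip addr add 192.0.2.20/24 dev mgmt0
  ip link set mgmt0 up

Both approaches keep the VLAN handling inside vswitchd instead of on a Linux VLAN sub-interface.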
> > the l2 agent does not manage flows on the br-ex or your phsyical bridge > so you as an operator are allowed to tag them > > > > What if I want ovs to handle only a subset of VLANs and have other > > directed to the host? > you can do that with a vlan subport on the ovs port but you should > ensure that its outside of the range in the ml2 driver config for the > avaible vlans on the phsynet. Right, that's something I have a control over so it shouldn't be a problem. Thanks. > > That would probably work with with my second option (modulo possible > > loss of connectivity if vswitchd goes down?) but I'm not sure how to > > do that with ovs bridges - with normal bridge, I can make it vlan- > > aware but I'm not sure how this would work with ovs. > > > > Best Regards, > > Krzysztof > > > > On Wed, Jun 23, 2021, at 12:45, Sean Mooney wrote: > > > On Wed, 2021-06-23 at 10:10 +0200, Krzysztof Klimonda wrote: > > > > Hi All, > > > > > > > > What is the best practice for sharing same interface between OVS > > > > tunnels and VLAN-based provider networks? For provider networks > > > > to > > > > work, I must "bind" entire interface to vswitchd, so that it can > > > > handle > > > > vlan bits, but this leaves me with a question of how to plug ovs > > > > tunnel > > > > interface (and os internal used for control<->compute > > > > communication, if > > > > shared). I have two ideas: > > > > > > you assign the ovs tunnel interface ip to the bridge with the > > > physical > > > interfaces. this is standard practice when using ovs-dpdk for > > > example > > > as otherwise the tunnel traffic will not be dpdk acclerated. i > > > suspect > > > the same requirement exits for hardware offloaded ovs. > > > > > > the bridge local port e.g. br-ex is a interface type internal port. > > > ovs uses a chace of the host routing table to determin what > > > interface > > > to send the (vxlan,gre,geneve) encapsulated packet too based on the > > > next hop interface in the routing table. if you assgign the tunnel > > > local endpoint ip to an ovs bride it enable an internal > > > optimisation > > > that usesa a spescial out_port action that encuse the encapped > > > packet > > > on the bridge port's recive quene then simple mac learing enables > > > it to > > > forward the packet via the physical interface. > > > > > > that is the openflow view a thte dataplant view with ovs-dpctl (or > > > ovs- > > > appctl for dpdk) you will see that the actual datapath flow will > > > just > > > encap the packet and transmit it via physical interface although > > > for > > > this to hapen theere must be a path between the br-tun and tbe br- > > > ex > > > via the br-int that is interconnected via patch ports. > > > > > > creating a patch port via the br-ex and br-int and another pair > > > between > > > the br-tun and br-int can be done automaticaly by the l2 agent wtih > > > teh > > > correct fconfiguration and that allows ovs to collapse the bridge > > > into > > > a singel datapath instnace and execut this optimisation. > > > > > > this has been implemented in the network-ovs-dpdk devstack plugin > > > and > > > then we had it prot to fuel and tripleo depending on you installer > > > it > > > may already support this optimisation but its perfectly valid for > > > kernel ovs also. > > > > > > > > > > > > > > 1) I can bind entire interface to ovs-vswitchd (in ip link output > > > > it's > > > > marked with "master ovs-system") and create vlan interfaces on > > > > top of > > > > that interface *in the system*. 
This seems to be working > > > > correctly in > > > > my lab tests. > > > that inefficent since it required the packet to be rpcessed by ovs > > > then > > > sent to the kernel networking stack to finally be set via the  vlan > > > interface. > > > > > > > > 2) I can create internal ports in vswitchd and plug them into ovs > > > > bridge - this will make the interface show up in the system, and > > > > I can > > > > configure it afterwards. In this setup I'm concerned with how > > > > packets > > > > from VMs to other computes will flow through the system - will > > > > they > > > > leave openvswitch to host system just to go back again to be sent > > > > through a tunnel? > > > this would also work simiar t what i suggested above but its > > > simpelr to > > > just use the bridge local port instead. the packtes shoudl not > > > leave > > > ovs and renter in this case. and you can verify that by looking at > > > the > > > dataplane flows. > > > > > > > > I've tried looking for some documentation regarding that, but > > > > came up > > > > empty - are there some links I could look at to get a better > > > > understanding of packet flow and best practices? > > > > > > > > Best Regards, > > > > > > > > > > > > > > > > > > > > > > > > -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com From fabian.wiesel at sap.com Wed Jun 23 15:15:14 2021 From: fabian.wiesel at sap.com (Wiesel, Fabian) Date: Wed, 23 Jun 2021 15:15:14 +0000 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <29FD1A0B-9861-4DF6-858A-6410E8C8C9FB@gmail.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> <29FD1A0B-9861-4DF6-858A-6410E8C8C9FB@gmail.com> Message-ID: <1752692B-AEC8-4E36-8338-57D600D4A863@sap.com> Hi, On 23/6/21, 13:04, "Artem Goncharov" wrote: > - we have a project that adds plugins into OSC and in some cases overrides native behaviour of it. Delivery is as easy as “pip install openstackclient MY_FOKED_CLOUD_PLUGINS_PROJECT”. It is not that different from initially doing “pip install openstackclient" - we have a project that adds plugins into OSC and in some cases overrides native behaviour of it. Delivery is as easy as “pip install openstackclient MY_FOKED_CLOUD_PLUGINS_PROJECT”. It is not that different from initially doing “pip install openstackclient" How do you manage then the rest of the life-cycle of the client software? And other languages? > - fraction of seconds there, fraction in another place and suddenly users are crying: why the heck this tool is so slow (here I mean another side of the coin where simply forcing users to retry invocation with corrected set of parameters with +4s for initialisations, +1s on laggy network, + some more retries with further problems, + cleaning up after really failed attempts, etc are making users mad) I agree that responsiveness is good, but I think, the proposed client validation won't make much of a dent there. > - I am personally 100% belonging to "fail early” group. It just take much more efforts explaining to the user what this bloody server response without any message in it means (we all know the sad reality of usefulness of some of the responses). I think that points to more problems with the client-side approach, and is for me another argument to do it server-side: Doing the validation in the OSC means that other clients (java, go, etc...) 
are not benefitting from the work. Server-side, I can roll out an improved error message as fast as my deployment pipeline allows to all users and all clients. Which adds another point: The more logic you have in the client, the more likely they are going deviate from the server. Another source of bugs. And what about the error messages themselves? How do we ensure that they are consistent across the whole user-base? If they are client side, they differ from version to version, and language to language. > > As a compromise, I would suggest to make the client validation configurable as in kubectl with --validate=true. > Sounds really like a reasonable compromise (but I would reverse the flag to allow skipping - I hate possibility to create broken resources), but as I mentioned earlier - sooner or later you will start paying for the fork. So start doing things proper from the beginning. I agree, if going for client-side validation, would go with the validation being on by default. Cheers, Fabian From thierry at openstack.org Wed Jun 23 15:54:41 2021 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 23 Jun 2021 17:54:41 +0200 Subject: [largescale-sig] Next meeting: June 23, 15utc on #openstack-operators In-Reply-To: <21ffcd3a-eea8-6aaa-82cf-c6dc71fe03d0@openstack.org> References: <21ffcd3a-eea8-6aaa-82cf-c6dc71fe03d0@openstack.org> Message-ID: <2a7c6d6c-cd7d-df91-e427-6ba127eb55f2@openstack.org> We held our meeting today. Here is a high-level summary: - We selected the following topic for the next OpenInfra Live "large Scale OpenStack" episode on July 15: "How OpenStack large clouds manage their spare capacity" - Belmiro agreed to emcee and do the initial "challenge statement" short presentation - Thierry will reach out to potential guest speakers. Let me know if you run a large deployment and have a cool story to share on that topic. You can read the meeting logs at: https://meetings.opendev.org/meetings/large_scale_sig/2021/large_scale_sig.2021-06-23-15.00.html Our next IRC meeting will be July 7, at 1500utc on #openstack-operators on OFTC. Regards, -- Thierry Carrez (ttx) From gmann at ghanshyammann.com Wed Jun 23 16:01:26 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 23 Jun 2021 11:01:26 -0500 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> Message-ID: <17a399ad3fe.e9eb6850125616.4484464742576361933@ghanshyammann.com> ---- On Wed, 23 Jun 2021 05:21:58 -0500 Wiesel, Fabian wrote ---- > Hi, > > I take a different view, possibly because I am in a similar position as the requestor. > I also work on a openstack installation, which we need to patch to our needs. > We try to do everything upstream first, but chances are, there will be changes which are not upstreamable. > > We also have large user-base, and it is a great advantage to be able to point people to the official client, even if the server is not the official one. > A strict client policy would require us to fork the client as well, and distribute that to our user-base. With a couple of thousand users, that is not so trivial. > In my point-of-view, such a decision would tightly couple the client to the server for a limited benefit (a fraction of seconds earlier error message). 
What are the exact reason for not upstreaming the changes? We have microversion mechanish in Nova API to improve/change the API in backward compatible and discoverable way. That will be helpful to add the more API/changing existing APIs without impacting the existing user of that API. -gmann > > As a compromise, I would suggest to make the client validation configurable as in kubectl with --validate=true. > > Cheers, > Fabian > > > From gmann at ghanshyammann.com Wed Jun 23 16:03:07 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 23 Jun 2021 11:03:07 -0500 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> Message-ID: <17a399c60c9.106116c3e125731.6901224704155133014@ghanshyammann.com> ---- On Wed, 23 Jun 2021 04:16:21 -0500 Stephen Finucane wrote ---- > On Tue, 2021-06-22 at 23:49 +0300, Andrey Kurilin wrote:Hi folks! > As a guy who was top 1 contributor to novaclient at some point, I tried to extend a validation at the client-side as much as possible.I mean, I really like the approach when the user sees a validation error in an ms (or a second depending on the system and plugins) without passing the auth and sending any request to API, so big +1 to leave enum of possible choices there. > > Cool, that's my thinking also. > BUT I have one concern here: does it possible that the number of official policies will be extended or it becomes pluggable(without patching of nova code itself)? In this case, it would be nice to be a bit less strict. > As Sean has said elsewhere, there's no way to extend this without a microversion. I think it's fair to request that users upgrade their client if they wish to support newer microversions. Yes, plugin-able or extension mechanisms had many other problems in term of interoperability or so. I think microversion is good way to introduce the new API changes without breaking existing users. > Stephen > > вт, 22 июн. 2021 г. в 20:51, Sean Mooney : > On Tue, 2021-06-22 at 17:17 +0000, Jeremy Stanley wrote: > > On 2021-06-22 17:39:42 +0100 (+0100), Stephen Finucane wrote: > > [...] > > > Apparently someone has been relying on a bug in Nova to pass a > > > different value to the API that what the schema should have > > > allowed, and they are dismayed that the client no longer allows > > > them to do this. > > [...] > > > > I can't find where they explained what new policy they've > > implemented in their fork. Perhaps if they elaborated on the use > > case, it could be it's something the Nova maintainers would accept a > > patch to officially extend the API to incorporate, allowing that > > deployment to un-fork? > my understandign is that they are trying to model fault domains an have > a fault domain aware anti affintiy policy that use host-aggreate or azs > to model the fault to doamin. > > they reasched out to us downstream too about this and all i know so > fart is they are implemetneign there own filter to do this which is > valid. what is not valid ti extending a seperate api in this case the > server group api to then use as an input to the out of tree filter. > > if they had use a schduler hint which inteionally support out of tree > hints or a flaovr extra spec then it would be fine. 
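(As an aside, a sketch of that alternative: the hint key and filter behaviour below are hypothetical, not an existing nova interface. An out-of-tree scheduler filter can consume an arbitrary scheduler hint passed at boot time, e.g.

  openstack server create --image cirros --flavor m1.small \
      --hint fault_domain_policy=anti-affinity my-vm

and read it on the scheduler side from the RequestSpec via spec_obj.get_scheduler_hint('fault_domain_policy'), which avoids overloading the server group API with non-standard policies.)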
the use of a custom
> server group policy when server groups are not a defined public
> extension point is the source of the conflict.
>
> the use case of a host aggregate anti-affinity policy, while likely not
> efficient to implement, is at least a somewhat reasonable one that i
> could see supporting upstream, although there are many edge cases with
> regard to a host being in multiple host aggregates. if they are doing this
> based on availability zone that is simpler since a host can only be in one
> az.
>
> in any case it would be nice if they brought their use case upstream or
> even downstream so we could find a more supportable way to enable it.
>
>

From DHilsbos at performair.com Wed Jun 23 16:25:11 2021
From: DHilsbos at performair.com (DHilsbos at performair.com)
Date: Wed, 23 Jun 2021 16:25:11 +0000
Subject: [neutron] OVS tunnels and VLAN provider networks on the same interface
In-Reply-To: <037fe7ca-206e-4ab8-9a1e-7f30e7bbe7f9@www.fastmail.com>
References: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> <017ee9ab21a22dc2534d0b15668173151d9bcbbf.camel@redhat.com> <037fe7ca-206e-4ab8-9a1e-7f30e7bbe7f9@www.fastmail.com>
Message-ID: <0670B960225633449A24709C291A5252511E76EF@COM01.performair.local>

Krzysztof;

You've gotten a number of very good answers to your question, but I think we have a similar network to yours. Our network is heavily VLANed, and we wanted tenant networks to be VxLAN tunneled (over a VLAN). Most of our OpenStack hosts need access to several VLANs. Here's how we did it:

We started out by not assigning an IP address to the physical port. We defined VLAN ports in the OS for the VLANs that the host needs (OpenStack management & Service, and Ceph public, plus the tunneling VLAN), and assigned them IP addresses.

Then, in /etc/neutron/plugins/ml2_config.ini:
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
extension_drivers = port_security
[ml2_type_vxlan]
vni_ranges = 1:1000
[ml2_type_vlan]
network_vlan_ranges = provider_core::{, provider_core::}

And, in /etc/neutron/plugins/ml2/openvswitch_agent.ini:
[agent]
tunnel_types = vxlan
[ovs]
local_ip =
bridge_mappings = provider_core:

I don't know if this works better for you than previous answers, but it's what we decided to do.

Thank you,

Dominic L. Hilsbos, MBA
Vice President - Information Technology
Perform Air International Inc.
DHilsbos at PerformAir.com
www.PerformAir.com

-----Original Message-----
From: Krzysztof Klimonda [mailto:kklimonda at syntaxhighlighted.com]
Sent: Wednesday, June 23, 2021 6:50 AM
To: openstack-discuss at lists.openstack.org
Subject: Re: [neutron] OVS tunnels and VLAN provider networks on the same interface

Thanks,

Does this assume that the ovs tunnel traffic is untagged, and there are no other tagged vlans that we want to direct to the host instead of ovs?

What if I want ovs to handle only a subset of VLANs and have other directed to the host?
For provider networks to > > work, I must "bind" entire interface to vswitchd, so that it can handle > > vlan bits, but this leaves me with a question of how to plug ovs tunnel > > interface (and os internal used for control<->compute communication, if > > shared). I have two ideas: > > you assign the ovs tunnel interface ip to the bridge with the physical > interfaces. this is standard practice when using ovs-dpdk for example > as otherwise the tunnel traffic will not be dpdk acclerated. i suspect > the same requirement exits for hardware offloaded ovs. > > the bridge local port e.g. br-ex is a interface type internal port. > ovs uses a chace of the host routing table to determin what interface > to send the (vxlan,gre,geneve) encapsulated packet too based on the > next hop interface in the routing table. if you assgign the tunnel > local endpoint ip to an ovs bride it enable an internal optimisation > that usesa a spescial out_port action that encuse the encapped packet > on the bridge port's recive quene then simple mac learing enables it to > forward the packet via the physical interface. > > that is the openflow view a thte dataplant view with ovs-dpctl (or ovs- > appctl for dpdk) you will see that the actual datapath flow will just > encap the packet and transmit it via physical interface although for > this to hapen theere must be a path between the br-tun and tbe br-ex > via the br-int that is interconnected via patch ports. > > creating a patch port via the br-ex and br-int and another pair between > the br-tun and br-int can be done automaticaly by the l2 agent wtih teh > correct fconfiguration and that allows ovs to collapse the bridge into > a singel datapath instnace and execut this optimisation. > > this has been implemented in the network-ovs-dpdk devstack plugin and > then we had it prot to fuel and tripleo depending on you installer it > may already support this optimisation but its perfectly valid for > kernel ovs also. > > > > > > 1) I can bind entire interface to ovs-vswitchd (in ip link output it's > > marked with "master ovs-system") and create vlan interfaces on top of > > that interface *in the system*. This seems to be working correctly in > > my lab tests. > that inefficent since it required the packet to be rpcessed by ovs then > sent to the kernel networking stack to finally be set via the vlan > interface. > > > > 2) I can create internal ports in vswitchd and plug them into ovs > > bridge - this will make the interface show up in the system, and I can > > configure it afterwards. In this setup I'm concerned with how packets > > from VMs to other computes will flow through the system - will they > > leave openvswitch to host system just to go back again to be sent > > through a tunnel? > this would also work simiar t what i suggested above but its simpelr to > just use the bridge local port instead. the packtes shoudl not leave > ovs and renter in this case. and you can verify that by looking at the > dataplane flows. > > > > I've tried looking for some documentation regarding that, but came up > > empty - are there some links I could look at to get a better > > understanding of packet flow and best practices? 
> > > > Best Regards, > > > > > > -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com From smooney at redhat.com Wed Jun 23 17:02:12 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 23 Jun 2021 18:02:12 +0100 Subject: [neutron] OVS tunnels and VLAN provider networks on the same interface In-Reply-To: References: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> <017ee9ab21a22dc2534d0b15668173151d9bcbbf.camel@redhat.com> <037fe7ca-206e-4ab8-9a1e-7f30e7bbe7f9@www.fastmail.com> <0088da84c61d4a8d5a95ccaa1787d4861b79347c.camel@redhat.com> Message-ID: <9bfa0b564831970c499410cc982c296cb9ed0ab6.camel@redhat.com> On Wed, 2021-06-23 at 16:54 +0200, Krzysztof Klimonda wrote: > Hi, > > On Wed, Jun 23, 2021, at 16:04, Sean Mooney wrote: > > On Wed, 2021-06-23 at 15:50 +0200, Krzysztof Klimonda wrote: > > > Thanks, > > > > > > Does this assume that the ovs tunnel traffic is untagged, and > > > there > > > are no other tagged vlans that we want to direct to the host > > > instead > > > of ovs? > > you can do takgin with openflow rules or by taggin the interface in > > ovs. > > In this case, I'd no longer set IP on the bridge, but instead create > and tag internal interfaces in vswitchd (basically my second > scenario), or can the bridge be somehow tagged from ovs side? i would still assign the ip to the bridge an yes you can tag on the ovs side although i would not i route all my tenant traffic over a vlan sub inteface crated in a linux bond and add it as the only interface to my ovs. this means i cant use vlan network in my gues really as it will be duble taged but vxlan is confine din my case to vlan4 by the vlan sub interface. if i was not useing a kernel bond could also vlan tag inside ovs but since i want the bound to be on the host i cant use a macvlan or ipvlan since that will not work for arp reasons. all reponces for the cloud will go to the bond since the macvlan mac is different from the vm/router mac. you can just add the port or bound to ovs and then create a macvlan or vlan for the host if you want too. that works but for arp to work for you vms as i said the bound has to be attach to ovs directly and the subport used for host networking > > > > the l2 agent does not manage flows on the br-ex or your phsyical > > bridge > > so you as an operator are allowed to tag them > > > > > > What if I want ovs to handle only a subset of VLANs and have > > > other > > > directed to the host? > > you can do that with a vlan subport on the ovs port but you should > > ensure that its outside of the range in the ml2 driver config for > > the > > avaible vlans on the phsynet. > > Right, that's something I have a control over so it shouldn't be a > problem. > > Thanks. > > > >  That would probably work with with my second option (modulo > > > possible > > > loss of connectivity if vswitchd goes down?) but I'm not sure how > > > to > > > do that with ovs bridges - with normal bridge, I can make it > > > vlan- > > > aware but I'm not sure how this would work with ovs. > > > > > > Best Regards, > > > Krzysztof > > > > > > On Wed, Jun 23, 2021, at 12:45, Sean Mooney wrote: > > > > On Wed, 2021-06-23 at 10:10 +0200, Krzysztof Klimonda wrote: > > > > > Hi All, > > > > > > > > > > What is the best practice for sharing same interface between > > > > > OVS > > > > > tunnels and VLAN-based provider networks? 
For provider > > > > > networks > > > > > to > > > > > work, I must "bind" entire interface to vswitchd, so that it > > > > > can > > > > > handle > > > > > vlan bits, but this leaves me with a question of how to plug > > > > > ovs > > > > > tunnel > > > > > interface (and os internal used for control<->compute > > > > > communication, if > > > > > shared). I have two ideas: > > > > > > > > you assign the ovs tunnel interface ip to the bridge with the > > > > physical > > > > interfaces. this is standard practice when using ovs-dpdk for > > > > example > > > > as otherwise the tunnel traffic will not be dpdk acclerated. i > > > > suspect > > > > the same requirement exits for hardware offloaded ovs. > > > > > > > > the bridge local port e.g. br-ex is a interface type internal > > > > port. > > > > ovs uses a chace of the host routing table to determin what > > > > interface > > > > to send the (vxlan,gre,geneve) encapsulated packet too based on > > > > the > > > > next hop interface in the routing table. if you assgign the > > > > tunnel > > > > local endpoint ip to an ovs bride it enable an internal > > > > optimisation > > > > that usesa a spescial out_port action that encuse the encapped > > > > packet > > > > on the bridge port's recive quene then simple mac learing > > > > enables > > > > it to > > > > forward the packet via the physical interface. > > > > > > > > that is the openflow view a thte dataplant view with ovs-dpctl > > > > (or > > > > ovs- > > > > appctl for dpdk) you will see that the actual datapath flow > > > > will > > > > just > > > > encap the packet and transmit it via physical interface > > > > although > > > > for > > > > this to hapen theere must be a path between the br-tun and tbe > > > > br- > > > > ex > > > > via the br-int that is interconnected via patch ports. > > > > > > > > creating a patch port via the br-ex and br-int and another pair > > > > between > > > > the br-tun and br-int can be done automaticaly by the l2 agent > > > > wtih > > > > teh > > > > correct fconfiguration and that allows ovs to collapse the > > > > bridge > > > > into > > > > a singel datapath instnace and execut this optimisation. > > > > > > > > this has been implemented in the network-ovs-dpdk devstack > > > > plugin > > > > and > > > > then we had it prot to fuel and tripleo depending on you > > > > installer > > > > it > > > > may already support this optimisation but its perfectly valid > > > > for > > > > kernel ovs also. > > > > > > > > > > > > > > > > > > 1) I can bind entire interface to ovs-vswitchd (in ip link > > > > > output > > > > > it's > > > > > marked with "master ovs-system") and create vlan interfaces > > > > > on > > > > > top of > > > > > that interface *in the system*. This seems to be working > > > > > correctly in > > > > > my lab tests. > > > > that inefficent since it required the packet to be rpcessed by > > > > ovs > > > > then > > > > sent to the kernel networking stack to finally be set via the  > > > > vlan > > > > interface. > > > > > > > > > > 2) I can create internal ports in vswitchd and plug them into > > > > > ovs > > > > > bridge - this will make the interface show up in the system, > > > > > and > > > > > I can > > > > > configure it afterwards. In this setup I'm concerned with how > > > > > packets > > > > > from VMs to other computes will flow through the system - > > > > > will > > > > > they > > > > > leave openvswitch to host system just to go back again to be > > > > > sent > > > > > through a tunnel? 
> > > > this would also work simiar t what i suggested above but its > > > > simpelr to > > > > just use the bridge local port instead. the packtes shoudl not > > > > leave > > > > ovs and renter in this case. and you can verify that by looking > > > > at > > > > the > > > > dataplane flows. > > > > > > > > > > I've tried looking for some documentation regarding that, but > > > > > came up > > > > > empty - are there some links I could look at to get a better > > > > > understanding of packet flow and best practices? > > > > > > > > > > Best Regards, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From allison at openstack.org Wed Jun 23 20:11:38 2021 From: allison at openstack.org (Allison Price) Date: Wed, 23 Jun 2021 15:11:38 -0500 Subject: It's time to submit your OpenStack User Survey Message-ID: <8F385236-8B73-4AA5-B632-EDBD30FCBD2C@openstack.org> Hi everyone, Here’s my annual email requesting you complete the OpenStack User Survey [1] and share with your local communities / community friends who are operating OpenStack. Anonymous reposes and aggregated data from the User Survey are shared with the OPenStack PTLs and overall community to help understand operator software requirements. If you have never taken it before, please do! We welcome your feedback and I am happy to answer any questions you may have. If you are one of the many who have, all of your previous information should be stored, so you will just need to provide updates where relevant and re-save. Thank you all! Cheers, Allison [1] https://www.openstack.org/user-survey/survey-2021 Allison Price Director of Marketing & Community OpenInfra Foundation e: allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Thu Jun 24 03:29:55 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 23 Jun 2021 23:29:55 -0400 Subject: [cinder] status of some unapproved Xena specs Message-ID: As attentive members of the OpenStack community know, Friday 25 June is the Xena spec freeze for cinder [0]. There are four specs that have been discussed on and off at PTGs and midcycle meetings that don't appear ready for approval in Xena. The discussion of these specs at today's cinder meeting is here: https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-06-23-14.00.log.html#l-183 The specs I'm talking about are these: 1. Support revert any snapshot https://review.opendev.org/c/openstack/cinder-specs/+/736111 discussion: https://wiki.openstack.org/wiki/CinderXenaMidCycleSummary#Support_revert_any_snapshot_to_the_volume 2. Migration support for a volume with replication status enabled https://review.opendev.org/c/openstack/cinder-specs/+/766130 discussion: https://wiki.openstack.org/wiki/CinderXenaMidCycleSummary#Migration_support_for_a_volume_with_replication_status_enabled 3. Update original volume az https://review.opendev.org/c/openstack/cinder-specs/+/778437 discussion: https://wiki.openstack.org/wiki/CinderXenaMidCycleSummary#Migration_support_for_a_volume_with_replication_status_enabled 4. Allow volumes to be part of multiple volume groups https://review.opendev.org/c/openstack/cinder-specs/+/792722 I think this spec isn't getting much attention because it has a -1 from Zuul, and the last few sections still contain the boilerplate language from the spec template, which makes it look like the spec is still in progress. 
When these have come up for discussion at previous meetings, there is general support among the cinder team, though there are details that we feel need to be worked out on the specs. These details have been noted on the Gerrit reviews and in the previous PTG and midcycle discussion summaries. While some points have been addressed, other important points have not. This makes me think that there is a communication problem between the team and the spec proposers. I encourage people proposing specs that are having problems in review to put their spec on the weekly cinder meeting agenda [1] so that we can discuss them interactively and hopefully explain more clearly what issues the cinder team has with the specs as currently proposed. If attending the weekly cinder meeting (1400 UTC on Wednesdays) is difficult because of time zone issues, please let me know and we can figure something out (maybe an occasional spec review meeting held at some time other than 1400 UTC). Or maybe there's an alternative communication medium we haven't explored yet. The cinder team encourages community participation in cinder development, and it distresses us to see these specs languishing. cheers, brian [0] https://releases.openstack.org/xena/schedule.html#x-cinder-spec-freeze [1] https://etherpad.opendev.org/p/cinder-xena-meetings From adivya1.singh at gmail.com Thu Jun 24 03:38:18 2021 From: adivya1.singh at gmail.com (Adivya Singh) Date: Thu, 24 Jun 2021 09:08:18 +0530 Subject: Regarding dhcp not forwarding the IP in openstack Message-ID: Hi Team, I have a issue where, the dhcp are not forwarding the IP for some reason, and the dhcp are not responding properly in a hypervisor I can see that, all the interfaces are up tap devices are ip, can u please suggest what should we do to resolve the issue Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From angyal.laszlo at gmail.com Thu Jun 24 06:07:32 2021 From: angyal.laszlo at gmail.com (Laszlo Angyal) Date: Thu, 24 Jun 2021 08:07:32 +0200 Subject: [neutron] OVS tunnels and VLAN provider networks on the same interface In-Reply-To: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> References: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> Message-ID: Hi, we share the same interface between OVS tunnels and VLAN-based provider networks like this: bondA - management / ceph frontend traffic (not interesting for now) bondB - plugged into br-ex, no ip, provider VLANs br-ex - we configured ip here and we use it in VXLAN overlay configuration as local_ip Laci On Wed, Jun 23, 2021 at 10:14 AM Krzysztof Klimonda < kklimonda at syntaxhighlighted.com> wrote: > Hi All, > > What is the best practice for sharing same interface between OVS tunnels > and VLAN-based provider networks? For provider networks to work, I must > "bind" entire interface to vswitchd, so that it can handle vlan bits, but > this leaves me with a question of how to plug ovs tunnel interface (and os > internal used for control<->compute communication, if shared). I have two > ideas: > > 1) I can bind entire interface to ovs-vswitchd (in ip link output it's > marked with "master ovs-system") and create vlan interfaces on top of that > interface *in the system*. This seems to be working correctly in my lab > tests. > > 2) I can create internal ports in vswitchd and plug them into ovs bridge - > this will make the interface show up in the system, and I can configure it > afterwards. 
In this setup I'm concerned with how packets from VMs to other > computes will flow through the system - will they leave openvswitch to host > system just to go back again to be sent through a tunnel? > > I've tried looking for some documentation regarding that, but came up > empty - are there some links I could look at to get a better understanding > of packet flow and best practices? > > Best Regards, > > -- > Krzysztof Klimonda > kklimonda at syntaxhighlighted.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Jun 24 06:25:02 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 24 Jun 2021 08:25:02 +0200 Subject: [neutron] Drivers meeting 25.06.2021 cancelled Message-ID: <16516651.zIkoY7RxDC@p1> Hi, I will be on PTO tomorrow thus I will not be able to chair the meeting. Let's cancel it this week and see You again next Friday. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From gchamoul at redhat.com Thu Jun 24 07:54:02 2021 From: gchamoul at redhat.com (=?utf-8?B?R2HDq2w=?= Chamoulaud) Date: Thu, 24 Jun 2021 09:54:02 +0200 Subject: [all][tripleo][validations] Communication from the Validations Framework team! o/ Message-ID: <20210624075402.vfbfbfhrr6tnijan@gchamoul-mac> Greetings everyone! For those who do not know about the Validations Framework yet: The Validations Framework is at the same time, A collection of Ansible roles and playbooks to detect and report potential issues during a TripleO deployment and A Python library to manipulate them through a Command Line Interface. * Validations are used to efficiently and reliably verify various facts about the cloud (or any other products) on the level of individual nodes and hosts. * Validations are non-intrusive by design, and recommended when performing large scale changes to the cloud, for example upgrades, or to aid in the diagnosis of various issues. Historically, this project was created during the Newton cycle for a direct use with OpenStack/TripleO and became the Validations Framework since Stein. We strongly feel and think that this project can be beneficial to other environments, projects or products... If you read this line it's probably because you are still interested, so if you want to talk with us about the Validations Framework, how to create a new validation for your product and/or integrate our CLI in your workflow, please feel free to reach us on this mailing list or come join us on IRC: * IRC channel #validation-framework at Libera (For all subject-matters) * IRC channel #tripleo at OFTC (OpenStack and TripleO discussions) We are looking forward seeing you around! [1] - https://opendev.org/openstack/tripleo-validations [2] - https://docs.openstack.org/tripleo-validations/latest/ [3] - https://opendev.org/openstack/validations-libs [4] - https://docs.openstack.org/validations-libs/latest/ [5] - https://opendev.org/openstack/validations-common [6] - https://docs.openstack.org/validations-common/latest/ Best Regards, Cédric Jeanneret, David J. Peacock, Jan Buchta, Jiri Podivin, Mathieu Bultel, and I! -- Gaël Chamoulaud - (He/Him/His) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ralonsoh at redhat.com Thu Jun 24 08:07:15 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Thu, 24 Jun 2021 10:07:15 +0200 Subject: Regarding dhcp not forwarding the IP in openstack In-Reply-To: References: Message-ID: Hello Adivya: Sorry, but you need to provide more context on this problem. For example, logs from the server and the DHCP agent, backend used, network topology, etc. Did you try to dump traffic inside the DHCP namespace (if you are using OVS or Linux Bridge)? Is the VM requesting this info? Regards. On Thu, Jun 24, 2021 at 5:46 AM Adivya Singh wrote: > Hi Team, > > I have a issue where, the dhcp are not forwarding the IP for some reason, > and the dhcp are not responding properly in a hypervisor > > I can see that, all the interfaces are up tap devices are ip, can u > please suggest what should we do to resolve the issue > > Regards > Adivya Singh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yangyi01 at inspur.com Thu Jun 24 09:56:53 2021 From: yangyi01 at inspur.com (=?utf-8?B?WWkgWWFuZyAo5p2o54eaKS3kupHmnI3liqHpm4blm6I=?=) Date: Thu, 24 Jun 2021 09:56:53 +0000 Subject: [neutron] can floating IP port forwarding work when agent_mode is dvr or dvr_snat but on compute node? Message-ID: <902964b783584c4eb98d26d0e94c101a@inspur.com> Hi, folks I’m working on https://bugs.launchpad.net/neutron/+bug/1931953, per my check for neutron-specs/specs/rocky/port-forwarding.rst, it seems port forwarding function only can be done on network node, so I don’t think we need to do it on compute node, can anybody help confirm this? Per my understanding, if a FIP is set port forwarding to two VMS across two compute node, physical switch has no way to know which compute node it should send to if destination IP is this FIP, right? But on centralized mode, this isn’t an issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3600 bytes Desc: not available URL: From ralonsoh at redhat.com Thu Jun 24 10:38:55 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Thu, 24 Jun 2021 12:38:55 +0200 Subject: [neutron] can floating IP port forwarding work when agent_mode is dvr or dvr_snat but on compute node? In-Reply-To: <902964b783584c4eb98d26d0e94c101a@inspur.com> References: <902964b783584c4eb98d26d0e94c101a@inspur.com> Message-ID: Hello Yi: Yes, the FIP port forwarding is done in the network node [1][2]. Regards. [1] https://specs.openstack.org/openstack/neutron-specs/specs/rocky/port-forwarding.html [2]https://review.opendev.org/c/openstack/neutron/+/533850 On Thu, Jun 24, 2021 at 12:07 PM Yi Yang (杨燚)-云服务集团 wrote: > Hi, folks > > > > I’m working on https://bugs.launchpad.net/neutron/+bug/1931953, per my > check for neutron-specs/specs/rocky/port-forwarding.rst, it seems port > forwarding function only can be done on network node, so I don’t think we > need to do it on compute node, can anybody help confirm this? > > > > Per my understanding, if a FIP is set port forwarding to two VMS across > two compute node, physical switch has no way to know which compute node it > should send to if destination IP is this FIP, right? But on centralized > mode, this isn’t an issue. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fabian.wiesel at sap.com Thu Jun 24 12:22:09 2021 From: fabian.wiesel at sap.com (Wiesel, Fabian) Date: Thu, 24 Jun 2021 12:22:09 +0000 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <17a399ad3fe.e9eb6850125616.4484464742576361933@ghanshyammann.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> <17a399ad3fe.e9eb6850125616.4484464742576361933@ghanshyammann.com> Message-ID: <99A034DD-29EE-45DE-A401-2E63879EBB59@sap.com> On 23/6/21, 18:03, "Ghanshyam Mann" wrote: ---- On Wed, 23 Jun 2021 05:21:58 -0500 Wiesel, Fabian wrote ---- > I take a different view, possibly because I am in a similar position as the requestor. > I also work on a openstack installation, which we need to patch to our needs. > We try to do everything upstream first, but chances are, there will be changes which are not upstreamable. > > We also have large user-base, and it is a great advantage to be able to point people to the official client, even if the server is not the official one. > A strict client policy would require us to fork the client as well, and distribute that to our user-base. With a couple of thousand users, that is not so trivial. > In my point-of-view, such a decision would tightly couple the client to the server for a limited benefit (a fraction of seconds earlier error message). What are the exact reason for not upstreaming the changes? We have microversion mechanish in Nova API to improve/change the API in backward compatible and discoverable way. That will be helpful to add the more API/changing existing APIs without impacting the existing user of that API. Currently, we do not have any API changes and our team inside SAP is pushing back against custom changes in the API from our user-base. Any API change we plan to do, we try to get consensus with upstream first. But chances are, that there are requests within our company we must fulfill (even if our team itself may disagree) within a certain timeline, and I do not expect that the community will comply with either the timeline or the request itself. The changes we do not try to upstream are simply things we consider workarounds for our special situation: We are reaching the supported limits of our vendor (VMware), and we are trying to get our vendor to fix those. Cheers, Fabian From scriptkiddie at wp.pl Thu Jun 24 12:27:27 2021 From: scriptkiddie at wp.pl (Marek Szuba) Date: Thu, 24 Jun 2021 13:27:27 +0100 Subject: [keystone] "identity_provider failed validation" getting scoped token, on Rocky/Python3 Message-ID: Dear everyone, I run a small OpenStack cloud on Debian Buster, using standard distro packages - i.e. Rocky. My Keystone is configured as a federated-identity Service Provider, with the IdP accessed using OpenID. Having got things working successfully using local users I went on to testing federation - and found out that using the OpenStack CLI to get a scoped token from an unscoped one reports a server error 500, and logging in to Horizon shows usage and event data couldn't be retrieved. 
On the server side, in both cases Keystone log shows the following: INFO keystone.common.wsgi [req-foo bar baz - Federated default] POST https://osc.example.com:5000/v3/auth/tokens ERROR keystone.common.wsgi [req-foo bar baz - Federated default] identity_provider failed validation: at 0xdeadbeef>: ValueError: identity_provider failed validation: at 0xdeadbeef> ERROR keystone.common.wsgi Traceback (most recent call last): ERROR keystone.common.wsgi File "/usr/lib/python3/dist-packages/keystone/common/wsgi.py", line 148, in __call__ ERROR keystone.common.wsgi result = method(req, **params) ERROR keystone.common.wsgi File "/usr/lib/python3/dist-packages/keystone/auth/controllers.py", line 67, in authenticate_for_token ERROR keystone.common.wsgi self.authenticate(request, auth_info, auth_context) ERROR keystone.common.wsgi File "/usr/lib/python3/dist-packages/keystone/auth/controllers.py", line 236, in authenticate ERROR keystone.common.wsgi auth_info.get_method_data(method_name)) ERROR keystone.common.wsgi File "/usr/lib/python3/dist-packages/keystone/auth/plugins/token.py", line 46, in authenticate ERROR keystone.common.wsgi PROVIDERS.identity_api ERROR keystone.common.wsgi File "/usr/lib/python3/dist-packages/keystone/auth/plugins/mapped.py", line 101, in handle_scoped_token ERROR keystone.common.wsgi send_notification(taxonomy.OUTCOME_SUCCESS) ERROR keystone.common.wsgi File "/usr/lib/python3/dist-packages/keystone/notifications.py", line 685, in send_saml_audit_notification ERROR keystone.common.wsgi user=user_id, groups=group_ids) ERROR keystone.common.wsgi File "/usr/lib/python3/dist-packages/pycadf/credential.py", line 84, in __init__ ERROR keystone.common.wsgi setattr(self, FED_CRED_KEYNAME_IDENTITY_PROVIDER, identity_provider) ERROR keystone.common.wsgi File "/usr/lib/python3/dist-packages/pycadf/cadftype.py", line 66, in __set__ ERROR keystone.common.wsgi (self.name, self.func)) ERROR keystone.common.wsgi ValueError: identity_provider failed validation: at 0xdeadbeef> i.e. the request has succeeded but then things fall over when an audit notification is to be sent. Having poked around the sources of keystone.notifications.send_saml_audit_notifications() and the code it references, I found out the following: - the lambda function which triggers the error checks if 'identity_provider' is a six string type; - when this error occurs the value of 'identity_provider' is indeed the name of my IdP - but as *bytes* rather than str! - this doesn't happen every time this IdP name is used - if I add a simple identity_provider = identity_provider.decode('utf-8') to the relevant function I start getting errors suggesting that under some circumstances, 'identity_provider' is str as it should be. All in all, it seems like this particular bit of Keystone code in Rocky does not properly support Python3. Unfortunately an upgrade to a newer OpenStack version is non-trivial for me for several reasons, only some of which are under my control - and the stop-gap measure of having patched the relevant function with "if identity_provider is bytes, decode it to str" means I've had to put the relevant Debian package on hold, thus blocking potential updates. Therefore, I look forward to hearing from you any and all ideas which might help address this problem in less radical fashion. Thank you in advance! 
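For reference, the stop-gap patch mentioned above boils down to the following guard ahead of the pycadf call (a rough sketch expanding the decode() line quoted earlier; the exact placement inside keystone/notifications.py is my assumption, not the upstream fix):

    # Stop-gap sketch, not the upstream fix: coerce the IdP name to str
    # before pycadf's type validation rejects the bytes value.
    if isinstance(identity_provider, bytes):
        identity_provider = identity_provider.decode('utf-8')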
-- MS From swogatpradhan22 at gmail.com Thu Jun 24 12:39:00 2021 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Thu, 24 Jun 2021 18:09:00 +0530 Subject: [cinder] [replication] Error in cinder replication | Openstack Victoria Message-ID: Hi, I am trying to configure cinder replication with ceph as backend and i followed this URL to configure the cinder replication: https://netapp.io/2016/10/14/cinder-replication-netapp-perfect-cheesecake-recipe/ and http://www.sebastien-han.fr/blog/2017/06/19/OpenStack-Cinder-configure-replication-api-with-ceph/ i created a Volume type - REPL, with replication enabled True and colume backend name parameter. But when i am trying to create a volume using the created volume type the volume is getting created in ceph but in cinder the status is showing error. and when checked log getting the following error: 2021-06-24 17:59:39.556 28472 ERROR cinder.volume.drivers.rbd [req-6e6b901e-f2dd-43a4-a089-6c734b279e17 d576181b6a444541b8ec7f37d750a4c1 a9ac2eba50d84d64ac44b179bbcc9183 - default default] Error creating rbd image volume-361f35c4-a589-4653-b404-fb93d2f598cf.: cinder.exception.ReplicationError: Volume 361f35c4-a589-4653-b404-fb93d2f598cf replication error: Failed to enable image replication I tried to create a volume using volume type- default and then later changed the volume type using command 'cinder retype' to the created volume type and it changed without any issue. I am using openstack victoria . With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariusz.karpiarz at vscaler.com Thu Jun 24 12:32:45 2021 From: mariusz.karpiarz at vscaler.com (Mariusz Karpiarz) Date: Thu, 24 Jun 2021 12:32:45 +0000 Subject: [nova][libvirt][cinder] Volume-based live migration and multi-attach Message-ID: <646FBE70-FE71-44D0-A8F0-37A66B26C1BE@vscaler.com> Hi all, For live migration of volume-based instances (a Cinder volume used for the main OS disk) to work does the Cinder backend need to support multi-attach or at any point of the migration process are there multiple processes accessing the same volume? -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Jun 24 14:02:44 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 24 Jun 2021 15:02:44 +0100 Subject: [neutron] OVS tunnels and VLAN provider networks on the same interface In-Reply-To: References: <6ad2efa0-42ef-4070-84e0-b82ae4d554f4@www.fastmail.com> Message-ID: On Thu, 2021-06-24 at 08:07 +0200, Laszlo Angyal wrote: > Hi, > > we share the same interface between OVS tunnels and VLAN-based > provider > networks like this: > bondA - management / ceph frontend traffic (not interesting for now) > bondB - plugged into br-ex, no ip, provider VLANs > br-ex - we configured ip here and we use it in VXLAN overlay > configuration > as local_ip yep this is a pretty standard an more or less optimal configuration for kernel ovs wehre you want to share one interface for both vlan and vxlan networks. if you have only one interface or bond avaiable you would create macvlan or vlan interface for management and ceph and add the bond/interface to ovs directly. > > Laci > > > On Wed, Jun 23, 2021 at 10:14 AM Krzysztof Klimonda < > kklimonda at syntaxhighlighted.com> wrote: > > > Hi All, > > > > What is the best practice for sharing same interface between OVS > > tunnels > > and VLAN-based provider networks? 
For provider networks to work, I > > must > > "bind" entire interface to vswitchd, so that it can handle vlan > > bits, but > > this leaves me with a question of how to plug ovs tunnel interface > > (and os > > internal used for control<->compute communication, if shared). I > > have two > > ideas: > > > > 1) I can bind entire interface to ovs-vswitchd (in ip link output > > it's > > marked with "master ovs-system") and create vlan interfaces on top > > of that > > interface *in the system*. This seems to be working correctly in my > > lab > > tests. > > > > 2) I can create internal ports in vswitchd and plug them into ovs > > bridge - > > this will make the interface show up in the system, and I can > > configure it > > afterwards. In this setup I'm concerned with how packets from VMs > > to other > > computes will flow through the system - will they leave openvswitch > > to host > > system just to go back again to be sent through a tunnel? > > > > I've tried looking for some documentation regarding that, but came > > up > > empty - are there some links I could look at to get a better > > understanding > > of packet flow and best practices? > > > > Best Regards, > > > > -- > >   Krzysztof Klimonda > >   kklimonda at syntaxhighlighted.com > > > > From smooney at redhat.com Thu Jun 24 14:06:48 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 24 Jun 2021 15:06:48 +0100 Subject: [nova][libvirt][cinder] Volume-based live migration and multi-attach In-Reply-To: <646FBE70-FE71-44D0-A8F0-37A66B26C1BE@vscaler.com> References: <646FBE70-FE71-44D0-A8F0-37A66B26C1BE@vscaler.com> Message-ID: <8e8c496ee93e494f6d1ff72bf1ec203aaf0dd884.camel@redhat.com> On Thu, 2021-06-24 at 12:32 +0000, Mariusz Karpiarz wrote: > Hi all, > For live migration of volume-based instances (a Cinder volume used > for the main OS disk) to work does the Cinder backend need to support > multi-attach or at any point of the migration process are there > multiple processes accessing the same volume? it does not need to support multi atach. while the volume does need to be mapped to two host or qemu instaces for a period of time only one of the two instaces will ever be running. the other instace will be paused as libvirt is orcestrating the coping of the guest ram form the srouce to the dest. libvirt will pause the souce qemu instance before it unpauses the dest instance so the volume will only be acceessed by the source or dest at any one time but never both. From gmann at ghanshyammann.com Thu Jun 24 14:48:33 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 24 Jun 2021 09:48:33 -0500 Subject: [all][tc] Technical Committee next weekly meeting on June 24th at 1500 UTC In-Reply-To: <17a35177fb0.dd0c569265156.6885497306663202773@ghanshyammann.com> References: <17a35177fb0.dd0c569265156.6885497306663202773@ghanshyammann.com> Message-ID: <17a3e7e76bb.1294223ad20139.7682718328718051982@ghanshyammann.com> Hello Everyone, Below is the agenda for Today's TC meeting schedule at 1500 UTC in #openstack-tc IRC OFTC channel. 
-https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Migration from 'Freenode' to 'OFTC' (gmann) ** https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc * Xena Tracker ** https://etherpad.opendev.org/p/tc-xena-tracker * Governance non-active repos retirement & cleanup ** https://etherpad.opendev.org/p/governance-repos-cleanup * Election official assignments ** http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023060.html * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Tue, 22 Jun 2021 13:59:30 -0500 Ghanshyam Mann wrote ---- > > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for June 24th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, June 23rd, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From noonedeadpunk at ya.ru Thu Jun 24 15:28:39 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Thu, 24 Jun 2021 18:28:39 +0300 Subject: [nova] SCS standardized flavor naming In-Reply-To: References: <1508311624445726@mail.yandex.ru> <47e88e68efe3dad9b7f165e4738fca7708944cde.camel@redhat.com> Message-ID: <237831624548255@mail.yandex.ru> I decided to go on and did some sum up and submitted Issue [1] as was suggested at the beginning of the thread. Please feel free to adjust it if needed and add additional thoughts! [1] https://github.com/SovereignCloudStack/Operational-Docs/issues/18 23.06.2021, 16:51, "Julia Kreger" : > I'm suddenly reminded of the Sydney summit where the one thing > operators seemed to be able to agree upon, was that they would never > be able to agree upon standard naming for flavors. In large part > because a huge commonality was that some teams ultimately needed > highly tuned flavors, be it baremetal or virtual machines, to achieve > their jobs. Sometimes these flavors had to have special scheduling as > a result. It really sounds like a space we all want to avoid, but > humans really need easy to relate to information when starting out. > Easy to understand and relate to also likely solves a huge number of > the cases until we get into the hyper-scaler deployments with specific > needs throughout their business. > > On Wed, Jun 23, 2021 at 6:21 AM Sean Mooney wrote: >>  On Wed, 2021-06-23 at 14:31 +0300, Dmitriy Rabotyagov wrote: >>  > Hi! >>  > >>  > > The point is, a new customer will *not* spend time reading the >>  > > spec. >>  > > Typically, they will want to just fire-up a VM quickly without >>  > > reading >>  > > too much docs... >>  > >>  > While I find Thomases flavor naming also not really intuitive for >>  > customers - out of nvt4-a8-ram24-disk50-perf2 I could guess only disk >>  > size and amout of ram (is it in gygabytes?) but "SCS-16T:64:200s- >>  > GNa:64-ib" doesn't make any sense to me at all (if I haven't read [1] >>  > ofc). >>  > >>  > I totally agree that no user in Public cloud would read any spec >>  > before launching their VM and it would be super hard to force them to >>  > do so. >>  > >>  > So flavor naming should be as explicit and readable as possible and >>  > assuming that person who will use cloud has no idea about specs we're >>  > making. 
These specs should be designed for cloud providers to comply >>  > and have same standards so users feel comfortable and secure, but >>  > don't assume regular users to have special skills in reading what >>  > engineers come up to. If regular users would find this hard to use, >>  > companies might choose hapiness of customers over some compliance. >>  > >>  > As nova doesn't have any text description for flavors, so flavor name >>  > is everything we have to expose to the customers and it should be >>  > clean and readable from the first sight. >>  > >>  > > nvt4-a8-ram24-disk50-perf2 >>  > > >>  > > This means: >>  > > - nvt4: nvidia T4 GPU >>  > > - a8: AMD VCPU 8 (we also have i4 for example, for Intel) >>  > > - ram24: 24 GB of RAM >>  > > - disk50: 50 GB of local system disk >>  > > - perf2: level 2 of IOps / IO bandwidth >>  > >>  > So what I'd suggest to cover that usecase would be smth like: >>  > 8vCPU-24576RAM-50SSD-pGPU:T4-10kIOPS-EPYC4 >>  that is somewhat readable but in general i dont think we shoudl be >>  advocationg for standarised naming of flavor across clouds in general. >>  we might be able to encode some info but really user shoudl read the >>  extra specs and falvor values not realy on a nameing scheme. >>  > >>  > > SCS-8C:32:2x200S-bms-i2-GNa:64-ib >>  > > [4] In case you wonder: 8 dedicated cores, 32GiB RAM, 2x200GB SSD >>  > > disks >>  > > on bare metal sys, intel Cascade Lake, nVidia GPU with 64 >>  > > Ampere SMs >>  > > and InfiniBand. >>  > >>  > Would be probably smth like: >>  > 8pCPU-32768RAM-2x200SSD-2vGPU:A100-IB-Cascade >>  > >>  > [1] >>  > https://github.com/SovereignCloudStack/Operational-Docs/blob/main/flavor-naming-draft.MD >>  > --  Kind Regards, Dmitriy Rabotyagov From gmann at ghanshyammann.com Thu Jun 24 15:54:58 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 24 Jun 2021 10:54:58 -0500 Subject: [nova][osc][api-sig] How strict should our clients be? In-Reply-To: <99A034DD-29EE-45DE-A401-2E63879EBB59@sap.com> References: <086b45b9602e444e74c1808065d7e4abfcd52e8c.camel@redhat.com> <20210622171705.aubgvzkqamcyex4x@yuggoth.org> <159a157942928d51a615d51ddfc1250b29f87ed9.camel@redhat.com> <8519F1B3-DBDB-42A2-BE9F-864D46762BBC@sap.com> <17a399ad3fe.e9eb6850125616.4484464742576361933@ghanshyammann.com> <99A034DD-29EE-45DE-A401-2E63879EBB59@sap.com> Message-ID: <17a3ebb46f9.d349544324750.1992146148965451659@ghanshyammann.com> ---- On Thu, 24 Jun 2021 07:22:09 -0500 Wiesel, Fabian wrote ---- > > > On 23/6/21, 18:03, "Ghanshyam Mann" wrote: > > ---- On Wed, 23 Jun 2021 05:21:58 -0500 Wiesel, Fabian wrote ---- > > I take a different view, possibly because I am in a similar position as the requestor. > > I also work on a openstack installation, which we need to patch to our needs. > > We try to do everything upstream first, but chances are, there will be changes which are not upstreamable. > > > > We also have large user-base, and it is a great advantage to be able to point people to the official client, even if the server is not the official one. > > A strict client policy would require us to fork the client as well, and distribute that to our user-base. With a couple of thousand users, that is not so trivial. > > In my point-of-view, such a decision would tightly couple the client to the server for a limited benefit (a fraction of seconds earlier error message). > > What are the exact reason for not upstreaming the changes? 
We have microversion mechanish in Nova API to improve/change the API in > backward compatible and discoverable way. That will be helpful to add the more API/changing existing APIs without impacting the existing > user of that API. > > Currently, we do not have any API changes and our team inside SAP is pushing back against custom changes in the API from our user-base. > Any API change we plan to do, we try to get consensus with upstream first. > > But chances are, that there are requests within our company we must fulfill (even if our team itself may disagree) within a certain timeline, and I do not expect that the community will comply with either the timeline or the request itself. Thanks Fabian for explaining in detail. I understand the situation. In Nova, if you have API change request, we do follow the design discussion in specs repo first and then implementation should not take much time (depends on author activeness on updating review comment or so). All this is possible to merger in one cycle itself but to make it available at customer side depends on how soon you upgrade to that release. But I feel this is a general issue on long release cycle not just API or Client. In that case, how about providing a config option to disable the client side strict validation (by default we can keep the validation) ? Doing that in API side is not good but at least the client can be flexible. May be osc team can provide their opinion? -gmann > > The changes we do not try to upstream are simply things we consider workarounds for our special situation: We are reaching the supported limits of our vendor (VMware), and we are trying to get our vendor to fix those. > > Cheers, > Fabian > > From mark at stackhpc.com Thu Jun 24 16:51:36 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 24 Jun 2021 17:51:36 +0100 Subject: [kolla] Wallaby releases available Message-ID: Hi, I'm pleased to announce the availability of the first Wallaby releases for all Kolla deliverables: * kolla 12.0.0 (https://docs.openstack.org/releasenotes/kolla/wallaby.html#relnotes-12-0-0-stable-wallaby) * kolla-ansible 12.0.0 (https://docs.openstack.org/releasenotes/kolla-ansible/wallaby.html#relnotes-12-0-0-stable-wallaby) * kayobe 10.0.0 (https://docs.openstack.org/releasenotes/kayobe/wallaby.html#relnotes-10-0-0-stable-wallaby) A lot of hard work was involved in these releases which add support for two new OS versions: CentOS Stream and Debian Bullseye. Many thanks to everyone who contributed to these releases. And now onto Xena! Thanks, Mark From adivya1.singh at gmail.com Thu Jun 24 19:15:56 2021 From: adivya1.singh at gmail.com (Adivya Singh) Date: Fri, 25 Jun 2021 00:45:56 +0530 Subject: Regarding Bind server in ubuntu Message-ID: hi Team, I am facing one issue in Bind Server installation in ubuntu, I am not able to resolve the reserve DNS names , I am suspecting there is a issue in SOA , when i am doing dig from the public IP to the private subnet IP, it does not return anything like dig -x regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jun 24 19:19:19 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 24 Jun 2021 19:19:19 +0000 Subject: [dev][infra][tact-sig] Zuul 4.6.0 and associated job changes Message-ID: <20210624191918.uaud2ubpjp44heir@yuggoth.org> Today at 14:00 UTC the OpenDev Collaboratory upgraded its deployment of Zuul, coinciding with the 4.6.0 security release. 
This release disabled or changed a number of features which could previously be leveraged to take over executors or obtain decrypted copies of secret data, necessitating adjustments to some jobs. I think we've now addressed the majority of the central job resources which were impacted, but there are almost certainly less-frequently-exercised jobs which are still configured to do things which will no longer work. There were likely some strange-looking failures, particularly in promote and post pipeline builds, between 15:00 and 19:00 UTC today, so if you need something rerun for any reason please do reach out. The two main categories of new bugs which will need fixing are: * Use of Jinja2 templating in secret definitions * Setting ansible_connection, ansible_host, ansible_python_interpreter, ansible_shell_executable, or ansible_user The full release notes can be found in the release announcement here: http://lists.zuul-ci.org/pipermail/zuul-announce/2021-June/000096.html If you run into a new problem in one of your jobs and you believe it may be related to the above or similar fallout from the changes in Zuul 4.6.0 and need assistance, please don't hesitate to contact the TaCT SIG in the #openstack-infra channel on the OFTC IRC network or by replying to this mailing list thread. Apologies for any disruption this update may have caused, and thanks for your understanding. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From senrique at redhat.com Thu Jun 24 19:30:15 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Thu, 24 Jun 2021 16:30:15 -0300 Subject: [cinder] [replication] Error in cinder replication | Openstack Victoria In-Reply-To: References: Message-ID: Hi Swogat, Cinder supports replication since Rocky https://review.opendev.org/c/openstack/cinder/+/333565 . Do you mind creating a new bug with more extended log msg on https://bugs.launchpad.net/cinder ? Regards, Sofia Enriquez On Thu, Jun 24, 2021 at 9:44 AM Swogat Pradhan wrote: > Hi, > I am trying to configure cinder replication with ceph as backend and i > followed this URL to configure the cinder replication: > https://netapp.io/2016/10/14/cinder-replication-netapp-perfect-cheesecake-recipe/ > and > http://www.sebastien-han.fr/blog/2017/06/19/OpenStack-Cinder-configure-replication-api-with-ceph/ > > i created a Volume type - REPL, with replication enabled True and > colume backend name parameter. > > But when i am trying to create a volume using the created volume type the > volume is getting created in ceph but in cinder the status is showing > error. and when checked log getting the following error: > > 2021-06-24 17:59:39.556 28472 ERROR cinder.volume.drivers.rbd > [req-6e6b901e-f2dd-43a4-a089-6c734b279e17 d576181b6a444541b8ec7f37d750a4c1 > a9ac2eba50d84d64ac44b179bbcc9183 - default default] Error creating rbd > image volume-361f35c4-a589-4653-b404-fb93d2f598cf.: > cinder.exception.ReplicationError: Volume > 361f35c4-a589-4653-b404-fb93d2f598cf replication error: Failed to enable > image replication > > I tried to create a volume using volume type- default and then > later changed the volume type using command 'cinder retype' to the created > volume type and it changed without any issue. > > I am using openstack victoria . > > With regards, > Swogat Pradhan > -- L. 
Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ricardo.SoaresSarto at windriver.com Thu Jun 24 20:09:25 2021 From: Ricardo.SoaresSarto at windriver.com (Soares Sarto, Ricardo) Date: Thu, 24 Jun 2021 20:09:25 +0000 Subject: [dev][nova] PCI IRQ Affinity for VMS with dedicated CPUs Message-ID: Hello Everyone, Does NOVA provide any intelligent affining of IRQs of PCI Devices to CPUs pinned to VMs? i.e. for VMs that have pci-passthrough interfaces to PCI devices and are using ‘dedicated’ CPU policy, does NOVA automatically affine the IRQs for those PCI devices the CPUs that are allocated to the VM? Furthermore, is there any flavor extraspec that can further scope the IRQ affinity of the PCI Devices to a subset of the VM’s dedicated cpus? If not, is this a new capability that the NOVA team would be open to? -rsoaress -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu Jun 24 20:40:34 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 24 Jun 2021 13:40:34 -0700 Subject: Regarding Bind server in ubuntu In-Reply-To: References: Message-ID: Hi Adivya, Can you provide us more information about your installation and what you are attempting to do? It sounds like you booted a VM via nova on a private subnet, running an Ubuntu image, and installed BIND in it. You then created a floating IP address on the public subnet that points to the private subnet IP address of the VM running BIND. If that is the case, it could be a few issues: 1. Did you create a security group for your private subnet port on the VM that allows DNS in? UDP port 53? 2. The dig command may be incomplete to reach your BIND server. Try "dig -x N.N.N.N @F.F.F.F" where N.N.N.N is the IP address you want to look up in BIND and F.F.F.F is the floating IP address you created. Michael On Thu, Jun 24, 2021 at 12:20 PM Adivya Singh wrote: > > hi Team, > > I am facing one issue in Bind Server installation in ubuntu, I am not able to resolve the reserve DNS names , I am suspecting there is a issue in SOA , when i am doing dig from the public IP to the private subnet IP, it does not return anything > > like dig -x > > regards > Adivya Singh From rosmaita.fossdev at gmail.com Thu Jun 24 20:58:56 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 24 Jun 2021 16:58:56 -0400 Subject: [cinder] propose Sofia Enriquez for cinder core Message-ID: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> Sofia Enriquez (enriquetaso on IRC) has been active the past few cycles submitting patches, doing reviews, and participating in the Cinder weekly meeting, PTGs, and midcycles. She has also been acting as the Cinder Bug Deputy for Xena and has been running the Bug Squad meeting each week. Something I particularly appreciate is that she has included the cinder-tempest-plugin in her range of patches and reviews. Cinder and its various deliverables make up a large code base. Sofia has demonstrated a willingness and capacity to learn about the code, and I anticipate that she will continue to deepen her cinder knowledge as she continues to work in the project. At the same time, I believe that she is self-aware enough to know the limits of her knowledge and can be trusted not to approve patches that she doesn't understand. Above all, she will bring some much needed review bandwidth to the project. 
In the absence of objections, I'll add Sofia to the core team just before the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). Please communicate any concerns to me before that time. cheers, brian From amy at demarco.com Thu Jun 24 21:05:09 2021 From: amy at demarco.com (Amy Marrich) Date: Thu, 24 Jun 2021 16:05:09 -0500 Subject: [cinder] propose Sofia Enriquez for cinder core In-Reply-To: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> References: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> Message-ID: Unofficial +1:) On Thu, Jun 24, 2021 at 4:01 PM Brian Rosmaita wrote: > Sofia Enriquez (enriquetaso on IRC) has been active the past few cycles > submitting patches, doing reviews, and participating in the Cinder > weekly meeting, PTGs, and midcycles. She has also been acting as the > Cinder Bug Deputy for Xena and has been running the Bug Squad meeting > each week. Something I particularly appreciate is that she has included > the cinder-tempest-plugin in her range of patches and reviews. > > Cinder and its various deliverables make up a large code base. Sofia > has demonstrated a willingness and capacity to learn about the code, and > I anticipate that she will continue to deepen her cinder knowledge as > she continues to work in the project. At the same time, I believe that > she is self-aware enough to know the limits of her knowledge and can be > trusted not to approve patches that she doesn't understand. Above all, > she will bring some much needed review bandwidth to the project. > > In the absence of objections, I'll add Sofia to the core team just > before the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). > Please communicate any concerns to me before that time. > > cheers, > brian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iotchenko.i at gmail.com Thu Jun 24 22:47:53 2021 From: iotchenko.i at gmail.com (Ivan Iotchenko) Date: Thu, 24 Jun 2021 15:47:53 -0700 Subject: [ironic] Ansible deploy_interface error output Message-ID: <9CA1A9A4-E7DD-4503-95CC-C119A240A73E@gmail.com> Hello We are using Ironic (Train) and Ansible deploy interface. Everything works just awesome and we were able to include all the things we need into deployment/undeployment process (with respect to flexibility provided by Ansible). However, there is an issue. If you have failures during the process - you will be presented with full Ansible output which can be large enough (and thus cut). Almost every failure requires you to review log files (Ironic/Ansible). This is fine if deployment is being done by Ironic admin or person who have required experience. For other users it can be a blocker. Usually you just cannot determine root cause from the error message in last_failure field, which you can get both from console or UI. Error output was pretty straightforward with direct deploy interface. Is there a way to made Ansible error output better? E.g. to make it short and descriptive (“DNS registration failed bla-bla-bla”) Thanks, Ivan From yangyi01 at inspur.com Fri Jun 25 02:42:01 2021 From: yangyi01 at inspur.com (=?utf-8?B?WWkgWWFuZyAo5p2o54eaKS3kupHmnI3liqHpm4blm6I=?=) Date: Fri, 25 Jun 2021 02:42:01 +0000 Subject: =?utf-8?B?562U5aSNOiBbbmV1dHJvbl0gY2FuIGZsb2F0aW5nIElQIHBvcnQgZm9yd2Fy?= =?utf-8?B?ZGluZyB3b3JrIHdoZW4gYWdlbnRfbW9kZSBpcyBkdnIgb3IgZHZyX3NuYXQg?= =?utf-8?Q?but_on_compute_node=3F?= In-Reply-To: References: <902964b783584c4eb98d26d0e94c101a@inspur.com> Message-ID: Got it, thanks Roaolfo. 
From: Rodolfo Alonso Hernandez [mailto:ralonsoh at redhat.com]
Sent: June 24, 2021 18:39
To: Yi Yang (杨燚)-云服务集团
Cc: openstack-discuss at lists.openstack.org; reedip.banerjee at nectechnologies.in; gal.sagie at gmail.com; tian.mingming at h3c.com; zhaobo6 at huawei.com
Subject: Re: [neutron] can floating IP port forwarding work when agent_mode is dvr or dvr_snat but on compute node?

Hello Yi:

Yes, the FIP port forwarding is done in the network node [1][2].

Regards.

[1] https://specs.openstack.org/openstack/neutron-specs/specs/rocky/port-forwarding.html
[2] https://review.opendev.org/c/openstack/neutron/+/533850

On Thu, Jun 24, 2021 at 12:07 PM Yi Yang (杨燚)-云服务集团 > wrote: Hi, folks I’m working on https://bugs.launchpad.net/neutron/+bug/1931953, per my check for neutron-specs/specs/rocky/port-forwarding.rst, it seems port forwarding function only can be done on network node, so I don’t think we need to do it on compute node, can anybody help confirm this? Per my understanding, if a FIP is set port forwarding to two VMS across two compute node, physical switch has no way to know which compute node it should send to if destination IP is this FIP, right? But on centralized mode, this isn’t an issue.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3600 bytes
Desc: not available
URL:

From smooney at redhat.com Fri Jun 25 04:20:27 2021
From: smooney at redhat.com (Sean Mooney)
Date: Fri, 25 Jun 2021 05:20:27 +0100
Subject: [dev][nova] PCI IRQ Affinity for VMS with dedicated CPUs
In-Reply-To:
References:
Message-ID: <7a05ea34b50bf2e4837d040bb33138d7acbc87fb.camel@redhat.com>

On Thu, 2021-06-24 at 20:09 +0000, Soares Sarto, Ricardo wrote:
> Hello Everyone,
>
> Does NOVA provide any intelligent affining of IRQs of PCI Devices to CPUs
> pinned to VMs?
No. I tried to enable this back in Icehouse when I still worked at Intel, and the Red Hat virt and kernel folks said that we should not do this at the time.
>
> i.e. for VMs that have pci-passthrough interfaces to PCI devices and are
> using ‘dedicated’ CPU policy, does NOVA automatically affine the IRQs
> for those PCI devices the CPUs that are allocated to the VM?
They objected to this proposal on the grounds that if you have a real-time guest you do not want the IRQs to be delivered to the CPU cores on which the VM is running. You likely want them to be delivered to the non-pinned cores of the VM, or to the same socket the VM is pinned to, but you do not want the IRQs to interrupt your real-time application cores.

Nova should not really be managing this low-level system config directly either. It might be appropriate for libvirt to manage IRQ mapping, but nova should not be directly reconfiguring the IRQ mappings itself. This type of low-level system tuning has generally been considered out of scope for nova, which is why nova does not isolate pinned cores such that kernel and OS processes do not run on them; we direct operators to use tuned or, if you must, the kernel isolcpus parameter to do that. Dynamic IRQ management is, to me, too low-level a detail to be managed by nova directly unless libvirt elects to provide an interface for it. Even then I am not convinced we should do this by default or direct the IRQs to the dedicated VM cores; as I said, the opposite, directing the IRQs to the shared CPU cores, would seem more desirable.
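To make that concrete, this kind of out-of-band tuning is normally done against the kernel's own IRQ affinity interface (or via tuned/irqbalance policies) rather than through nova. A rough sketch only; the IRQ number and CPU list below are illustrative assumptions:

    # list the IRQs delivered for VFIO/passthrough devices
    grep -i vfio /proc/interrupts
    # show which CPUs IRQ 42 may currently be delivered to
    cat /proc/irq/42/smp_affinity_list
    # steer IRQ 42 to the host's shared/housekeeping cores (here CPUs 0-3)
    echo 0-3 | sudo tee /proc/irq/42/smp_affinity_list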
> > Furthermore, is there any flavor extraspec that can further scope the IRQ > affinity of the PCI Devices to a subset of the VM’s dedicated cpus? no this does not exist. > > If not, is this a new capability that the NOVA team would be open to? for me this woudl be a -1 no. but other might be open to it. it has previouls been reject upstream about 6-7 years ago. this would one of the things that my team at intel at the time tried to enable when cpu pining, numa and neutorn sriov support was being added. > > -rsoaress From rdhasman at redhat.com Fri Jun 25 04:27:28 2021 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Fri, 25 Jun 2021 09:57:28 +0530 Subject: [cinder] [replication] Error in cinder replication | Openstack Victoria In-Reply-To: References: Message-ID: Hi, On Thu, Jun 24, 2021 at 6:11 PM Swogat Pradhan wrote: > Hi, > I am trying to configure cinder replication with ceph as backend and i > followed this URL to configure the cinder replication: > https://netapp.io/2016/10/14/cinder-replication-netapp-perfect-cheesecake-recipe/ > and > http://www.sebastien-han.fr/blog/2017/06/19/OpenStack-Cinder-configure-replication-api-with-ceph/ > > The first link is specific to netapp driver so I'm not sure how it helps with ceph replication. For the second link, did you configure two ceph clusters properly and set the needed values with replication_device parameter? > i created a Volume type - REPL, with replication enabled True and > colume backend name parameter. > > But when i am trying to create a volume using the created volume type the > volume is getting created in ceph but in cinder the status is showing > error. and when checked log getting the following error: > > 2021-06-24 17:59:39.556 28472 ERROR cinder.volume.drivers.rbd > [req-6e6b901e-f2dd-43a4-a089-6c734b279e17 d576181b6a444541b8ec7f37d750a4c1 > a9ac2eba50d84d64ac44b179bbcc9183 - default default] Error creating rbd > image volume-361f35c4-a589-4653-b404-fb93d2f598cf.: > cinder.exception.ReplicationError: Volume > 361f35c4-a589-4653-b404-fb93d2f598cf replication error: Failed to enable > image replication > > Any operation could be failing in this method[1] on rbd side to raise the above error. I suggest it's better to check if you've setup replication correctly. [1] https://github.com/openstack/cinder/blob/stable/victoria/cinder/volume/drivers/rbd.py#L802 > I tried to create a volume using volume type- default and then > later changed the volume type using command 'cinder retype' to the created > volume type and it changed without any issue. > > I am using openstack victoria . > > With regards, > Swogat Pradhan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Fri Jun 25 04:39:13 2021 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 25 Jun 2021 10:09:13 +0530 Subject: [cinder] [replication] Error in cinder replication | Openstack Victoria In-Reply-To: References: Message-ID: Hi Rajat, I was using the 1st link as a reference. As per the 2nd link The rbd mirroring in ceph is configured properly, we've tested by creating an image in ceph itself the mirroring is functioning properly. 
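For reference, the REPL volume type was defined roughly along these lines (a sketch; the backend name below is an assumption and has to match the backend section name in cinder.conf):

    openstack volume type create REPL
    openstack volume type set REPL \
      --property replication_enabled='<is> True' \
      --property volume_backend_name=ceph
    openstack volume create --size 10 --type REPL repl-test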
The problem we are facing is in cinder, apparently there is a bug https://bugs.launchpad.net/cinder/+bug/1898728 which states that cinder retype (to replication enabled volume type) does not enable replication and when we are creating a volume directly in cinder using replication enabled volume type, the volume is getting created in ceph but in cinder the volume stays in error state (as a result we are unable to use it in instances primary disk) On Fri, Jun 25, 2021 at 9:57 AM Rajat Dhasmana wrote: > Hi, > > On Thu, Jun 24, 2021 at 6:11 PM Swogat Pradhan > wrote: > >> Hi, >> I am trying to configure cinder replication with ceph as backend and i >> followed this URL to configure the cinder replication: >> https://netapp.io/2016/10/14/cinder-replication-netapp-perfect-cheesecake-recipe/ >> and >> http://www.sebastien-han.fr/blog/2017/06/19/OpenStack-Cinder-configure-replication-api-with-ceph/ >> >> > The first link is specific to netapp driver so I'm not sure how it helps > with ceph replication. For the second link, did you configure two ceph > clusters properly and set the needed values with replication_device > parameter? > > >> i created a Volume type - REPL, with replication enabled True and >> colume backend name parameter. >> >> But when i am trying to create a volume using the created volume type the >> volume is getting created in ceph but in cinder the status is showing >> error. and when checked log getting the following error: >> >> 2021-06-24 17:59:39.556 28472 ERROR cinder.volume.drivers.rbd >> [req-6e6b901e-f2dd-43a4-a089-6c734b279e17 d576181b6a444541b8ec7f37d750a4c1 >> a9ac2eba50d84d64ac44b179bbcc9183 - default default] Error creating rbd >> image volume-361f35c4-a589-4653-b404-fb93d2f598cf.: >> cinder.exception.ReplicationError: Volume >> 361f35c4-a589-4653-b404-fb93d2f598cf replication error: Failed to enable >> image replication >> >> > Any operation could be failing in this method[1] on rbd side to raise the > above error. I suggest it's better to check if you've setup replication > correctly. > > [1] > https://github.com/openstack/cinder/blob/stable/victoria/cinder/volume/drivers/rbd.py#L802 > > >> I tried to create a volume using volume type- default and then >> later changed the volume type using command 'cinder retype' to the created >> volume type and it changed without any issue. >> >> I am using openstack victoria . >> >> With regards, >> Swogat Pradhan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Fri Jun 25 05:02:16 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Fri, 25 Jun 2021 10:02:16 +0500 Subject: [wallaby][nova] CPU topology and NUMA Nodes Message-ID: Hi, I am using openstack wallaby on ubuntu 20.04 and kvm. I am working to make optimized flavor properties that should provide optimal performance. I was reviewing the document below. https://docs.openstack.org/nova/wallaby/admin/cpu-topologies.html I have two socket AMD compute node. The workload running on nodes are mixed workload. My question is should I use default nova CPU topology and NUMA node that nova deploys instance by default OR should I use hw:cpu_sockets='2' and hw:numa_nodes='2'. Which one from above provide best instance performance ? or any other tuning should I do ? The note in the URL (CPU topology sesion) suggests that I should stay with default options that nova provides. Currently it also works with libvirt/QEMU driver but we don’t recommend it in production use cases. 
This is because vCPUs are actually running in one thread on host in qemu TCG (Tiny Code Generator), which is the backend for libvirt/QEMU driver. Work to enable full multi-threading support for TCG (a.k.a. MTTCG) is on going in QEMU community. Please see this MTTCG project page for detail. Ammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From vikarnatathe at gmail.com Fri Jun 25 05:39:49 2021 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Fri, 25 Jun 2021 11:09:49 +0530 Subject: [wallaby][nova] CPU topology and NUMA Nodes In-Reply-To: References: Message-ID: Hi Ammad, It depends on your VM requirements. NUMA basically maps nic-mem-cpu. So if the VM is high traffic catering, you should define the NUMA properties in the flavor. Vikarna On Fri, 25 Jun 2021 at 10:34, Ammad Syed wrote: > Hi, > > I am using openstack wallaby on ubuntu 20.04 and kvm. I am working to make > optimized flavor properties that should provide optimal performance. I was > reviewing the document below. > > https://docs.openstack.org/nova/wallaby/admin/cpu-topologies.html > > I have two socket AMD compute node. The workload running on nodes are > mixed workload. > > My question is should I use default nova CPU topology and NUMA node that > nova deploys instance by default OR should I use hw:cpu_sockets='2' > and hw:numa_nodes='2'. > > Which one from above provide best instance performance ? or any other > tuning should I do ? > > The note in the URL (CPU topology sesion) suggests that I should stay with > default options that nova provides. > > Currently it also works with libvirt/QEMU driver but we don’t recommend it > in production use cases. This is because vCPUs are actually running in one > thread on host in qemu TCG (Tiny Code Generator), which is the backend for > libvirt/QEMU driver. Work to enable full multi-threading support for TCG > (a.k.a. MTTCG) is on going in QEMU community. Please see this MTTCG > project page for detail. > > > Ammad > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Jun 25 05:54:28 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 25 Jun 2021 06:54:28 +0100 Subject: [wallaby][nova] CPU topology and NUMA Nodes In-Reply-To: References: Message-ID: <9b6d248665ced4f826fedddd2ccb4649dd148273.camel@redhat.com> On Fri, 2021-06-25 at 10:02 +0500, Ammad Syed wrote: > Hi, > > I am using openstack wallaby on ubuntu 20.04 and kvm. I am working to make > optimized flavor properties that should provide optimal performance. I was > reviewing the document below. > > https://docs.openstack.org/nova/wallaby/admin/cpu-topologies.html > > I have two socket AMD compute node. The workload running on nodes are mixed > workload. > > My question is should I use default nova CPU topology and NUMA node that > nova deploys instance by default OR should I use hw:cpu_sockets='2' > and hw:numa_nodes='2'. the latter hw:cpu_sockets='2' and hw:numa_nodes='2' should give you better performce however you should also set hw:mem_page_size=small or hw:mem_page_size=any when you enable virtual numa policies we afinities the guest memory to host numa nodes. This can lead to Out of memory evnet on the the host numa nodes which can result in vms being killed by the host kernel memeory reaper if you do not enable numa aware memeory trackign iin nova which is done by setting hw:mem_page_size. setting hw:mem_page_size has the side effect of of disabling memory over commit so you have to bare that in mind. 
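As a concrete sketch (the flavor name and sizing below are illustrative assumptions only), the properties discussed above are applied as flavor extra specs from the CLI:

    openstack flavor create --vcpus 8 --ram 16384 --disk 40 example.numa
    openstack flavor set example.numa \
      --property hw:cpu_sockets=2 \
      --property hw:numa_nodes=2 \
      --property hw:mem_page_size=small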
If you are using a NUMA topology you should almost always also use hugepages, which are enabled using hw:mem_page_size=large; this however requires you to configure hugepages on the host at boot.
>
> Which one from above provide best instance performance ? or any other
> tuning should I do ?
In the libvirt driver the default CPU topology we will generate is 1 thread per core, 1 core per socket and 1 socket per flavor.vcpu. (Technically this is an undocumented implementation detail that you should not rely on; we have the hw:cpu_* elements if you care about the topology.) This was more efficient in the early days of qemu/openstack, but it has many issues when software is charged per socket or when operating systems have a limit on the number of sockets supported, such as Windows.

Generally I advise that you set hw:cpu_sockets to the typical number of sockets on the underlying host. Similarly, if the flavor will only be run on hosts with SMT/hyperthreading enabled, you should set hw:cpu_threads=2. The flavor.vcpus must be divisible by the product of hw:cpu_sockets, hw:cpu_cores and hw:cpu_threads if they are set. So if you have hw:cpu_threads=2 it must be divisible by 2; if you have hw:cpu_threads=2 and hw:cpu_sockets=2, flavor.vcpus must be a multiple of 4.
>
> The note in the URL (CPU topology sesion) suggests that I should stay with
> default options that nova provides.
In general, no: you should align it to the host topology if you have a similar topology across your data center. The default should always just work, but it is not necessarily optimal, and Windows guests might not boot if you have too many sockets. Windows 10, for example, only supports 2 sockets, so you could only have 2 flavor.vcpus if you used the default topology.
>
> Currently it also works with libvirt/QEMU driver but we don’t recommend it
> in production use cases. This is because vCPUs are actually running in one
> thread on host in qemu TCG (Tiny Code Generator), which is the backend for
> libvirt/QEMU driver. Work to enable full multi-threading support for TCG
> (a.k.a. MTTCG) is on going in QEMU community. Please see this MTTCG project
> page for detail.
We do not generally recommend using qemu without kvm in production. The MTTCG backend is useful in cases where you want to emulate another platform, but that use case is not currently supported in nova. For your deployment you should use libvirt with kvm, and you should also consider whether you want to support nested virtualisation or not.
>
>
> Ammad

From stephenfin at redhat.com Fri Jun 25 10:40:42 2021
From: stephenfin at redhat.com (Stephen Finucane)
Date: Fri, 25 Jun 2021 11:40:42 +0100
Subject: [nova][os-vif][stable] Adding os-vif-core to os-vif stable branches
Message-ID: <5316187e52240c1a91d7952418fc25d9ddf6b540.camel@redhat.com>

Happy Friday,

Per $subject, I'd like to propose that we add os-vif-core to the list of teams with +2 rights on os-vif stable branches. Currently this list is restricted to "Project Bootstrappers", nova-stable-maint and stable-maint-core. os-vif doesn't see many backports, but it is a rather specialist library that requires some domain-specific knowledge to grok and the existing stable reviewers typically insist on reviews from core reviewers before approving changes. By making this change, we can avoid this little dance.

We have discussed this already on #openstack-nova, but if there are any concerns, please raise them now. If not, I'll seek to have the relevant changes merged to openstack/project-config by the end of next week. I'll post these changes shortly.
Cheers, Stephen [1] https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2021-06-02.log.html#t2021-06-02T12:42:10 From stephenfin at redhat.com Fri Jun 25 10:42:48 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 25 Jun 2021 11:42:48 +0100 Subject: [nova][os-vif][stable] Adding os-vif-core to os-vif stable branches In-Reply-To: <5316187e52240c1a91d7952418fc25d9ddf6b540.camel@redhat.com> References: <5316187e52240c1a91d7952418fc25d9ddf6b540.camel@redhat.com> Message-ID: <0a18a586394d2f51f767c38904846c17aee4957e.camel@redhat.com> On Fri, 2021-06-25 at 11:40 +0100, Stephen Finucane wrote: > Happy Friday, > > Per $subject, I'd like to propose that we add os-vif-core to the list of teams > with +2 rights on os-vif stable branches. Currently this list is restricted to > "Project Bootstrappers", nova-stable-maint and stable-maint-core. os-vif doesn't > see many backports, but it is a rather specialist library that requires some > domain-specific knowledge to grok and the existing stable reviewers typically > insist on reviews from core reviewers before approving changes. By making this > change, we can avoid this little dance. > > We have discussed this already on #openstack-nova, but if there are any > concerns, please raise them now. If not, I'll seek to have the relevant changes > merged to openstack/project-config by the end of next week. I'll post these > changes shortly. Change proposed at [1]. I have -W'd it pending any feedback. Stephen [1] https://review.opendev.org/c/openstack/project-config/+/798071 > Cheers, > Stephen > > [1] https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2021-06-02.log.html#t2021-06-02T12:42:10 > From stephenfin at redhat.com Fri Jun 25 11:10:22 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 25 Jun 2021 12:10:22 +0100 Subject: [nova][stable] Adding nova-core to nova stable branches Message-ID: Happy Friday (again), Following on from my earlier email regarding os-vif stable branch permissions, I'd like to once again propose that we add nova-core to the list of teams with +2 rights on nova stable branches. Unlike the os-vif change, this has been discussed extensively in the past, most recently during the Victoria PTG [1] (search for "Can we add nova-core to nova-stable-maint") and therefore carries a little more baggage with it. Last time we discussed this, there were concerns from some existing stable cores that not all of nova-core were sufficiently well acquainted with stable branch policy and it was suggested that review velocity was sufficient. However, the stable team has remained pretty stable (heh) over the last couple of cycles and is staffed with developers with ample experience of both nova itself and the backport policies of nova and OpenStack as a whole. In addition, we continue to have a long tail of open stable reviews [2] from various people. The stable branches are increasingly important for users and organizations alike, and a faster, smoother backport policy encourages people to have an upstream-first mentality when it comes to maintaining a product built on these branches. The alternative is to handle these backports in a downstream-first or downstream- only manner, which results in duplication of effort and, over the long-term, potential difficulties keeping downstream and upstream in-sync. I think the time has long since passed where we should remove the artificial barrier that exists here. 
I expect all nova cores will have the maturity and sensibility to ask questions where uncertainty exists and only approve backports that they are confident and knowledgeable about. I do not expect existing stable cores will be sidelined either. Far from it, in fact. Existing stable cores will continue to be valuable both for their reviews and knowledge of the intricacies of stable policy. I proposed a waiting period of one week before making changes to os-vif permissions, but given the history around changes to nova stable core, I think we would benefit from some additional time to assuage any concerns that may still exist. I propose that we come to a decision by 09-Aug, or two weeks from now. If necessary, we can discuss this in real-time during the nova team meeting next Tuesday. Cheers, Stephen [1] https://etherpad.opendev.org/p/nova-victoria-ptg [2] https://review.opendev.org/q/project:openstack/nova+NOT+branch:master+is:open From sean.mcginnis at gmx.com Fri Jun 25 11:10:40 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 25 Jun 2021 06:10:40 -0500 Subject: [cinder] propose Sofia Enriquez for cinder core In-Reply-To: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> References: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> Message-ID: <20210625111040.GA4135146@sm-workstation> On Thu, Jun 24, 2021 at 04:58:56PM -0400, Brian Rosmaita wrote: > Sofia Enriquez (enriquetaso on IRC) has been active the past few cycles > submitting patches, doing reviews, and participating in the Cinder weekly > meeting, PTGs, and midcycles. She has also been acting as the Cinder Bug > Deputy for Xena and has been running the Bug Squad meeting each week. > Something I particularly appreciate is that she has included the > cinder-tempest-plugin in her range of patches and reviews. > > Cinder and its various deliverables make up a large code base. Sofia has > demonstrated a willingness and capacity to learn about the code, and I > anticipate that she will continue to deepen her cinder knowledge as she > continues to work in the project. At the same time, I believe that she is > self-aware enough to know the limits of her knowledge and can be trusted not > to approve patches that she doesn't understand. Above all, she will bring > some much needed review bandwidth to the project. > > In the absence of objections, I'll add Sofia to the core team just before > the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). Please > communicate any concerns to me before that time. > > cheers, > brian > +1 from me. Sofia has been a great help to the project. From stephenfin at redhat.com Fri Jun 25 11:15:54 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 25 Jun 2021 12:15:54 +0100 Subject: [nova][stable] Adding nova-core to nova stable branches In-Reply-To: References: Message-ID: <26be248192a104b4da62fccd6fb5e369640557ba.camel@redhat.com> On Fri, 2021-06-25 at 12:10 +0100, Stephen Finucane wrote: > Happy Friday (again), > > Following on from my earlier email regarding os-vif stable branch permissions, > I'd like to once again propose that we add nova-core to the list of teams with > +2 rights on nova stable branches. Unlike the os-vif change, this has been > discussed extensively in the past, most recently during the Victoria PTG [1] > (search for "Can we add nova-core to nova-stable-maint") and therefore carries a > little more baggage with it. 
> > Last time we discussed this, there were concerns from some existing stable cores > that not all of nova-core were sufficiently well acquainted with stable branch > policy and it was suggested that review velocity was sufficient. However, the > stable team has remained pretty stable (heh) over the last couple of cycles and > is staffed with developers with ample experience of both nova itself and the > backport policies of nova and OpenStack as a whole. In addition, we continue to > have a long tail of open stable reviews [2] from various people. The stable > branches are increasingly important for users and organizations alike, and a > faster, smoother backport policy encourages people to have an upstream-first > mentality when it comes to maintaining a product built on these branches. The > alternative is to handle these backports in a downstream-first or downstream- > only manner, which results in duplication of effort and, over the long-term, > potential difficulties keeping downstream and upstream in-sync. > > I think the time has long since passed where we should remove the artificial > barrier that exists here. I expect all nova cores will have the maturity and > sensibility to ask questions where uncertainty exists and only approve backports > that they are confident and knowledgeable about. I do not expect existing stable > cores will be sidelined either. Far from it, in fact. Existing stable cores will > continue to be valuable both for their reviews and knowledge of the intricacies > of stable policy. > > I proposed a waiting period of one week before making changes to os-vif > permissions, but given the history around changes to nova stable core, I think > we would benefit from some additional time to assuage any concerns that may > still exist. I propose that we come to a decision by 09-Aug, or two weeks from > now. If necessary, we can discuss this in real-time during the nova team meeting > next Tuesday. Change proposed at [1]. As with the os-vif change, I have -W'd pending discussions. Stephen [1] https://review.opendev.org/c/openstack/project-config/+/798077 > Cheers, > Stephen > > [1] https://etherpad.opendev.org/p/nova-victoria-ptg > [2] https://review.opendev.org/q/project:openstack/nova+NOT+branch:master+is:open > From rosmaita.fossdev at gmail.com Fri Jun 25 12:03:31 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 25 Jun 2021 08:03:31 -0400 Subject: [cinder] upcoming driver deadlines Message-ID: <23de5dd4-2fc2-1a3a-df87-03d348ae50f1@gmail.com> This is a reminder that the cinder New Driver Merge Deadline is Thursday 15 July 2021 [0]. Please make sure *right now* that you've filed a Launchpad blueprint for your driver and that your blueprint shows up in this list: https://blueprints.launchpad.net/cinder/xena (if it doesn't, contact me off list) If you're adding a new feature to a driver, you should also file a Lanuchpad blueprint (it helps us prioritize reviews). See the Xena release schedule for more info about the Cinder Driver Features Declaration [1]. New features for current drivers follow the regular Feature Freeze policy (that is, must be merged by Milestone 3). 
cheers, brian [0] https://releases.openstack.org/xena/schedule.html#x-cinder-driver-deadline [1] https://releases.openstack.org/xena/schedule.html#x-cinder-driver-features-declaration From balazs.gibizer at est.tech Fri Jun 25 12:14:13 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 25 Jun 2021 14:14:13 +0200 Subject: [nova][os-vif][stable] Adding os-vif-core to os-vif stable branches In-Reply-To: <0a18a586394d2f51f767c38904846c17aee4957e.camel@redhat.com> References: <5316187e52240c1a91d7952418fc25d9ddf6b540.camel@redhat.com> <0a18a586394d2f51f767c38904846c17aee4957e.camel@redhat.com> Message-ID: On Fri, Jun 25, 2021 at 11:42, Stephen Finucane wrote: > On Fri, 2021-06-25 at 11:40 +0100, Stephen Finucane wrote: >> Happy Friday, >> >> Per $subject, I'd like to propose that we add os-vif-core to the >> list of teams >> with +2 rights on os-vif stable branches. Currently this list is >> restricted to >> "Project Bootstrappers", nova-stable-maint and stable-maint-core. >> os-vif doesn't >> see many backports, but it is a rather specialist library that >> requires some >> domain-specific knowledge to grok and the existing stable reviewers >> typically >> insist on reviews from core reviewers before approving changes. By >> making this >> change, we can avoid this little dance. >> I'm OK with this change. Just a note that os-vif-core contains nova-core so by this change all nova cores get stable rights in os-vif. Cheers, gibi >> >> We have discussed this already on #openstack-nova, but if there are >> any >> concerns, please raise them now. If not, I'll seek to have the >> relevant changes >> merged to openstack/project-config by the end of next week. I'll >> post these >> changes shortly. > > Change proposed at [1]. I have -W'd it pending any feedback. > > Stephen > > [1] https://review.opendev.org/c/openstack/project-config/+/798071 >> Cheers, >> Stephen >> >> [1] >> https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2021-06-02.log.html#t2021-06-02T12:42:10 >> From geguileo at redhat.com Fri Jun 25 12:39:29 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 25 Jun 2021 14:39:29 +0200 Subject: [cinder] propose Sofia Enriquez for cinder core In-Reply-To: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> References: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> Message-ID: <20210625123929.4xt67sfmvrsz6dz2@localhost> On 24/06, Brian Rosmaita wrote: > Sofia Enriquez (enriquetaso on IRC) has been active the past few cycles > submitting patches, doing reviews, and participating in the Cinder weekly > meeting, PTGs, and midcycles. She has also been acting as the Cinder Bug > Deputy for Xena and has been running the Bug Squad meeting each week. > Something I particularly appreciate is that she has included the > cinder-tempest-plugin in her range of patches and reviews. > > Cinder and its various deliverables make up a large code base. Sofia has > demonstrated a willingness and capacity to learn about the code, and I > anticipate that she will continue to deepen her cinder knowledge as she > continues to work in the project. At the same time, I believe that she is > self-aware enough to know the limits of her knowledge and can be trusted not > to approve patches that she doesn't understand. Above all, she will bring > some much needed review bandwidth to the project. > > In the absence of objections, I'll add Sofia to the core team just before > the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). 
Please > communicate any concerns to me before that time. > > cheers, > brian > +1 I agree, she's doing great efforts and she is cautious in her judgment. From rdhasman at redhat.com Fri Jun 25 12:54:59 2021 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Fri, 25 Jun 2021 18:24:59 +0530 Subject: [cinder] propose Sofia Enriquez for cinder core In-Reply-To: <20210625123929.4xt67sfmvrsz6dz2@localhost> References: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> <20210625123929.4xt67sfmvrsz6dz2@localhost> Message-ID: +1 She has made a lot of contribution to the cinder community in terms of good patches and reviews. On Fri, Jun 25, 2021 at 6:11 PM Gorka Eguileor wrote: > On 24/06, Brian Rosmaita wrote: > > Sofia Enriquez (enriquetaso on IRC) has been active the past few cycles > > submitting patches, doing reviews, and participating in the Cinder weekly > > meeting, PTGs, and midcycles. She has also been acting as the Cinder Bug > > Deputy for Xena and has been running the Bug Squad meeting each week. > > Something I particularly appreciate is that she has included the > > cinder-tempest-plugin in her range of patches and reviews. > > > > Cinder and its various deliverables make up a large code base. Sofia has > > demonstrated a willingness and capacity to learn about the code, and I > > anticipate that she will continue to deepen her cinder knowledge as she > > continues to work in the project. At the same time, I believe that she > is > > self-aware enough to know the limits of her knowledge and can be trusted > not > > to approve patches that she doesn't understand. Above all, she will > bring > > some much needed review bandwidth to the project. > > > > In the absence of objections, I'll add Sofia to the core team just before > > the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). Please > > communicate any concerns to me before that time. > > > > cheers, > > brian > > > > +1 > > I agree, she's doing great efforts and she is cautious in her judgment. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Jun 25 12:59:24 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 25 Jun 2021 13:59:24 +0100 Subject: [nova][os-vif][stable] Adding os-vif-core to os-vif stable branches In-Reply-To: References: <5316187e52240c1a91d7952418fc25d9ddf6b540.camel@redhat.com> <0a18a586394d2f51f767c38904846c17aee4957e.camel@redhat.com> Message-ID: On Fri, 2021-06-25 at 14:14 +0200, Balazs Gibizer wrote: > > On Fri, Jun 25, 2021 at 11:42, Stephen Finucane > wrote: > > On Fri, 2021-06-25 at 11:40 +0100, Stephen Finucane wrote: > > > Happy Friday, > > > > > > Per $subject, I'd like to propose that we add os-vif-core to the > > > list of teams > > > with +2 rights on os-vif stable branches. Currently this list is > > > restricted to > > > "Project Bootstrappers", nova-stable-maint and stable-maint-core. > > > os-vif doesn't > > > see many backports, but it is a rather specialist library that > > > requires some > > > domain-specific knowledge to grok and the existing stable reviewers > > > typically > > > insist on reviews from core reviewers before approving changes. By > > > making this > > > change, we can avoid this little dance. > > > > > I'm OK with this change. > > Just a note that os-vif-core contains nova-core so by this change all > nova cores get stable rights in os-vif. i think im ok with that. the preence of the nova-core group in os-vif was alway more of a courtesy. 
courtesy is not really the right word, but it comes with the expectation that there is no requirement to review, and if you don't feel comfortable reviewing the change due to a lack of context or any other reason then you should feel free to not review it. I think the same applies here for stable. Just because you can review the stable change does not mean you have to, or that you have to +2/+w it if you are either unfamiliar with the stable policies or otherwise don't feel equipped to review. I think the nova core team is generally good at self-moderating what they review, etc., so I don't think this is really that risky, although there is a tie-in to the nova-core and nova-stable-maint discussion. > > Cheers, > gibi > > > > > > > We have discussed this already on #openstack-nova, but if there are > > > any > > > concerns, please raise them now. If not, I'll seek to have the > > > relevant changes > > > merged to openstack/project-config by the end of next week. I'll > > > post these > > > changes shortly. > > > > Change proposed at [1]. I have -W'd it pending any feedback. > > > > Stephen > > > > [1] https://review.opendev.org/c/openstack/project-config/+/798071 > > > Cheers, > > > Stephen > > > > > > [1] > > > https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2021-06-02.log.html#t2021-06-02T12:42:10 > > > > > > From jungleboyj at gmail.com Fri Jun 25 13:02:20 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 25 Jun 2021 08:02:20 -0500 Subject: [cinder] propose Sofia Enriquez for cinder core In-Reply-To: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> References: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> Message-ID: <870fbc5d-a6eb-c540-0e92-5f4112fcc584@gmail.com> On 6/24/2021 3:58 PM, Brian Rosmaita wrote: > Sofia Enriquez (enriquetaso on IRC) has been active the past few > cycles submitting patches, doing reviews, and participating in the > Cinder weekly meeting, PTGs, and midcycles. She has also been acting > as the Cinder Bug Deputy for Xena and has been running the Bug Squad > meeting each week. Something I particularly appreciate is that she > has included the cinder-tempest-plugin in her range of patches and > reviews. > > Cinder and its various deliverables make up a large code base. Sofia > has demonstrated a willingness and capacity to learn about the code, > and I anticipate that she will continue to deepen her cinder knowledge > as she continues to work in the project. At the same time, I believe > that she is self-aware enough to know the limits of her knowledge and > can be trusted not to approve patches that she doesn't understand. > Above all, she will bring some much needed review bandwidth to the > project. > Definite +1 from me. She has really grown her contributions to Cinder in the last year and I think she will be a great addition to the core team! Jay > In the absence of objections, I'll add Sofia to the core team just > before the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). > Please communicate any concerns to me before that time. > > cheers, > brian > From geguileo at redhat.com Fri Jun 25 13:06:49 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 25 Jun 2021 15:06:49 +0200 Subject: [cinder] [replication] Error in cinder replication | Openstack Victoria In-Reply-To: References: Message-ID: <20210625130649.sw2thkxr3h7rghgx@localhost> On 25/06, Swogat Pradhan wrote: > Hi Rajat, > I was using the 1st link as a reference.
As per the 2nd link The rbd > mirroring in ceph is configured properly, we've tested by creating an image > in ceph itself the mirroring is functioning properly. The problem we are Hi, If I remember correctly there are 2 types of mirroring, per-pool and per-image. When you say it's mirroring properly, do you mean you have enabled pool level replication? As in, any image created in that pool will be automatically replicated? If that's the case, then you have to change it, because Cinder replication works on a per image basis. > facing is in cinder, apparently there is a bug > https://bugs.launchpad.net/cinder/+bug/1898728 which states that cinder > retype (to replication enabled volume type) does not enable replication and > when we are creating a volume directly in cinder using replication enabled > volume type, the volume is getting created in ceph but in cinder the volume > stays in error state (as a result we are unable to use it in instances > primary disk) Those are 2 different cases, creating a new volume is different from retyping. The bug you mention describes that the problem on retype happens because the new type is a dictionary and not a Volume instance, so the extra_specs attribute doesn't exist. In your case I would say the issue is related to how the pool is configured in the Ceph cluster. Cheers, Gorka. > > On Fri, Jun 25, 2021 at 9:57 AM Rajat Dhasmana wrote: > > > Hi, > > > > On Thu, Jun 24, 2021 at 6:11 PM Swogat Pradhan > > wrote: > > > >> Hi, > >> I am trying to configure cinder replication with ceph as backend and i > >> followed this URL to configure the cinder replication: > >> https://netapp.io/2016/10/14/cinder-replication-netapp-perfect-cheesecake-recipe/ > >> and > >> http://www.sebastien-han.fr/blog/2017/06/19/OpenStack-Cinder-configure-replication-api-with-ceph/ > >> > >> > > The first link is specific to netapp driver so I'm not sure how it helps > > with ceph replication. For the second link, did you configure two ceph > > clusters properly and set the needed values with replication_device > > parameter? > > > > > >> i created a Volume type - REPL, with replication enabled True and > >> colume backend name parameter. > >> > >> But when i am trying to create a volume using the created volume type the > >> volume is getting created in ceph but in cinder the status is showing > >> error. and when checked log getting the following error: > >> > >> 2021-06-24 17:59:39.556 28472 ERROR cinder.volume.drivers.rbd > >> [req-6e6b901e-f2dd-43a4-a089-6c734b279e17 d576181b6a444541b8ec7f37d750a4c1 > >> a9ac2eba50d84d64ac44b179bbcc9183 - default default] Error creating rbd > >> image volume-361f35c4-a589-4653-b404-fb93d2f598cf.: > >> cinder.exception.ReplicationError: Volume > >> 361f35c4-a589-4653-b404-fb93d2f598cf replication error: Failed to enable > >> image replication > >> > >> > > Any operation could be failing in this method[1] on rbd side to raise the > > above error. I suggest it's better to check if you've setup replication > > correctly. > > > > [1] > > https://github.com/openstack/cinder/blob/stable/victoria/cinder/volume/drivers/rbd.py#L802 > > > > > >> I tried to create a volume using volume type- default and then > >> later changed the volume type using command 'cinder retype' to the created > >> volume type and it changed without any issue. > >> > >> I am using openstack victoria . 
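For completeness, the Ceph-side change being pointed at here is switching the pool from pool-mode to image-mode mirroring, which is the mode the Cinder RBD driver actually drives. A rough sketch, run against both clusters and assuming the Cinder pool is named "volumes" (the pool name is only illustrative, not taken from this deployment):

rbd mirror pool disable volumes
rbd mirror pool enable volumes image

With image mode in place, the driver itself enables the exclusive-lock and journaling features and turns mirroring on for each volume created from a type carrying replication_enabled='<is> True', so no per-image rbd mirror commands should be needed.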
> >> > >> With regards, > >> Swogat Pradhan > >> > > From eharney at redhat.com Fri Jun 25 13:26:04 2021 From: eharney at redhat.com (Eric Harney) Date: Fri, 25 Jun 2021 09:26:04 -0400 Subject: [cinder] propose Sofia Enriquez for cinder core In-Reply-To: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> References: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> Message-ID: <82886257-2e0f-5b6b-8f52-01bbcd4ead74@redhat.com> On 6/24/21 4:58 PM, Brian Rosmaita wrote: > Sofia Enriquez (enriquetaso on IRC) has been active the past few cycles > submitting patches, doing reviews, and participating in the Cinder > weekly meeting, PTGs, and midcycles.  She has also been acting as the > Cinder Bug Deputy for Xena and has been running the Bug Squad meeting > each week.  Something I particularly appreciate is that she has included > the cinder-tempest-plugin in her range of patches and reviews. > > Cinder and its various deliverables make up a large code base.  Sofia > has demonstrated a willingness and capacity to learn about the code, and > I anticipate that she will continue to deepen her cinder knowledge as > she continues to work in the project.  At the same time, I believe that > she is self-aware enough to know the limits of her knowledge and can be > trusted not to approve patches that she doesn't understand.  Above all, > she will bring some much needed review bandwidth to the project. > > In the absence of objections, I'll add Sofia to the core team just > before the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). > Please communicate any concerns to me before that time. > > cheers, > brian > +1 from me. Sofia has been steadily contributing for a while and helps a lot. From victoria at vmartinezdelacruz.com Fri Jun 25 13:29:32 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Fri, 25 Jun 2021 15:29:32 +0200 Subject: [cinder] propose Sofia Enriquez for cinder core In-Reply-To: <870fbc5d-a6eb-c540-0e92-5f4112fcc584@gmail.com> References: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> <870fbc5d-a6eb-c540-0e92-5f4112fcc584@gmail.com> Message-ID: Unofficial +1 :) Sofi has been involved with the Cinder community for a couple of years now and have been doing great contributions since she started. It's also worth noting that Sofi has contributed by getting new people involved with the Cinder community, in previous Outreachy internships and also on mentoring activities such as the opensource day in the Grace Hopper conference. Happy to see this nomination! Best, V On Fri, Jun 25, 2021 at 3:05 PM Jay Bryant wrote: > > On 6/24/2021 3:58 PM, Brian Rosmaita wrote: > > Sofia Enriquez (enriquetaso on IRC) has been active the past few > > cycles submitting patches, doing reviews, and participating in the > > Cinder weekly meeting, PTGs, and midcycles. She has also been acting > > as the Cinder Bug Deputy for Xena and has been running the Bug Squad > > meeting each week. Something I particularly appreciate is that she > > has included the cinder-tempest-plugin in her range of patches and > > reviews. > > > > Cinder and its various deliverables make up a large code base. Sofia > > has demonstrated a willingness and capacity to learn about the code, > > and I anticipate that she will continue to deepen her cinder knowledge > > as she continues to work in the project. At the same time, I believe > > that she is self-aware enough to know the limits of her knowledge and > > can be trusted not to approve patches that she doesn't understand. 
> > Above all, she will bring some much needed review bandwidth to the > > project. > > > Definite +1 from me. She has really grown her contributions to Cinder > in the last year and I think she will be a great addition to the core team! > > Jay > > > In the absence of objections, I'll add Sofia to the core team just > > before the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). > > Please communicate any concerns to me before that time. > > > > cheers, > > brian > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From waboring at hemna.com Fri Jun 25 14:19:07 2021 From: waboring at hemna.com (Walter Boring) Date: Fri, 25 Jun 2021 10:19:07 -0400 Subject: [cinder] propose Sofia Enriquez for cinder core In-Reply-To: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> References: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> Message-ID: +1 from me! On Thu, Jun 24, 2021 at 5:03 PM Brian Rosmaita wrote: > Sofia Enriquez (enriquetaso on IRC) has been active the past few cycles > submitting patches, doing reviews, and participating in the Cinder > weekly meeting, PTGs, and midcycles. She has also been acting as the > Cinder Bug Deputy for Xena and has been running the Bug Squad meeting > each week. Something I particularly appreciate is that she has included > the cinder-tempest-plugin in her range of patches and reviews. > > Cinder and its various deliverables make up a large code base. Sofia > has demonstrated a willingness and capacity to learn about the code, and > I anticipate that she will continue to deepen her cinder knowledge as > she continues to work in the project. At the same time, I believe that > she is self-aware enough to know the limits of her knowledge and can be > trusted not to approve patches that she doesn't understand. Above all, > she will bring some much needed review bandwidth to the project. > > In the absence of objections, I'll add Sofia to the core team just > before the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). > Please communicate any concerns to me before that time. > > cheers, > brian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Fri Jun 25 15:54:02 2021 From: abishop at redhat.com (Alan Bishop) Date: Fri, 25 Jun 2021 08:54:02 -0700 Subject: [cinder] propose Sofia Enriquez for cinder core In-Reply-To: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> References: <3a5c7cf8-bd53-2f27-1373-d1a9efa3505c@gmail.com> Message-ID: A definite (but unofficial) +1 for me! On Thu, Jun 24, 2021 at 2:03 PM Brian Rosmaita wrote: > Sofia Enriquez (enriquetaso on IRC) has been active the past few cycles > submitting patches, doing reviews, and participating in the Cinder > weekly meeting, PTGs, and midcycles. She has also been acting as the > Cinder Bug Deputy for Xena and has been running the Bug Squad meeting > each week. Something I particularly appreciate is that she has included > the cinder-tempest-plugin in her range of patches and reviews. > > Cinder and its various deliverables make up a large code base. Sofia > has demonstrated a willingness and capacity to learn about the code, and > I anticipate that she will continue to deepen her cinder knowledge as > she continues to work in the project. At the same time, I believe that > she is self-aware enough to know the limits of her knowledge and can be > trusted not to approve patches that she doesn't understand. Above all, > she will bring some much needed review bandwidth to the project. 
> > In the absence of objections, I'll add Sofia to the core team just > before the next Cinder team meeting (Wednesday, 30 June at 1400 UTC). > Please communicate any concerns to me before that time. > > cheers, > brian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Jun 25 16:11:30 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 25 Jun 2021 16:11:30 +0000 Subject: [barbican][kolla][neutron][puppet-openstack][requirements] Please clean up Zuul configuration errors In-Reply-To: <20210624191918.uaud2ubpjp44heir@yuggoth.org> References: <20210624191918.uaud2ubpjp44heir@yuggoth.org> Message-ID: <20210625161130.ki3prg2bzspitrig@yuggoth.org> For the teams tagged in the subject, please have a look at https://zuul.opendev.org/t/openstack/config-errors and merge fixes to your respective repositories for the errors listed there. A summary view can also be found by clicking the "bell" icon in the top-right corner of https://zuul.opendev.org/t/openstack/status or similar pages). Many of these errors are new as of yesterday, due to lingering ansible_python_interpreter variable assignments left over from the Python 3.x default transition. Zuul no longer allows to override the value of this variable, but it can be safely removed since all cases seem to be setting it to the same as our current default. Roughly half the errors look like they've been there for longer, and seem to relate to project renames or job removals leaving stale references in other projects. In most cases you should simply be able to update the project names in these or remove the associated jobs as they're likely no longer used. Also be aware that many of these errors are on stable branches, so the cleanup will need backporting in such cases. Thanks for your prompt attention! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aschultz at redhat.com Fri Jun 25 16:39:25 2021 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 25 Jun 2021 10:39:25 -0600 Subject: [barbican][kolla][neutron][puppet-openstack][requirements] Please clean up Zuul configuration errors In-Reply-To: <20210625161130.ki3prg2bzspitrig@yuggoth.org> References: <20210624191918.uaud2ubpjp44heir@yuggoth.org> <20210625161130.ki3prg2bzspitrig@yuggoth.org> Message-ID: On Fri, Jun 25, 2021 at 10:19 AM Jeremy Stanley wrote: > For the teams tagged in the subject, please have a look at > https://zuul.opendev.org/t/openstack/config-errors and merge fixes > to your respective repositories for the errors listed there. A > summary view can also be found by clicking the "bell" icon in the > top-right corner of https://zuul.opendev.org/t/openstack/status or > similar pages). > > It looks like puppet-openstack-integration stable/ocata and stable/pike needs to be cleaned up/removed. I don't see it as deliverables in the releases repo so these might have been manually created before moving under the release umbrella. I believe we've EOL'd pike and ocata for the regular modules. What would be the best course of action to clean up these branches? Thanks, -Alex > Many of these errors are new as of yesterday, due to lingering > ansible_python_interpreter variable assignments left over from the > Python 3.x default transition. 
Zuul no longer allows to override the > value of this variable, but it can be safely removed since all cases > seem to be setting it to the same as our current default. > > Roughly half the errors look like they've been there for longer, and > seem to relate to project renames or job removals leaving stale > references in other projects. In most cases you should simply be > able to update the project names in these or remove the associated > jobs as they're likely no longer used. Also be aware that many of > these errors are on stable branches, so the cleanup will need > backporting in such cases. > > Thanks for your prompt attention! > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Jun 25 16:48:58 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 25 Jun 2021 09:48:58 -0700 Subject: =?UTF-8?Q?Re:_[barbican][kolla][neutron][puppet-openstack][requirements]?= =?UTF-8?Q?_Please_clean_up_Zuul_configuration_errors?= In-Reply-To: References: <20210624191918.uaud2ubpjp44heir@yuggoth.org> <20210625161130.ki3prg2bzspitrig@yuggoth.org> Message-ID: <208a3c19-c930-431a-b4f9-96aeeaf030d7@www.fastmail.com> On Fri, Jun 25, 2021, at 9:39 AM, Alex Schultz wrote: > > > On Fri, Jun 25, 2021 at 10:19 AM Jeremy Stanley wrote: > > For the teams tagged in the subject, please have a look at > > https://zuul.opendev.org/t/openstack/config-errors and merge fixes > > to your respective repositories for the errors listed there. A > > summary view can also be found by clicking the "bell" icon in the > > top-right corner of https://zuul.opendev.org/t/openstack/status or > > similar pages). > > > > It looks like puppet-openstack-integration stable/ocata and stable/pike > needs to be cleaned up/removed. I don't see it as deliverables in the > releases repo so these might have been manually created before moving > under the release umbrella. I believe we've EOL'd pike and ocata for > the regular modules. What would be the best course of action to clean > up these branches? For OpenStack release managed projects (I believe this is one) the OpenStack release teams has appropriate permissions in Gerrit as well as script tools to EOL branches properly. I think you can make a request to them and they can run through that for you. For projects that are not managed by the OpenStack release team we can help you update the Gerrit ACLs so that you have appropriate permissions for this type of cleanup. https://opendev.org/openstack/project-config/src/branch/master/gerrit/acls/openstack/meta-config.config#L2-L5 shows the set of permissions needed to abandon all open changes on a branch, tag the branch with an eol tag, then remove the branch. (The create permission isn't strictly necessary here). > > Thanks, > -Alex > > > Many of these errors are new as of yesterday, due to lingering > > ansible_python_interpreter variable assignments left over from the > > Python 3.x default transition. Zuul no longer allows to override the > > value of this variable, but it can be safely removed since all cases > > seem to be setting it to the same as our current default. > > > > Roughly half the errors look like they've been there for longer, and > > seem to relate to project renames or job removals leaving stale > > references in other projects. In most cases you should simply be > > able to update the project names in these or remove the associated > > jobs as they're likely no longer used. 
Also be aware that many of > > these errors are on stable branches, so the cleanup will need > > backporting in such cases. > > > > Thanks for your prompt attention! > > -- > > Jeremy Stanley From fungi at yuggoth.org Fri Jun 25 16:51:31 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 25 Jun 2021 16:51:31 +0000 Subject: [puppet-openstack][release] Please clean up Zuul configuration errors In-Reply-To: References: <20210624191918.uaud2ubpjp44heir@yuggoth.org> <20210625161130.ki3prg2bzspitrig@yuggoth.org> Message-ID: <20210625165131.f334r5wlimopxd34@yuggoth.org> On 2021-06-25 10:39:25 -0600 (-0600), Alex Schultz wrote: [...] > It looks like puppet-openstack-integration stable/ocata and > stable/pike needs to be cleaned up/removed. I don't see it as > deliverables in the releases repo so these might have been > manually created before moving under the release umbrella. I > believe we've EOL'd pike and ocata for the regular modules. What > would be the best course of action to clean up these branches? [...] The OpenStack Release Managers have branch deletion access via the Gerrit WebUI and REST API, and have been performing scripted batch deletions of EOL branches for a little while now. These may already be slated for removal, but it can't hurt to confirm. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From salman10sheikh at gmail.com Fri Jun 25 06:47:53 2021 From: salman10sheikh at gmail.com (Salman Sheikh) Date: Fri, 25 Jun 2021 12:17:53 +0530 Subject: How to get information about the available space in all cinder-volume Message-ID: Dear experts, I have made cinder-volume /dev/sdb on controller as well as compute node, how do i get the information of space available in cinder. -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Jun 25 17:19:00 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 25 Jun 2021 19:19:00 +0200 Subject: [kolla] URGENT to cores - DO NOT MERGE patches Message-ID: Please DO NOT MERGE any changes until issues mentioned in thread [1] are resolved. The reason is that these cause the crucial jobs to never run. I see we have merged one job without proper testing already. [2] You can see only tox testing ran there. [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023291.html [2] https://review.opendev.org/c/openstack/kolla-ansible/+/779204 -yoctozepto From gmann at ghanshyammann.com Fri Jun 25 20:14:34 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 25 Jun 2021 15:14:34 -0500 Subject: [tc][all][goal] Migrate RBAC Policy Format from JSON to YAML: COMPLETED Message-ID: <17a44cf4f71.127b099b389155.531362054920193110@ghanshyammann.com> Hello Everyone, I am happy to update that we have completed the community wide goal for "Migrate RBAC Policy Format from JSON to YAML". - https://governance.openstack.org/tc/goals/selected/wallaby/migrate-policy-format-from-json-to-yaml.html Thanks to all the projects/contributors involved in this work. 
Completion Report: =============== * Projects completed: 33 * Projects do not need any changes: 16 * project left: 0 * This is the complete work we did for this goal: ** Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) ** Tracking Etherpad: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml -gmann From gmann at ghanshyammann.com Fri Jun 25 22:41:29 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 25 Jun 2021 17:41:29 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 25th June, 21: Reading: 5 min Message-ID: <17a4555cfca.fd1421f090594.2325280403133109430@ghanshyammann.com> Hello Everyone, Here is last week's summary of the Technical Committee activities. 1. What we completed this week: ========================= * Deprecated panko and puppet-panko project[1] 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - https://meetings.opendev.org/meetings/tc/2021/tc.2021-06-24-15.00.log.html * We will have next week's meeting on July 1st, Thursday 15:00 UTC[2]. 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ TC is using the etherpad[3] for Xena cycle working item. We will be checking and updating the status biweekly in the same etherpad. Open Reviews ----------------- * Five open review for ongoing activities[4]. Migration from Freenode to OFTC ----------------------------------------- * Not much progress on project side wiki/doc page which is only thing left for this migration. * All the required work for this migration is tracked in this etherpad[5] 'Y' release naming process ------------------------------- * Y release naming election is closed now. As a last step, the foundation is doing trademark checks on elected ranking. Deprecate OpenStack-Ansible nspawn repositories ------------------------------------------------------------ * OpenStack-Ansible nspawn repositories is in process of retirement[6] Test support for TLS default: ---------------------------------- * Rico has started a separate email thread over testing with tls-proxy enabled[7], we encourage projects to participate in that testing and help to enable the tls-proxy in gate testing. Retiring governance's in-active repos ------------------------------------------- * The Technical Committee retiring the governance's in-active repos which are not required in current structure[8]. Adding Ceph Dashboard charm to OpenStack charms --------------------------------------------------------------- * Proposal to add Ceph Dashboard charm to OpenStack charms[9] Other Changes ------------------ * Charter change to handle the vacant seat situation[10] * Add DPL model also in 'Appointing leaders' section[11] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[12]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [13] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [14] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://review.opendev.org/c/openstack/governance/+/796408 [2] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [3] https://etherpad.opendev.org/p/tc-xena-tracker [4] https://review.opendev.org/q/project:openstack/governance+status:open [5] https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc [7] https://review.opendev.org/c/openstack/governance/+/797731 [7] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023000.html [8] https://etherpad.opendev.org/p/governance-repos-cleanup [9] https://review.opendev.org/c/openstack/governance/+/797913 [10] https://review.opendev.org/c/openstack/governance/+/797912 [11] https://review.opendev.org/c/openstack/governance/+/797985 [12] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [13] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [14] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From swogatpradhan22 at gmail.com Sat Jun 26 03:57:25 2021 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Sat, 26 Jun 2021 09:27:25 +0530 Subject: [Solved] [cinder] [replication] Error in cinder replication | Openstack Victoria In-Reply-To: <20210625130649.sw2thkxr3h7rghgx@localhost> References: <20210625130649.sw2thkxr3h7rghgx@localhost> Message-ID: Hi Gorka, Thank you for helping me find the issue, the ceph was set up for pool replication whereas it should be set up for image replication. After setting up image replication i am able to create volume without any issue and the replication functionality is working fine. With regards Swogat Pradhan On Fri, Jun 25, 2021 at 6:36 PM Gorka Eguileor wrote: > On 25/06, Swogat Pradhan wrote: > > Hi Rajat, > > I was using the 1st link as a reference. As per the 2nd link The rbd > > mirroring in ceph is configured properly, we've tested by creating an > image > > in ceph itself the mirroring is functioning properly. The problem we are > > Hi, > > If I remember correctly there are 2 types of mirroring, per-pool and > per-image. > > When you say it's mirroring properly, do you mean you have enabled pool > level replication? As in, any image created in that pool will be > automatically replicated? > > If that's the case, then you have to change it, because Cinder > replication works on a per image basis. > > > > facing is in cinder, apparently there is a bug > > https://bugs.launchpad.net/cinder/+bug/1898728 which states that cinder > > > > retype (to replication enabled volume type) does not enable replication > and > > when we are creating a volume directly in cinder using replication > enabled > > volume type, the volume is getting created in ceph but in cinder the > volume > > stays in error state (as a result we are unable to use it in instances > > primary disk) > > Those are 2 different cases, creating a new volume is different from > retyping. > > The bug you mention describes that the problem on retype happens because > the new type is a dictionary and not a Volume instance, so the > extra_specs attribute doesn't exist. > > In your case I would say the issue is related to how the pool is > configured in the Ceph cluster. > > Cheers, > Gorka. 
> > > > > On Fri, Jun 25, 2021 at 9:57 AM Rajat Dhasmana > wrote: > > > > > Hi, > > > > > > On Thu, Jun 24, 2021 at 6:11 PM Swogat Pradhan < > swogatpradhan22 at gmail.com> > > > wrote: > > > > > >> Hi, > > >> I am trying to configure cinder replication with ceph as backend and i > > >> followed this URL to configure the cinder replication: > > >> > https://netapp.io/2016/10/14/cinder-replication-netapp-perfect-cheesecake-recipe/ > > >> and > > >> > http://www.sebastien-han.fr/blog/2017/06/19/OpenStack-Cinder-configure-replication-api-with-ceph/ > > >> > > >> > > > The first link is specific to netapp driver so I'm not sure how it > helps > > > with ceph replication. For the second link, did you configure two ceph > > > clusters properly and set the needed values with replication_device > > > parameter? > > > > > > > > >> i created a Volume type - REPL, with replication enabled True and > > >> colume backend name parameter. > > >> > > >> But when i am trying to create a volume using the created volume type > the > > >> volume is getting created in ceph but in cinder the status is showing > > >> error. and when checked log getting the following error: > > >> > > >> 2021-06-24 17:59:39.556 28472 ERROR cinder.volume.drivers.rbd > > >> [req-6e6b901e-f2dd-43a4-a089-6c734b279e17 > d576181b6a444541b8ec7f37d750a4c1 > > >> a9ac2eba50d84d64ac44b179bbcc9183 - default default] Error creating rbd > > >> image volume-361f35c4-a589-4653-b404-fb93d2f598cf.: > > >> cinder.exception.ReplicationError: Volume > > >> 361f35c4-a589-4653-b404-fb93d2f598cf replication error: Failed to > enable > > >> image replication > > >> > > >> > > > Any operation could be failing in this method[1] on rbd side to raise > the > > > above error. I suggest it's better to check if you've setup replication > > > correctly. > > > > > > [1] > > > > https://github.com/openstack/cinder/blob/stable/victoria/cinder/volume/drivers/rbd.py#L802 > > > > > > > > >> I tried to create a volume using volume type- default and then > > >> later changed the volume type using command 'cinder retype' to the > created > > >> volume type and it changed without any issue. > > >> > > >> I am using openstack victoria . > > >> > > >> With regards, > > >> Swogat Pradhan > > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Sat Jun 26 04:10:57 2021 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Sat, 26 Jun 2021 09:40:57 +0530 Subject: [openstack] [DC-DC Setup] [Replication] Correct approach to a DC-DC setup for Openstack (victoria) Message-ID: Hi, I am trying to setup a DC-DC setup in openstack victoria, I have 2 numbers of all in one setup (controller, compute) with shared mysql and rabbitmq cluster and am using ceph image replication and configuring cinder replication on top of it. So when the 1st node goes down then i will perform a 'cinder failover-host node1' and then do a nova-evacuate to failover to the 2nd DC setup and once the node 1 comes up i will use cinder failback and live migration from node2 to node1. **i am facing some minor issues in this setup which I will ask once I know this is the right approach to the DC-DC concept. Can you please shed some light on this being the right approach or not and if not then how can i improve it? With regards Swogat pradhan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tkajinam at redhat.com Sat Jun 26 05:28:25 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Sat, 26 Jun 2021 14:28:25 +0900 Subject: [puppet-openstack][release] Please clean up Zuul configuration errors In-Reply-To: <20210625165131.f334r5wlimopxd34@yuggoth.org> References: <20210624191918.uaud2ubpjp44heir@yuggoth.org> <20210625161130.ki3prg2bzspitrig@yuggoth.org> <20210625165131.f334r5wlimopxd34@yuggoth.org> Message-ID: The EOL tags for stable/ocata and pike for puppet-openstack were created a while ago[1] and these two branches are no longer maintained. Because eol tag was already created, we can remove the stable/ocata branch and the stable/pike branch from git repo and gerrit. I'll ask the Release Management team to delete these two branches, then I expect the current errors will be solved. (Sorry but it seems I forgot to ask the deletion when I proposed EOL) [1] https://review.opendev.org/c/openstack/releases/+/726392/ On Sat, Jun 26, 2021 at 1:57 AM Jeremy Stanley wrote: > On 2021-06-25 10:39:25 -0600 (-0600), Alex Schultz wrote: > [...] > > It looks like puppet-openstack-integration stable/ocata and > > stable/pike needs to be cleaned up/removed. I don't see it as > > deliverables in the releases repo so these might have been > > manually created before moving under the release umbrella. I > > believe we've EOL'd pike and ocata for the regular modules. What > > would be the best course of action to clean up these branches? > [...] > > The OpenStack Release Managers have branch deletion access via the > Gerrit WebUI and REST API, and have been performing scripted batch > deletions of EOL branches for a little while now. These may already > be slated for removal, but it can't hurt to confirm. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pshreee at gmail.com Sat Jun 26 13:51:52 2021 From: pshreee at gmail.com (pradyumna borge) Date: Sat, 26 Jun 2021 19:21:52 +0530 Subject: [cinder] OpenStack lvm and Shared Storage Message-ID: Hi, In a mulit-node setup do we need to provide shared storage via Cinder when setting up the second compute node? In a typical mulit-node setup we will have: 1. First node as Controller node acting as a Compute node too. This will have Cinder *lvm*. 2. Second node as Compute node. 1. Will this node have any storage via lvm? If yes then how will the first compute node access storage on the second node? 2. Likewise, how can the VMs on this second node access storage on the first compute node? My other questions are: 1. So if I spawn a VM on the second Compute node, where will the disks of the VM reside? 2. Can I attach attach a disk on the first node to a VM on the second node? 3. Do I have to configure NFS storge as shared storage for Cinder? 4. Does Cinder take care of sharing the disks (I dont think so) 5. What are the steps to setup devstack for multi-node and multi storage (nfs and lvm) ~ shree From berndbausch at gmail.com Sun Jun 27 06:29:39 2021 From: berndbausch at gmail.com (Bernd Bausch) Date: Sun, 27 Jun 2021 15:29:39 +0900 Subject: [kolla-ansible] [swift] access denied to various APIs Message-ID: I set up a cloud with the Victoria version of Kolla-Ansible. I enabled Swift and configured Swift as backend for Glance and Cinder-Backup. The Glance backend works, the Cinder-Backup backend doesn't. Furthermore, as a non-admin user I can't do anything with Swift. These two headscratchers that may or may not be related. I seek for help how to troubleshoot this. 
*Headscratcher 1*: Swift doesn't accept unauthenticated /info API, although expose_info is explicitly set to "true". This is why Cinder-Backup fails; it performs this API when starting up: curl http://192.168.122.253:8080/info {"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}} When I add a valid token, this works. *Headscratcher 2*: Swift refuses access except for the admin role. I get this when I don't have the admin role: $ source demorc.sh $ swift stat Account HEAD failed: http://192.168.122.253:8080/v1/AUTH_06d5618863294187bf46c611c0ebb4a7 403 Forbidden Failed Transaction ID: tx7e8d958e3c7b410880000-0060d81a25 To add insult to injury, I don't see relevant messages in the centralized Elasticsearch log, and logging does not go to any log files. Any thoughts? Thanks, Bernd -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sun Jun 27 20:02:17 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 27 Jun 2021 22:02:17 +0200 Subject: [kolla] URGENT to cores - DO NOT MERGE patches In-Reply-To: References: Message-ID: Dears, I have proposed a series of patches [1] to kolla ansible that fix its jobs. They seem to be passing now. Please prioritise reviewing them and merging. A similar deed should be done for kolla. [1] https://review.opendev.org/q/topic:%22ci-emergency-fix-for-zuul-4-6%22 -yoctozepto On Fri, Jun 25, 2021 at 7:19 PM Radosław Piliszek wrote: > > Please DO NOT MERGE any changes until issues mentioned in thread [1] > are resolved. > The reason is that these cause the crucial jobs to never run. > I see we have merged one job without proper testing already. [2] > You can see only tox testing ran there. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023291.html > [2] https://review.opendev.org/c/openstack/kolla-ansible/+/779204 > > -yoctozepto From bkslash at poczta.onet.pl Mon Jun 28 07:23:58 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Mon, 28 Jun 2021 09:23:58 +0200 Subject: [neutron-vpnaas][kolla-ansible][victoria] Growing memory consumption with only one VPN connection Message-ID: <83F40238-8016-4C8F-A0E7-E6B387C89FB7@poczta.onet.pl> Hi, I have problem with neutron vpnaas - after enabling vpnaas plugin everything was ok at first. I was able to create VPN connection and communication is correct. But after a week of running vpnaas (only one VPN connection created/working!) I’ve noticed, that neutron-vpnaas takes more and more memory. I have 5 processes on each controller (there should be always five? Or it is changing dynamically?): 42435 1545384 0.6 8.6 5802516 5712412 ? S Jun15 110:50 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf 42435 1545389 0.6 8.5 5735832 5645856 ? S Jun15 112:16 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf 42435 1545378 0.5 8.5 5734192 5643620 ? S Jun15 108:09 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf 42435 1545372 0.5 8.5 5731128 5641436 ? 
S Jun15 109:07 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf 42435 1545369 0.6 8.4 5637084 5547392 ? S Jun15 114:21 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf now neutron_server takes over 27G of RAM on each controller: neutron_server running 10.2 27.2G After stopping all neutron containers and starting it again neutron takes a lot less memory: neutron_server running 0.5 583M but memory usage keeps growing (about 5-6 M every minute). What’s wrong? When I disable vpnaas globally there’s no problem with excessive memory usage, so it’s vpnaas for sure… Best regards Adam Tomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Mon Jun 28 07:46:55 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Mon, 28 Jun 2021 12:46:55 +0500 Subject: [wallaby][magnum] magnum-conductor service crash Message-ID: Hi, I am using openstack wallaby on ubuntu20.04. My service of magnum-conductor is getting crashed with below error in syslog of ubuntu. Jun 26 22:52:17 orchestration magnum-conductor[173714]: /usr/lib/python3/dist-packages/magnum/drivers/common/driver.py:38: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately. Jun 26 22:52:17 orchestration magnum-conductor[173714]: yield entry_point, entry_point.load(require=False) Jun 26 22:52:17 orchestration magnum-conductor[173714]: /usr/lib/python3/dist-packages/kubernetes/client/apis/init.py:10: DeprecationWarning: The package kubernetes.client.apis is renamed and deprecated, use kubernetes.client.api instead (please note that the trailing s was removed). Jun 26 22:52:17 orchestration magnum-conductor[173714]: warnings.warn( Jun 26 22:52:17 orchestration magnum-conductor[173714]: /usr/lib/python3/dist-packages/magnum/drivers/common/driver.py:38: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately. Jun 26 22:52:17 orchestration magnum-conductor[173714]: yield entry_point, entry_point.load(require=False) Jun 26 22:52:17 orchestration magnum-conductor[173714]: Traceback (most recent call last): Jun 26 22:52:17 orchestration magnum-conductor[173714]: File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 476, in fire_timers Jun 26 22:52:17 orchestration magnum-conductor[173714]: timer() Jun 26 22:52:17 orchestration magnum-conductor[173714]: File "/usr/lib/python3/dist-packages/eventlet/hubs/timer.py", line 59, in call Jun 26 22:52:17 orchestration magnum-conductor[173714]: cb(*args, **kw) Jun 26 22:52:17 orchestration magnum-conductor[173714]: File "/usr/lib/python3/dist-packages/eventlet/semaphore.py", line 152, in _do_acquire Jun 26 22:52:17 orchestration magnum-conductor[173714]: waiter.switch() Jun 26 22:52:17 orchestration magnum-conductor[173714]: greenlet.error: cannot switch to a different thread Any advice how to fix it ? -- Regards, Ammad -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Mon Jun 28 08:21:47 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 28 Jun 2021 09:21:47 +0100 Subject: [kolla-ansible] [swift] access denied to various APIs In-Reply-To: References: Message-ID: On Sun, 27 Jun 2021 at 07:32, Bernd Bausch wrote: > > I set up a cloud with the Victoria version of Kolla-Ansible. I enabled Swift and configured Swift as backend for Glance and Cinder-Backup. The Glance backend works, the Cinder-Backup backend doesn't. Furthermore, as a non-admin user I can't do anything with Swift. These two headscratchers that may or may not be related. I seek for help how to troubleshoot this. > > Headscratcher 1: Swift doesn't accept unauthenticated /info API, although expose_info is explicitly set to "true". This is why Cinder-Backup fails; it performs this API when starting up: > > curl http://192.168.122.253:8080/info > {"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}} > > When I add a valid token, this works. > > Headscratcher 2: Swift refuses access except for the admin role. I get this when I don't have the admin role: > > $ source demorc.sh > $ swift stat > Account HEAD failed: http://192.168.122.253:8080/v1/AUTH_06d5618863294187bf46c611c0ebb4a7 403 Forbidden > Failed Transaction ID: tx7e8d958e3c7b410880000-0060d81a25 > > To add insult to injury, I don't see relevant messages in the centralized Elasticsearch log, and logging does not go to any log files. Hi Bernd, we had some issues with a broken fluentd release breaking the logging pipeline. We have since pinned the version, and pulling a new fluentd image should resolve the issue. > > Any thoughts? > > Thanks, > > Bernd > > From doug at stackhpc.com Mon Jun 28 08:28:48 2021 From: doug at stackhpc.com (Doug) Date: Mon, 28 Jun 2021 09:28:48 +0100 Subject: [kolla][monasca] thresh keeps dying In-Reply-To: <16e169aa9a91a7a0bdeecb4d790aa15cdc6f7fd1.camel@netart.pl> References: <16e169aa9a91a7a0bdeecb4d790aa15cdc6f7fd1.camel@netart.pl> Message-ID: <44c8015d-5fd9-7180-1a83-3a747401bdb4@stackhpc.com> On 16/06/2021 09:15, Tomasz Rutkowski wrote: > can someone direct me where to search for the cause? Have you seen this? https://bugs.launchpad.net/kolla-ansible/+bug/1808805 The current behaviour without that fix on a multi-node cluster is for only one of the thresh containers to run in local mode and the others continually restart. > as I understand failing to submit the topology to storm cluster > shouldn't be final, and the process just dissapers there... 
> > + exec /opt/storm/bin/storm jar /monasca-thresh-source/monasca-thresh-stable-victoria/thresh/target/monasca-thresh-2.4.0-SNAPSHOT-shaded.jar -Djava.io.tmpdir=/var/lib/monasca-thresh/data monasca.thresh.ThresholdingEngine /etc/monasca/thresh-config.yml monasca-thresh > Running: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -client -Ddaemon.name= -Dstorm.options= -Dstorm.home=/opt/storm -Dstorm.log.dir=/var/log/kolla/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /opt/storm/*:/opt/storm/lib/*:/opt/storm/extlib/*:/monasca-thresh-source/monasca-thresh-stable-victoria/thresh/target/monasca-thresh-2.4.0-SNAPSHOT-shaded.jar:/opt/storm/conf:/opt/storm/bin -Dstorm.jar=/monasca-thresh-source/monasca-thresh-stable-victoria/thresh/target/monasca-thresh-2.4.0-SNAPSHOT-shaded.jar -Dstorm.dependency.jars= -Dstorm.dependency.artifacts={} -Djava.io.tmpdir=/var/lib/monasca-thresh/data monasca.thresh.ThresholdingEngine /etc/monasca/thresh-config.yml monasca-thresh > 687 [main] INFO m.t.ThresholdingEngine - -------- Version Information -------- > 692 [main] INFO m.t.ThresholdingEngine - monasca-thresh-2.4.0-SNAPSHOT-2021-06-06T08:32:08-${buildNumber} > 693 [main] INFO m.t.ThresholdingEngine - Instantiating ThresholdingEngine with config file: /etc/monasca/thresh-config.yml, topology: monasca-thresh > 1000 [main] INFO o.h.v.i.u.Version - HV000001: Hibernate Validator 5.2.1.Final > 1197 [main] INFO m.t.ThresholdingEngine - local set to false > 1312 [main] INFO m.t.i.t.MetricSpout - Created > 1340 [main] INFO m.t.i.t.EventSpout - EventSpout created > 1516 [main] WARN o.a.s.u.Utils - STORM-VERSION new 1.2.2 old null > 1516 [main] INFO m.t.ThresholdingEngine - submitting topology monasca-thresh to non-local storm cluster > 1549 [main] INFO o.a.s.StormSubmitter - Generated ZooKeeper secret payload for MD5-digest: -7012431400424907995:-9108807134284416946 > 1728 [main] INFO o.a.s.u.NimbusClient - Found leader nimbus : storm1:6627 > 1751 [main] INFO o.a.s.s.a.AuthUtils - Got AutoCreds [] > 1756 [main] INFO o.a.s.u.NimbusClient - Found leader nimbus : storm1:6627 > Exception in thread "main" java.lang.RuntimeException: Topology with name `monasca-thresh` already exists on cluster > at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:237) > at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:387) > at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:159) > at monasca.thresh.ThresholdingEngine.run(ThresholdingEngine.java:111) > at monasca.thresh.ThresholdingEngine.main(ThresholdingEngine.java:82) > > > regards From tomasz.rutkowski at netart.pl Mon Jun 28 08:54:45 2021 From: tomasz.rutkowski at netart.pl (Tomasz Rutkowski) Date: Mon, 28 Jun 2021 10:54:45 +0200 Subject: [kolla][monasca] thresh keeps dying In-Reply-To: <44c8015d-5fd9-7180-1a83-3a747401bdb4@stackhpc.com> References: <16e169aa9a91a7a0bdeecb4d790aa15cdc6f7fd1.camel@netart.pl> <44c8015d-5fd9-7180-1a83-3a747401bdb4@stackhpc.com> Message-ID: W dniu pon, 28.06.2021 o godzinie 09∶28 +0100, użytkownik Doug napisał: > > On 16/06/2021 09:15, Tomasz Rutkowski wrote: > > can someone direct me where to search for the cause? > > Have you seen this? > > https://bugs.launchpad.net/kolla-ansible/+bug/1808805 > > continually restart. > > thanks, I haven't found this, however I managed to overcome the problem with two changes (one mentioned there): 1. delete "local" from the end of the command (connects to storm) 2. 
change the remaining "monasca-thresh" to "thresh-cluster" (without that the topology name is present but with empty config) then the containers die as before, however the topology is put in storm cluster and everything (alarms so far ;)) works as expected regards -- Tomasz Rutkowski Dział Rozwoju Systemów -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3728 bytes Desc: not available URL: From tkajinam at redhat.com Mon Jun 28 11:15:45 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 28 Jun 2021 20:15:45 +0900 Subject: [puppet-openstack][release] Please clean up Zuul configuration errors In-Reply-To: References: <20210624191918.uaud2ubpjp44heir@yuggoth.org> <20210625161130.ki3prg2bzspitrig@yuggoth.org> <20210625165131.f334r5wlimopxd34@yuggoth.org> Message-ID: Regarding the error in puppet repos, it turned out that one repo(puppet-openstack-integration) has no deliverables for Pike and Ocata and because of that its stable/ocata and stable/pike have not yet been EOLed. (These two stable branches were EOLed in the other puppet repos) I've raised this in #openstack-release channel and will ask some help from the release team to move these two branches in p-o-i repo to EOL. On Sat, Jun 26, 2021 at 2:28 PM Takashi Kajinami wrote: > The EOL tags for stable/ocata and pike for puppet-openstack were created a > while ago[1] > and these two branches are no longer maintained. > Because eol tag was already created, we can remove the stable/ocata branch > and the stable/pike branch from git repo and gerrit. > I'll ask the Release Management team to delete these two branches, then I > expect > the current errors will be solved. > (Sorry but it seems I forgot to ask the deletion when I proposed EOL) > > [1] https://review.opendev.org/c/openstack/releases/+/726392/ > > > On Sat, Jun 26, 2021 at 1:57 AM Jeremy Stanley wrote: > >> On 2021-06-25 10:39:25 -0600 (-0600), Alex Schultz wrote: >> [...] >> > It looks like puppet-openstack-integration stable/ocata and >> > stable/pike needs to be cleaned up/removed. I don't see it as >> > deliverables in the releases repo so these might have been >> > manually created before moving under the release umbrella. I >> > believe we've EOL'd pike and ocata for the regular modules. What >> > would be the best course of action to clean up these branches? >> [...] >> >> The OpenStack Release Managers have branch deletion access via the >> Gerrit WebUI and REST API, and have been performing scripted batch >> deletions of EOL branches for a little while now. These may already >> be slated for removal, but it can't hurt to confirm. >> -- >> Jeremy Stanley >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Mon Jun 28 11:41:44 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 28 Jun 2021 20:41:44 +0900 Subject: [barbican][kolla][neutron][puppet-openstack][requirements] Please clean up Zuul configuration errors In-Reply-To: <20210625161130.ki3prg2bzspitrig@yuggoth.org> References: <20210624191918.uaud2ubpjp44heir@yuggoth.org> <20210625161130.ki3prg2bzspitrig@yuggoth.org> Message-ID: Regarding neutron related errors, I see two patterns. The one is "Unknown projects: openstack/networking-l2gw". This is caused by the official retirement of networking-l2gw and openstack/networking-l2gw was dropped from zuul/main.yaml. We need to replace openstack/networking-l2gw with x/networking-l2gw. 
This happens in networking-odl, networking-midonet and neutron-fwaas. The other is "Job neutron-fwaas-networking-midonet-cross-py35 not defined". This happens in neutron-fwaas. I will take care of them. Thanks, Akihiro Motoki (amotoki) On Sat, Jun 26, 2021 at 1:14 AM Jeremy Stanley wrote: > > For the teams tagged in the subject, please have a look at > https://zuul.opendev.org/t/openstack/config-errors and merge fixes > to your respective repositories for the errors listed there. A > summary view can also be found by clicking the "bell" icon in the > top-right corner of https://zuul.opendev.org/t/openstack/status or > similar pages). > > Many of these errors are new as of yesterday, due to lingering > ansible_python_interpreter variable assignments left over from the > Python 3.x default transition. Zuul no longer allows to override the > value of this variable, but it can be safely removed since all cases > seem to be setting it to the same as our current default. > > Roughly half the errors look like they've been there for longer, and > seem to relate to project renames or job removals leaving stale > references in other projects. In most cases you should simply be > able to update the project names in these or remove the associated > jobs as they're likely no longer used. Also be aware that many of > these errors are on stable branches, so the cleanup will need > backporting in such cases. > > Thanks for your prompt attention! > -- > Jeremy Stanley From skaplons at redhat.com Mon Jun 28 12:03:01 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 28 Jun 2021 14:03:01 +0200 Subject: [barbican][kolla][neutron][puppet-openstack][requirements] Please clean up Zuul configuration errors In-Reply-To: References: <20210624191918.uaud2ubpjp44heir@yuggoth.org> <20210625161130.ki3prg2bzspitrig@yuggoth.org> Message-ID: <3147679.WaQ0PvHZp8@p1> Hi, On poniedziałek, 28 czerwca 2021 13:41:44 CEST Akihiro Motoki wrote: > Regarding neutron related errors, I see two patterns. > > The one is "Unknown projects: openstack/networking-l2gw". > This is caused by the official retirement of networking-l2gw and > openstack/networking-l2gw was dropped from zuul/main.yaml. > We need to replace openstack/networking-l2gw with x/networking-l2gw. > This happens in networking-odl, networking-midonet and neutron-fwaas. > > The other is "Job neutron-fwaas-networking-midonet-cross-py35 not defined". > This happens in neutron-fwaas. > > I will take care of them. Thx a lot Akihiro for taking care of it. Please ping me when You will have something to review :) > > Thanks, > Akihiro Motoki (amotoki) > > On Sat, Jun 26, 2021 at 1:14 AM Jeremy Stanley wrote: > > For the teams tagged in the subject, please have a look at > > https://zuul.opendev.org/t/openstack/config-errors and merge fixes > > to your respective repositories for the errors listed there. A > > summary view can also be found by clicking the "bell" icon in the > > top-right corner of https://zuul.opendev.org/t/openstack/status or > > similar pages). > > > > Many of these errors are new as of yesterday, due to lingering > > ansible_python_interpreter variable assignments left over from the > > Python 3.x default transition. Zuul no longer allows to override the > > value of this variable, but it can be safely removed since all cases > > seem to be setting it to the same as our current default. 
> > > > Roughly half the errors look like they've been there for longer, and > > seem to relate to project renames or job removals leaving stale > > references in other projects. In most cases you should simply be > > able to update the project names in these or remove the associated > > jobs as they're likely no longer used. Also be aware that many of > > these errors are on stable branches, so the cleanup will need > > backporting in such cases. > > > > Thanks for your prompt attention! > > -- > > Jeremy Stanley -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From tkajinam at redhat.com Mon Jun 28 13:25:34 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 28 Jun 2021 22:25:34 +0900 Subject: [barbican][kolla][neutron][puppet-openstack][requirements] Please clean up Zuul configuration errors In-Reply-To: <20210625161130.ki3prg2bzspitrig@yuggoth.org> References: <20210624191918.uaud2ubpjp44heir@yuggoth.org> <20210625161130.ki3prg2bzspitrig@yuggoth.org> Message-ID: Regarding Barbican, I have submitted a series of patches to remove reference to the octavia-v1-dsvm-scenario job. https://review.opendev.org/q/topic:%22octavia-v1%22+(status:open%20OR%20status:merged) It seems the job no longer exists since Octavia stable/stein was EOLed. On Sat, Jun 26, 2021 at 1:18 AM Jeremy Stanley wrote: > For the teams tagged in the subject, please have a look at > https://zuul.opendev.org/t/openstack/config-errors and merge fixes > to your respective repositories for the errors listed there. A > summary view can also be found by clicking the "bell" icon in the > top-right corner of https://zuul.opendev.org/t/openstack/status or > similar pages). > > Many of these errors are new as of yesterday, due to lingering > ansible_python_interpreter variable assignments left over from the > Python 3.x default transition. Zuul no longer allows to override the > value of this variable, but it can be safely removed since all cases > seem to be setting it to the same as our current default. > > Roughly half the errors look like they've been there for longer, and > seem to relate to project renames or job removals leaving stale > references in other projects. In most cases you should simply be > able to update the project names in these or remove the associated > jobs as they're likely no longer used. Also be aware that many of > these errors are on stable branches, so the cleanup will need > backporting in such cases. > > Thanks for your prompt attention! > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Jun 28 13:31:12 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 28 Jun 2021 08:31:12 -0500 Subject: How to get information about the available space in all cinder-volume In-Reply-To: References: Message-ID: <20210628133112.GA3190345@sm-workstation> On Fri, Jun 25, 2021 at 12:17:53PM +0530, Salman Sheikh wrote: > Dear experts, > > I have made cinder-volume /dev/sdb on controller as well as compute node, > how do i get the information of space available in cinder. Hi Salman, Cinder is just the management/control plane for the storage, so it has no visibility into actual space consumed on the volume. It can only report the configured size. 
In order to find out the space available, you would need to run something like `df -h` on the node to see its usage stats. Sean From iurygregory at gmail.com Mon Jun 28 15:57:23 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 28 Jun 2021 17:57:23 +0200 Subject: [Ironic] No upstream meeting on July 5th Message-ID: Hello Ironicers! During our weekly meeting today we agreed that we will skip the next meeting due to holidays in the US and other countries! -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Mon Jun 28 16:30:11 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 28 Jun 2021 18:30:11 +0200 Subject: [cinder] OpenStack lvm and Shared Storage In-Reply-To: References: Message-ID: <20210628163011.dmbduhyulagw6whj@localhost> On 26/06, pradyumna borge wrote: > Hi, > > In a mulit-node setup do we need to provide shared storage via Cinder > when setting up the second compute node? > > In a typical mulit-node setup we will have: > 1. First node as Controller node acting as a Compute node too. This > will have Cinder *lvm*. > 2. Second node as Compute node. > 1. Will this node have any storage via lvm? If yes then how will > the first compute node access storage on the second node? > 2. Likewise, how can the VMs on this second node access storage on > the first compute node? > > My other questions are: > 1. So if I spawn a VM on the second Compute node, where will the disks > of the VM reside? > 2. Can I attach attach a disk on the first node to a VM on the second > node? > 3. Do I have to configure NFS storge as shared storage for Cinder? > 4. Does Cinder take care of sharing the disks (I dont think so) > 5. What are the steps to setup devstack for multi-node and multi > storage (nfs and lvm) > > ~ shree > Hi, I believe there may be some misunderstandings about how OpenStack operates. Some clarifications: Nova: - Can run VMs without Cinder volumes, using only ephemeral volumes that are stored on the compute's local disk. - Can run with ephemeral local boot volumes and attach Cinder external volumes. - Can run with Cinder boot volumes. Cinder: Cinder-volume usually connects to an external storage solution that is not running on the controller node itself, except when LVM is used. In that case the volume is local to the node where cinder-volume is running, and the host exports the volume via iSCSI so that any compute node, or the cinder-backup service running on any controller node, can connect. But since the volume data is only stored on that specific node, it means that when the node is offline no Cinder volume can be used, so it's usually only used for POCs. There are multiple ways to deploy devstack with multiple backends, but one way to do it is using the CINDER_ENABLED_BACKENDS variable in your local.conf file. I never use NFS, but for example, to have 2 LVM backends: CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1,lvm:lvmdriver-2 To enable the Ceph plugin and have both lvm and ceph: enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph CINDER_ENABLED_BACKENDS=lvm,ceph Cheers, Gorka.
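For anyone following along who wants a concrete starting point, a minimal DevStack local.conf using the options Gorka mentions might look like the sketch below. The password values are placeholders, and the commented-out lines show the LVM-plus-Ceph variant; the backend settings themselves are taken straight from the examples above.

[[local|localrc]]
# Placeholder credentials
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Two LVM backends, as in the first example above
CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1,lvm:lvmdriver-2
# Or, to combine LVM and Ceph, enable the plugin and list both backends:
# enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph
# CINDER_ENABLED_BACKENDS=lvm,ceph

With that in place, stack.sh should bring up one cinder-volume backend per entry listed in CINDER_ENABLED_BACKENDS.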
From elod.illes at est.tech Mon Jun 28 18:03:48 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Mon, 28 Jun 2021 20:03:48 +0200 Subject: [release][stable] stale old branches probably should be EOL'd Message-ID: Hi Release Team, now that stable/ocata is mostly tagged with ocata-eol tag it turned out that there are repositories that did not have any deliverables in ocata (not even a yaml under deliverables/ocata in the release repository). In fact, I've checked and there are several examples from the past where repositories have old stable branches open [1] that should be $series-eol tagged and deleted in my understanding. I think these old stable branches were skipped accidentally during the old EOL processing, so I would suggest to tag these with $series-eol and then delete the old branches. I don't see any reason these branches should be kept open. But let me know if I miss anything! If we stick to the tagging + deletion, then the next question is how to achieve this. There are a couple of options: 1. use the existing tools: such as creating a yaml file under deliverables/$series/ and add the $series-eol tag for the given repositories (I understand that these are not 'real' deliverables, but does that cause any issue for us?) 2. implement some new mechanism, similar to option 1, but clearly indicate that the tagging does not create any deliverables 3. manual tagging + deletion I think the 1st option is the easiest and since we already have the whole process there, we can simply use the existing tool. So what do you think? - Is that OK to tag these open old stable branches with $series-eol tag and then delete them? - If yes, which option from the above list is acceptable, or what else can we do? Thanks for the answers/ideas in advance, Előd [1] http://paste.openstack.org/show/806995/ From fungi at yuggoth.org Mon Jun 28 18:12:25 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 28 Jun 2021 18:12:25 +0000 Subject: [release][stable] stale old branches probably should be EOL'd In-Reply-To: References: Message-ID: <20210628181225.hdz7ai2vgxgjsdl7@yuggoth.org> On 2021-06-28 20:03:48 +0200 (+0200), Előd Illés wrote: [...] > If we stick to the tagging + deletion, then the next question is how to > achieve this. There are a couple of options: > > 1. use the existing tools: such as creating a yaml file under > deliverables/$series/ and add the $series-eol tag for the given repositories > (I understand that these are not 'real' deliverables, but does that cause > any issue for us?) > 2. implement some new mechanism, similar to option 1, but clearly indicate > that the tagging does not create any deliverables > 3. manual tagging + deletion > > I think the 1st option is the easiest and since we already have the whole > process there, we can simply use the existing tool. > > So what do you think? > - Is that OK to tag these open old stable branches with $series-eol tag and > then delete them? > - If yes, which option from the above list is acceptable, or what else can > we do? [...] My two cents, I think it's okay to tag those old branches and delete them, and I support option 1 as it helps create a bit of a breadcrumb trail for future reflection. As for how they got overlooked, I expect you're right. In the past, well before we had any real release automation and tracking, projects asked the Infra team to delete their old branches, and quite often did not provide a complete list. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Mon Jun 28 18:14:40 2021 From: smooney at redhat.com (Sean Mooney) Date: Mon, 28 Jun 2021 19:14:40 +0100 Subject: [cinder] OpenStack lvm and Shared Storage In-Reply-To: <20210628163011.dmbduhyulagw6whj@localhost> References: <20210628163011.dmbduhyulagw6whj@localhost> Message-ID: <784e56aa67975b4e674cbb96ebc88e784c8c97fc.camel@redhat.com> On Mon, 2021-06-28 at 18:30 +0200, Gorka Eguileor wrote: > On 26/06, pradyumna borge wrote: > > Hi, > > > > In a mulit-node setup do we need to provide shared storage via Cinder > > when setting up the second compute node? > > > > In a typical mulit-node setup we will have: > > 1. First node as Controller node acting as a Compute node too. This > > will have Cinder *lvm*. > > 2. Second node as Compute node. > > 1. Will this node have any storage via lvm? If yes then how will > > the first compute node access storage on the second node? > > 2. Likewise, how can the VMs on this second node access storage on > > the first compute node? > > > > My other questions are: > > 1. So if I spawn a VM on the second Compute node, where will the disks > > of the VM reside? > > 2. Can I attach attach a disk on the first node to a VM on the second > > node? > > 3. Do I have to configure NFS storge as shared storage for Cinder? > > 4. Does Cinder take care of sharing the disks (I dont think so) > > 5. What are the steps to setup devstack for multi-node and multi > > storage (nfs and lvm) > > > > ~ shree > > > > Hi, > > I believe there may be some misunderstandings on how OpenStack operates. > > Some clarifications: > > Nova: > > - Can run VMs without Cinder volumes, using only ephemeral volumes that > are stored in compute's local disk. > - Can run with ephemeral local boot volumes and attach Cinder external > volumes. > - Can run with Cinder boot volumes. > > Cinder: > > Cinder-volume usually connects to an external storage solution that is > not running on the controller node itself, except when LVM is used. In > that case the volume is local no the node where cinder-volume is running > and the host exports the volume via iSCSI so any compute node can > connect or the cinder-backup service running on any controller node can > connect. > > But since the volume data is only stored in that specific node, it means > that when the node is offline no cinder volume can be used, so it's > usually only used for POCs. Well, POCs or small-scale deployments like a 3-5 node cluster that might be deployed at the edge or in a lab. What I have seen people suggest in the past was to use a DRBD volume https://linbit.com/drbd/ for the LVM PV, or similarly use a SAN/disk shelf to provide the storage for LVM with redundant connections to multiple hosts and use Pacemaker to manage the cinder-volume process in active/backup, but in practice you should really only use it if you are OK with only one copy of your data. You can hack around its lack of HA support, but it's not the right direction. Cinder does not explicitly guarantee that if the host running cinder-volume goes down you will still be able to access your data, but in practice you often can. As such it is often implicitly assumed that cinder storage is somehow redundant. For example,
with Ceph, if you have only one instance of cinder-volume and that host goes down, the VMs will still be able to connect to the Ceph cluster, assuming it was also not on that host. It is only the manageability that would be impacted, and that is mitigated by the fact that you can have multiple instances of cinder-volume running, managing the same Ceph cluster. Trust Gorka when they advise against the use of LVM in production. While possible, it won't fulfil the expectation of consumers of Cinder who assume their data is safe. > > There are multiple ways to deploying devstack with multiple backends, > but one way to do it is using the CINDER_ENABLED_BACKENDS variable in > your local.conf file. > > I never use NFS, but for example, to have 2 LVM backends: > > CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1,lvm:lvmdriver-2 > > To enable the Ceph plugin and have lvm and ceph with: > > enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph > > CINDER_ENABLED_BACKENDS=lvm,ceph > > Cheers, > Gorka. > > From kennelson11 at gmail.com Mon Jun 28 21:25:12 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 28 Jun 2021 14:25:12 -0700 Subject: [tc][all] What should we have as community goals for Y series?( Starting community-wide goals ideas) In-Reply-To: References: Message-ID: Hello :) Wanted to bring this back to the top of people's inboxes as we are starting to approach Milestone 2 when we want to actually define the goals selected for the next release (Y release). It looks like we only have one goal suggested so far (which is fine; we can only have one goal): Support TLS default in test jobs[1]. That said, we have no goal champion listed for it. We do still have a pretty hefty backlog of alternatives[2] if anyone wanted to step up as champion for one of those at this point. -Kendall (diablo_rojo) [1] https://etherpad.opendev.org/p/y-series-goals [2] https://etherpad.opendev.org/p/community-goals On Tue, May 4, 2021 at 6:10 AM Rico Lin wrote: > Dear all, > > We're now in R-22 week for Xena cycle which sounds like a perfect time to > start calling for community-wide goals ideas for Y-series. According to the > goal process schedule [1], we need to find potential goals, and champions > before Xena milestone-1 and provide proper discussion in the > community right after that to give a clear view and detail on each goal. > And if we would like to keep up with the schedule, we should start > right away to identify potential goals. > > So please help to provide ideas for Y series community-wide goals in [2]. > > Community-wide goals are important in terms of solving and improving a > technical > area across OpenStack as a whole. It has a lot more benefits to be > considered from > users as well from a developer's perspective. See [3] for more details > about > community-wide goals and processes. > > Also, you can refer to the backlogs of community-wide goals from this[4] > and victoria > cycle goals[5] (also ussuri[6]). We took cool-down cycle goal step for > Xena cycle [7], so no selected goals for Xena.
> > [1] https://governance.openstack.org/tc/goals/#goal-selection-schedule > [2] https://etherpad.opendev.org/p/y-series-goals > [3] https://governance.openstack.org/tc/goals/index.html > [4] https://etherpad.openstack.org/p/community-goals > [5] https://etherpad.openstack.org/p/YVR-v-series-goals > [6] https://etherpad.openstack.org/p/PVG-u-series-goals > [7] https://review.opendev.org/c/openstack/governance/+/770616 > > *Rico Lin* > OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, > Senior Software Engineer at EasyStack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Jun 28 21:34:59 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 28 Jun 2021 21:34:59 +0000 Subject: [tc][all] What should we have as community goals for Y series?( Starting community-wide goals ideas) In-Reply-To: References: Message-ID: <20210628213459.zoqszfoot7xhxij3@yuggoth.org> On 2021-06-28 14:25:12 -0700 (-0700), Kendall Nelson wrote: [...] > the next release (Y release). You gave me a scare, but... *phew* that's the release after next. We haven't released Xena yet! > It looks like we only have one goal suggested so far (which is > fine; we can only have one goal) [...] Can we only have one goal? Or can we have only one goal? I assume you mean the latter, but they're definitely different things. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From juliaashleykreger at gmail.com Mon Jun 28 22:30:55 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 28 Jun 2021 15:30:55 -0700 Subject: [tc][all] What should we have as community goals for Y series?( Starting community-wide goals ideas) In-Reply-To: <20210628213459.zoqszfoot7xhxij3@yuggoth.org> References: <20210628213459.zoqszfoot7xhxij3@yuggoth.org> Message-ID: On Mon, Jun 28, 2021 at 2:40 PM Jeremy Stanley wrote: > > On 2021-06-28 14:25:12 -0700 (-0700), Kendall Nelson wrote: > [...] > > the next release (Y release). > > You gave me a scare, but... *phew* that's the release after next. We > haven't released Xena yet! > > > It looks like we only have one goal suggested so far (which is > > fine; we can only have one goal) > [...] > > Can we only have one goal? Or can we have only one goal? I assume > you mean the latter, but they're definitely different things. > -- > Jeremy Stanley I have a crazy idea! What if instead of a common singular goal to uniformly raise the bar across projects, we have each project work on their *most* painful operator perceived performance or experience issue and attempt to try and eliminate the issue or perception? And where cross-project integrations are involved, other projects could put review priority on helping get fixes or improvements pushed forward to address such operator experiences. Such an effort would take a dramatically different appearance by project, and would really require each project to identify a known issue, and then to report it back along with the gain they yielded from the effort. Of course, to get there, projects would also have to ensure that they could somehow measure the impact of their changes to remedy such an issue. 
From gmann at ghanshyammann.com Mon Jun 28 23:45:20 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 28 Jun 2021 18:45:20 -0500 Subject: [tc][all] What should we have as community goals for Y series?( Starting community-wide goals ideas) In-Reply-To: References: Message-ID: <17a55035802.bc20f8fc184709.3022569586572022930@ghanshyammann.com> ---- On Mon, 28 Jun 2021 16:25:12 -0500 Kendall Nelson wrote ---- > Hello :) > > Wanted to bring this back to the top of people's inboxes as we are starting to approach Milestone 2 when we want to actually define the goals selected for the next release (Y release). > It looks like we only have one goal suggested so far (which is fine; we can only have one goal): Support TLS default in test jobs[1]. That said, we have no goal champion listed for it. > > We do still have a pretty hefty backlog of alternatives[2] if anyone wanted to step up as champion for one of those at this point. I have added the secure RBAC as one of the candidate in etherpad. I will compose the goal this or next week early. -gmann > -Kendall (diablo_rojo)[1] https://etherpad.opendev.org/p/y-series-goals[2] https://etherpad.opendev.org/p/community-goals > > > On Tue, May 4, 2021 at 6:10 AM Rico Lin wrote: > Dear all, > We're now in R-22 week for Xena cycle which sounds like a perfect time to start calling for community-wide goals ideas for Y-series. According to the goal process schedule [1], we need to find potential goals, and champions before Xena milestone-1 and provide proper discussion in the community right after that to give a clear view and detail on each goal. And if we would like to keep up with the schedule, we should start right away to identify potential goals. > So please help to provide ideas for Y series community-wide goals in [2]. > Community-wide goals are important in terms of solving and improving a technical > area across OpenStack as a whole. It has a lot more benefits to be considered from > users as well from a developer's perspective. See [3] for more details about > community-wide goals and processes. > > Also, you can refer to the backlogs of community-wide goals from this[4] and victoria > cycle goals[5] (also ussuri[6]). We took cool-down cycle goal step for Xena cycle [7], so no selected goals for Xena. > > [1] https://governance.openstack.org/tc/goals/#goal-selection-schedule > [2] https://etherpad.opendev.org/p/y-series-goals > [3] https://governance.openstack.org/tc/goals/index.html > [4] https://etherpad.openstack.org/p/community-goals > [5] https://etherpad.openstack.org/p/YVR-v-series-goals[6] https://etherpad.openstack.org/p/PVG-u-series-goals[7] https://review.opendev.org/c/openstack/governance/+/770616 > Rico LinOIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack > From gmann at ghanshyammann.com Tue Jun 29 02:06:52 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 28 Jun 2021 21:06:52 -0500 Subject: [all][tc] Technical Committee next weekly meeting on July 1st at 1500 UTC Message-ID: <17a5584edbc.e62ace7a185349.751069679551853439@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for July 1st at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, June 30th, at 2100 UTC. 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From gagehugo at gmail.com Tue Jun 29 05:31:34 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 29 Jun 2021 00:31:34 -0500 Subject: [openstack-helm] No Meeting Tomorrow Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting tomorrow, the meeting is cancelled. Our next meeting will be July 06th. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Tue Jun 29 06:33:19 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 29 Jun 2021 12:03:19 +0530 Subject: [EXTERNAL] Error creating VMs In-Reply-To: <598a0aa92ca243e4bb40f1576ade3788@ncwmexgp009.CORP.CHARTERCOM.com> References: <7ac3946329884646994628eb9d519b04@ncwmexgp009.CORP.CHARTERCOM.com> <3599ddf2a37d4069b6fc1e3bb9d9efa1@ncwmexgp009.CORP.CHARTERCOM.com> <51ccfe2090bb444380dd3c09ed6e0de5@ncwmexgp009.CORP.CHARTERCOM.com> <598a0aa92ca243e4bb40f1576ade3788@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: VM provisioning is resolved. But still having an issue with openstack networking. "Unable to create the network. No tenant network is available for allocation." http://paste.openstack.org/show/807014/ On Thu, Jun 17, 2021 at 11:14 PM Braden, Albert wrote: > I see “no valid host” at 10:48:39 in the conductor log: > > > > 2021-06-17T10:48:39.65107847Z stdout F nova.exception.NoValidHost: No > valid host was found. > > > > In the scheduler log at 10:37:43 we see the scheduler starting and the RMQ > error, followed by the scheduling failure at 10:48:39. It looks like the > scheduler can’t connect to RMQ. > > > > 2021-06-17T10:37:43.038362995Z stdout F 2021-06-17 10:37:43.038 1 INFO > nova.service [-] Starting scheduler node (version 21.2.1) > > 2021-06-17T10:37:43.08082925Z stdout F 2021-06-17 10:37:43.079 1 ERROR > oslo.messaging._drivers.impl_rabbit > [req-775b2b25-b9ef-4642-a6fd-7574eab7cc37 - - - - -] Connection failed: > failed to resolve broker hostname (retrying in 0 seconds): OSError: failed > to resolve broker hostname > > 2021-06-17T10:48:39.197318418Z stdout F 2021-06-17 10:48:39.196 1 INFO > nova.scheduler.manager [req-b30bb3d6-0ab1-4dbb-aed9-1d9cece3eac7 > d9f7048c1cd947cfa8ecef128a6cee89 e8813293073545f99658adbec2f80c1d - default > default] Got no allocation candidates from the Placement API. This could be > due to insufficient resources or a temporary occurrence as compute nodes > start up. > > > > *From:* open infra > *Sent:* Thursday, June 17, 2021 12:38 PM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Another instance created at 2021-06-17T16:10:06Z, and it also failed. > > I can't see rabbitmq errors around that timestamp. > > > > scheduler log http://paste.openstack.org/show/806737/ > > > conducter log http://paste.openstack.org/show/806738/ > > > > > At 2021-06-17 15:52:38.810, system was able to connect to AMQP > > http://paste.openstack.org/show/806739/ > > > > > > > > > controller-0:/var/log/pods$ sudo rabbitmqctl list_queues > Password: > Listing queues ... 
> sysinv.ceph_manager.192.168.204.1 0 > sysinv.ceph_manager 0 > barbican.workers 0 > sysinv.fpga_agent_manager.controller-0 0 > barbican.workers.barbican.queue 0 > sysinv.conductor_manager 0 > sysinv.agent_manager_fanout_fed76e414eb04da084ab35a1c27e1bf1 0 > sysinv.agent_manager 0 > sysinv.ceph_manager_fanout_d585fb522f46431da60741573a7f8575 0 > notifications.info 0 > sysinv.conductor_manager_fanout_cba8926fa47f4780a9e17f3d9b889500 0 > sysinv.fpga_agent_manager 0 > sysinv-keystone-listener-workers 0 > sysinv.fpga_agent_manager_fanout_f00a01bfd3f54a64860f8d7454e9a78e 0 > sysinv.agent_manager.controller-0 0 > barbican.workers_fanout_2c97c319faa943e88eaed4c101e530c7 0 > sysinv.conductor_manager.controller-0 0 > > > > On Thu, Jun 17, 2021 at 7:51 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > It looks like RMQ is up but services can’t connect to it. What do you see > in the RMQ web interface? > > > > This might be a clue: > > > > ERROR oslo.messaging._drivers.impl_rabbit > [req-b30bb3d6-0ab1-4dbb-aed9-1d9cece3eac7 d9f7048c1cd947cfa8ecef128a6cee89 > e8813293073545f99658adbec2f80c1d - default default] Connection failed: > failed to resolve broker hostname (retrying in 0 seconds): OSError: failed > to resolve broker hostname > > > > Check for a typo in your config that points services to an incorrect RMQ > hostname, or a networking issue that prevents them from connecting. > > > > *From:* open infra > *Sent:* Thursday, June 17, 2021 10:06 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > controller-0:/var/log/pods$ sudo rabbitmqctl cluster_status > Password: > Cluster status of node rabbit at localhost ... > [{nodes,[{disc,[rabbit at localhost]}]}, > {running_nodes,[rabbit at localhost]}, > {cluster_name,<<"rabbit at controller-0">>}, > {partitions,[]}, > {alarms,[{rabbit at localhost,[]}]}] > > > > On Thu, Jun 17, 2021 at 7:32 PM Braden, Albert < > C-Albert.Braden at charter.com> wrote: > > It looks like your RMQ is broken. What do you get from “rabbitmqctl > cluster_status”? > > > > *From:* open infra > *Sent:* Thursday, June 17, 2021 9:58 AM > *To:* Braden, Albert > *Cc:* openstack-discuss > *Subject:* Re: [EXTERNAL] Error creating VMs > > > > *CAUTION:* The e-mail below is from an external source. Please exercise > caution before opening attachments, clicking links, or following guidance. > > Hi Albert, > > Sorry for the inconvenience. > > Please note that I have recreated both the data network at starlingx (physical network of openstack) and the network of openstack. > > But I still have the same issue. Please find scheduler and conductor logs. > > # Scheduler Logs > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lokendrarathour at gmail.com Tue Jun 29 08:33:15 2021 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 29 Jun 2021 14:03:15 +0530 Subject: Fwd: [TRIPLEO] - ZUN Support in TripleO In-Reply-To: References: Message-ID: Hello Everyone, We are curious in understanding the usage of ZUN Sevice in TripleO with respect to which we have questions as below: 1. Does TripleO Support ZUN? 2. If not then, is there any alternative to deploy containerize services in TripleO? Any support with respect to the questions raised will definitely help us in deciding the tripleO usage. -- ~ Lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Jun 29 09:46:21 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 29 Jun 2021 12:46:21 +0300 Subject: [TRIPLEO] - ZUN Support in TripleO In-Reply-To: References: Message-ID: On Tue, Jun 29, 2021 at 11:36 AM Lokendra Rathour wrote: > > Hello Everyone, > We are curious in understanding the usage of ZUN Sevice in TripleO with > respect to which we have questions as below: > > 1. Does TripleO Support ZUN? > > no > > 1. If not then, is there any alternative to deploy containerize > services in TripleO? > > yes we have been deploying services with containers since queens and this is the default (in fact we have stopped supporting non containerized services altogether for a few releases now). For the default list of containers see [1] and information regarding the deployment can be found in [2] (though note that is community best effort docs so beware it may be a bit outdated in places). hope it helps for now regards, marios [1] https://opendev.org/openstack/tripleo-common/src/commit/5836974cf216f5230843e0c63eea21194b527368/container-images/tripleo_containers.yaml [2] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud Any support with respect to the questions raised will definitely help us in > deciding the tripleO usage. > > -- > ~ Lokendra > skype: lokendrarathour > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Tue Jun 29 10:22:46 2021 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 29 Jun 2021 15:52:46 +0530 Subject: [TRIPLEO] - ZUN Support in TripleO In-Reply-To: References: Message-ID: Hi Marios, Thank you for the information. With respect to the *second question*, please note: "is there any alternative to deploy containerize services in TripleO?" In our use case, in addition to having our workloads in VNFs, we also want to have containerized deployment of certain workloads on top of OpenStack. Zun service could give us that flexibility. Is there any reason that deployment of Zun with TripleO is not supported? And is there an alternative to Zun that the community is using in productions for deploying containerized workloads on top of OpenStack? please advise. Regards, Lokendra On Tue, Jun 29, 2021 at 3:16 PM Marios Andreou wrote: > > > On Tue, Jun 29, 2021 at 11:36 AM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> >> Hello Everyone, >> We are curious in understanding the usage of ZUN Sevice in TripleO with >> respect to which we have questions as below: >> >> 1. Does TripleO Support ZUN? >> >> > no > > >> >> 1. If not then, is there any alternative to deploy containerize >> services in TripleO? 
>> >> > yes we have been deploying services with containers since queens and this > is the default (in fact we have stopped supporting non containerized > services altogether for a few releases now). For the default list of > containers see [1] and information regarding the deployment can be found in > [2] (though note that is community best effort docs so beware it may be a > bit outdated in places). > > hope it helps for now > > regards, marios > > [1] > https://opendev.org/openstack/tripleo-common/src/commit/5836974cf216f5230843e0c63eea21194b527368/container-images/tripleo_containers.yaml > [2] > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud > > > Any support with respect to the questions raised will definitely help us >> in deciding the tripleO usage. >> >> -- >> ~ Lokendra >> skype: lokendrarathour >> >> >> -- ~ Lokendra www.inertiaspeaks.com www.inertiagroups.com skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Jun 29 10:31:51 2021 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 29 Jun 2021 12:31:51 +0200 Subject: [tc][all] What should we have as community goals for Y series?( Starting community-wide goals ideas) In-Reply-To: References: <20210628213459.zoqszfoot7xhxij3@yuggoth.org> Message-ID: <184b80d3-2237-8147-3ce8-99c8280849d7@openstack.org> Julia Kreger wrote: > I have a crazy idea! > > What if instead of a common singular goal to uniformly raise the bar > across projects, we have each project work on their *most* painful > operator perceived performance or experience issue and attempt to try > and eliminate the issue or perception? And where cross-project > integrations are involved, other projects could put review priority on > helping get fixes or improvements pushed forward to address such > operator experiences. > > Such an effort would take a dramatically different appearance by > project, and would really require each project to identify a known > issue, and then to report it back along with the gain they yielded > from the effort. Of course, to get there, projects would also have to > ensure that they could somehow measure the impact of their changes to > remedy such an issue. I like that! -- Thierry From tonyppe at gmail.com Tue Jun 29 11:02:29 2021 From: tonyppe at gmail.com (Tony Pearce) Date: Tue, 29 Jun 2021 19:02:29 +0800 Subject: [kayobe][victoria] no module named docker - deploy fail after deploy successful Message-ID: I had a successful deployment of Openstack Victoria via Kayobe with an all-in-one node running controller and compute roles. I wanted to then add 2 controller nodes to make 3 controllers and one compute. The 2 additional controllers have a different interface naming so I needed to modify the inventory. I checked this ansible documentation to figure out the changes I’d need to make [1]. The first try, I misunderstood the layout because kayobe tried to configure the new nodes interfaces with incorrect naming. In my first try I tried this inventory layout: Move existing configuration: [kayobe config] / Inventory / * To [kayobe config] / Inventory / Ciscohosts / Create new from copy of existing:[kayobe config] / Inventory / Ciscohosts / > [kayobe config] / Inventory / Otherhosts / The Otherhosts had its own host file with the 2 controllers and group_vars/controllers network interface configuration as per these two hosts. 
But anyway it didn't achieve the desired result, so I rechecked the Ansible doc [1] and decided to do this another way as follows: In my second try I first reversed the inventory change: Delete the new dir: [kayobe config] / Inventory / Otherhosts / Move back the config: [kayobe config] / Inventory / Ciscohosts / * > [kayobe config] / Inventory / Delete the empty dir: [kayobe config] / Inventory / Ciscohosts / Then create host_vars for the two individual hosts: [kayobe config] / Inventory / host_vars / cnode2 [kayobe config] / Inventory / host_vars / cnode3 And updated the single hosts inventory file [kayobe config] / Inventory / hosts This seemed to work fine: the “kayobe overcloud host configure” was successful and the hosts' interfaces were set up as I desired. The issue came when doing the “kayobe overcloud service deploy”, which failed with "/usr/bin/python3", "-c", "import docker” = ModuleNotFoundError: No module named 'docker' for all three nodes, where previously it (the deployment) had been successful for the all-in-one node. I do not know if this task had run or been skipped before, but the task is run against the "baremetal" group, and controllers and compute are in this group, so I assume that it had been run successfully in previous deployments, and this is the weird thing because no other changes have been made apart from those described here. Verbose error output: [3] After the above error, I reverted the inventory back to the “working” state, which basically means updating the inventory hosts file to remove the 2 controllers, as well as removing the whole host_vars directory. After doing this, however, the same error is still seen: /usr/bin/python3", "-c", "import docker” = ModuleNotFoundError. I logged into the host and tried to run this manually on the CLI and I see the same output. What I don’t understand is why this error is occurring now after previous successful deployments. To try and resolve/work around this issue I have tried the following, to no avail: - recreating virtual environments on the all-in-one node - recreating virtual environments on the ACH - deleting the [kolla config] directory - deleting .ansible and /tmp/ caches - turning off pipelining After doing the above I needed to do the control host bootstrap and host configure before service deploy; however, the same error persisted and I could not work around it with any of the above steps. As a test, I decided to turn off this task in the playbook [4], and the yml file runs as follows: [2]. This results in a (maybe pseudo) successful deployment again, in the sense that it deploys without failure because that task does not run. After this was successful in deploying once again as it previously had been, I added the two controller nodes using the “host_vars” and then I was able to successfully deploy again with HA controllers. Well, it is successful apart from a Designate issue due to Designate already having the config [5]. I can log in to the horizon dashboard and under system information I can see all three controllers there. Could I ask the community for help with: 1. Regarding the kayobe inventory, is anything wrong with the 2nd attempt in line with Kayobe? 2. Has anyone come across this docker issue (or something similar in this context of failing after being successful) and can suggest anything? I repeatedly get these odd issues where successful deployments then fail in the future. This often occurs after making a config change and then rolling back, but the roll back does not return to a working deployment state.
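As a side note for anyone hitting the same ModuleNotFoundError: the failing precheck essentially just imports the Docker SDK for Python with whatever interpreter the task is using, so it can help to run the same import by hand on an affected host and, if the module really is missing, install it into that interpreter. The commands below are only a rough sketch; the virtualenv path is an assumption and will depend on how kayobe was configured in your environment.

# run on an affected overcloud host
/usr/bin/python3 -c "import docker; print(docker.__version__)"
# if the deployment uses a remote virtualenv, test that interpreter instead
# (path is illustrative; check your kayobe virtualenv settings)
/opt/kayobe/venvs/kolla-ansible/bin/python -c "import docker; print(docker.__version__)"
# install the SDK into whichever interpreter the precheck actually uses
/usr/bin/python3 -m pip install docker

If the import succeeds interactively but the precheck still fails, that points at the task using a different interpreter (ansible_python_interpreter) than the one tested by hand.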
The fix/workaround for me in these cases is to “kayobe overcloud service destroy --yes-i-really-really-mean-it” and also re-deploy the host. [1] Best Practices — Ansible Documentation [2] modified Checking docker SDK version# command: "{{ ansible_python.execut - Pastebin.com [3] TASK [prechecks : Checking docker SDK version] ********************************* - Pastebin.com [4] /home/cv-user/kayobe-victoria/venvs/kolla-ansible/share/kolla-ansible/ansible/roles/prechecks/tasks/package_checks.yml [5] TASK [designate : Update DNS pools] ******************************************** - Pastebin.com Kind regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Tue Jun 29 12:02:40 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 29 Jun 2021 21:02:40 +0900 Subject: [neutron] bug deputy report (Jun 21-27) Message-ID: Hi team, I was a bug deputy in neutron last week. Sorry for late. Summary ======= The following needs attentions from OVN and DVR folks. - 1 bug on OVN driver related to security group rule deletion needs investigation - 2 DVR-specific bugs need to be triaged by L3 DVR folks I also marked a bug on explicit deletion of a port used by nova as RFE. Needs attentions ================ * [OVN] neutron with ovn returns Conflict on security group rules delete https://bugs.launchpad.net/bugs/1933638 Medium, unassigned, specific to OVN mechanism driver * Untriaged specific to DVR * snat arp entry missing in qrouter namespace https://bugs.launchpad.net/neutron/+bug/1933092 * Unable to update mtu on DVR snat https://bugs.launchpad.net/bugs/1933273 * OSError: Premature eof waiting for privileged process error in Ussuri release of Neutron https://bugs.launchpad.net/neutron/+bug/1933813 Collecting more information from the bug author * Unassigned Gate failure * [Fullstack] TestLegacyL3Agent.test_mtu_update fails sometimes https://bugs.launchpad.net/neutron/+bug/1933234 Bugs with assignees =================== * New with assignees * [OVN] Live migration of network sensitive VMs breaks communication https://bugs.launchpad.net/neutron/+bug/1933517 New, Medium, assigned to ralonsoh * Explicitly provide a DB context when executing a DB transaction https://bugs.launchpad.net/neutron/+bug/1933321 New, Wishlist, this is a tracker for all patches to decorate db transactions * Confirmed/Triaged * Unable to show security groups for non-admin users if custom policies using. https://bugs.launchpad.net/bugs/1933242 Confirmed, High, Assigned to amotoki * missing global_request_id in neutron_lib context from_dict method https://bugs.launchpad.net/bugs/1933802 Low, Confirmed, potential improvement in neutron-lib Context.from_dict() Unassigned but amotoki can take care of it. * In Progress * [OVN] The type of ovn controller is not recognized as a gateway agent https://bugs.launchpad.net/bugs/1933401 * L3 DB FloatingIP callbacks improvements https://bugs.launchpad.net/bugs/1933502 * Stable branches * [stable/train] Backported patch introduced incompatible method calls for py2.7 https://bugs.launchpad.net/bugs/1933366 High, New, assigned to ralonsoh RFEs ==== * locked instance can be rendered broken by deleting port https://bugs.launchpad.net/bugs/1930866 It was filed as a normal bug, but it requires an API change, so I triaged it as RFE. 
* [RFE] Add distributed datapath for metadata https://bugs.launchpad.net/neutron/+bug/1933222 From syedammad83 at gmail.com Tue Jun 29 12:44:26 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 29 Jun 2021 17:44:26 +0500 Subject: [wallaby][nova] CPU topology and NUMA Nodes In-Reply-To: <9b6d248665ced4f826fedddd2ccb4649dd148273.camel@redhat.com> References: <9b6d248665ced4f826fedddd2ccb4649dd148273.camel@redhat.com> Message-ID: Thanks,, the information is really helpful. I am have set below properties to flavor according to my numa policies. --property hw:numa_nodes=FLAVOR-NODES \ --property hw:numa_cpus.N=FLAVOR-CORES \ --property hw:numa_mem.N=FLAVOR-MEMORY I am having below error in compute logs. Any advise. libvirt.libvirtError: Unable to write to '/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2d48\x2dinstance\x2d0000026b.scope/emulator/cpuset.cpus': Permission denied 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest Traceback (most recent call last): 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 155, in launch 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest return self._domain.createWithFlags(flags) 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 193, in doit 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest result = proxy_call(self._autowrap, f, *args, **kwargs) 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 151, in proxy_call 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest rv = execute(f, *args, **kwargs) 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 132, in execute 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest six.reraise(c, e, tb) 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest raise value 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 86, in tworker 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest rv = meth(*args, **kwargs) 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest File "/usr/lib/python3/dist-packages/libvirt.py", line 1265, in createWithFlags 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest libvirt.libvirtError: Unable to write to '/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2d48\x2dinstance\x2d0000026b.scope/emulator/cpuset.cpus': Permission denied 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest 2021-06-29 12:33:10.146 1310945 ERROR nova.virt.libvirt.driver [req-4f6fc6aa-04d6-4dc0-921f-2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 890eb2b7d1b8488aa88de7c34d08817a - default default] [instance: ed87bf68-b631-4a00-9eb5-22d32ec37402] Failed to start libvirt guest: libvirt.libvirtError: Unable to write to '/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2d48\x2dinstance\x2d0000026b.scope/emulator/cpuset.cpus': Permission denied 2021-06-29 12:33:10.150 1310945 INFO os_vif [req-4f6fc6aa-04d6-4dc0-921f-2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 
890eb2b7d1b8488aa88de7c34d08817a - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:3d:c8,bridge_name='br-int',has_traffic_filtering=True,id=a991cd33-2610-4823-a471-62171037e1b5,network=Network(a0d85af2-a991-4102-8453-ba68c5e10b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa991cd33-26') 2021-06-29 12:33:10.151 1310945 INFO nova.virt.libvirt.driver [req-4f6fc6aa-04d6-4dc0-921f-2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 890eb2b7d1b8488aa88de7c34d08817a - default default] [instance: ed87bf68-b631-4a00-9eb5-22d32ec37402] Deleting instance files /var/lib/nova/instances/ed87bf68-b631-4a00-9eb5-22d32ec37402_del 2021-06-29 12:33:10.152 1310945 INFO nova.virt.libvirt.driver [req-4f6fc6aa-04d6-4dc0-921f-2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 890eb2b7d1b8488aa88de7c34d08817a - default default] [instance: ed87bf68-b631-4a00-9eb5-22d32ec37402] Deletion of /var/lib/nova/instances/ed87bf68-b631-4a00-9eb5-22d32ec37402_del complete 2021-06-29 12:33:10.258 1310945 ERROR nova.compute.manager [req-4f6fc6aa-04d6-4dc0-921f-2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 890eb2b7d1b8488aa88de7c34d08817a - default default] [instance: ed87bf68-b631-4a00-9eb5-22d32ec37402] Instance failed to spawn: libvirt.libvirtError: Unable to write to '/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2d48\x2dinstance\x2d0000026b.scope/emulator/cpuset.cpus': Permission denied Any advise how to fix this permission issue ? I have manually created the directory machine-qemu in /sys/fs/cgroup/cpuset/machine.slice/ but still having the same error. I have also tried to set [compute] cpu_shared_set AND [compute] cpu_dedicated_set they are also giving the same error. Using ubuntu20.04 and qemu-kvm 4.2. Ammad On Fri, Jun 25, 2021 at 10:54 AM Sean Mooney wrote: > On Fri, 2021-06-25 at 10:02 +0500, Ammad Syed wrote: > > Hi, > > > > I am using openstack wallaby on ubuntu 20.04 and kvm. I am working to > make > > optimized flavor properties that should provide optimal performance. I > was > > reviewing the document below. > > > > https://docs.openstack.org/nova/wallaby/admin/cpu-topologies.html > > > > I have two socket AMD compute node. The workload running on nodes are > mixed > > workload. > > > > My question is should I use default nova CPU topology and NUMA node that > > nova deploys instance by default OR should I use hw:cpu_sockets='2' > > and hw:numa_nodes='2'. > the latter hw:cpu_sockets='2' and hw:numa_nodes='2' should give you better > performce > however you should also set hw:mem_page_size=small or hw:mem_page_size=any > when you enable virtual numa policies we afinities the guest memory to > host numa nodes. > This can lead to Out of memory evnet on the the host numa nodes which can > result in vms > being killed by the host kernel memeory reaper if you do not enable numa > aware memeory > trackign iin nova which is done by setting hw:mem_page_size. setting > hw:mem_page_size has > the side effect of of disabling memory over commit so you have to bare > that in mind. > if you are using numa toplogy you should almost always also use hugepages > which are enabled > using hw:mem_page_size=large this however requires you to configure > hupgepages in the host > at boot. > > > > Which one from above provide best instance performance ? or any other > > tuning should I do ? > > in the libvirt driver the default cpu toplogy we will genergated > is 1 thread per core, 1 core per socket and 1 socket per flavor.vcpu. 
> (technially this is an undocumeted implemation detail that you should not > rely on, we have the hw:cpu_* element if you care about the toplogy) > > this was more effincet in the early days of qemu/openstack but has may > issue when software is chagne per sokcet or oepreating systems have > a limit on socket supported such as windows. > > generally i advies that you set hw:cpu_sockets to the typical number of > sockets on the underlying host. > simialrly if the flavor will only be run on host with SMT/hypertreading > enabled on you shoudl set hw:cpu_threads=2 > > the flavor.vcpus must be devisable by the product of hw:cpu_sockets, > hw:cpu_cores and hw:cpu_threads if they are set. > > so if you have hw:cpu_threads=2 it must be devisable by 2 > if you have hw:cpu_threads=2 and hw:cpu_sockets=2 flavor.vcpus must be a > multiple of 4 > > > > The note in the URL (CPU topology sesion) suggests that I should stay > with > > default options that nova provides. > in generaly no you should aling it to the host toplogy if you have similar > toplogy across your data center. > the default should always just work but its not nessisarly optimal and > window sguest might not boot if you have too many sockets. > windows 10 for exmple only supprot 2 socket so you could only have 2 > flavor.vcpus if you used the default toplogy. > > > > > Currently it also works with libvirt/QEMU driver but we don’t recommend > it > > in production use cases. This is because vCPUs are actually running in > one > > thread on host in qemu TCG (Tiny Code Generator), which is the backend > for > > libvirt/QEMU driver. Work to enable full multi-threading support for TCG > > (a.k.a. MTTCG) is on going in QEMU community. Please see this MTTCG > project > > page for detail. > we do not gnerally recommende using qemu without kvm in produciton. > the mttcg backend is useful in cases where you want to emulate other > plathform but that usecsae > is not currently supported in nova. > for your deployment you should use libvirt with kvm and you should also > consider if you want to support > nested virtualisation or not. > > > > > > Ammad > > > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricolin at ricolky.com Tue Jun 29 13:29:02 2021 From: ricolin at ricolky.com (Rico Lin) Date: Tue, 29 Jun 2021 21:29:02 +0800 Subject: [tc][all] What should we have as community goals for Y series?( Starting community-wide goals ideas) In-Reply-To: References: <20210628213459.zoqszfoot7xhxij3@yuggoth.org> Message-ID: On Tue, Jun 29, 2021 at 6:36 AM Julia Kreger wrote: > I have a crazy idea! I love crazy ideas > > What if instead of a common singular goal to uniformly raise the bar > across projects, we have each project work on their *most* painful > operator perceived performance or experience issue and attempt to try > and eliminate the issue or perception? And where cross-project > integrations are involved, other projects could put review priority on > helping get fixes or improvements pushed forward to address such > operator experiences. What we can do is to ask projects to provide their *most* painful operator perceived performance or experience issue as pre-select goal survey for the first step and then calling for help or mark it as a goal for the Y cycle as the second step. IMO, marking the pre-select goal is exactly to checking if any goal material can be actually a goal. So I think going ahead and raise such activity makes sense. 
We can discuss this more in our next TC meeting if anyone else is interested. -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From smooney at redhat.com Tue Jun 29 13:42:07 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 29 Jun 2021 14:42:07 +0100 Subject: [tc][all] What should we have as community goals for Y series?( Starting community-wide goals ideas) In-Reply-To: <184b80d3-2237-8147-3ce8-99c8280849d7@openstack.org> References: <20210628213459.zoqszfoot7xhxij3@yuggoth.org> <184b80d3-2237-8147-3ce8-99c8280849d7@openstack.org> Message-ID: 
On Tue, 2021-06-29 at 12:31 +0200, Thierry Carrez wrote: > Julia Kreger wrote: > > I have a crazy idea! > > > > What if instead of a common singular goal to uniformly raise the bar > > across projects, we have each project work on their *most* painful > > operator perceived performance or experience issue and attempt to try > > and eliminate the issue or perception? And where cross-project > > integrations are involved, other projects could put review priority on > > helping get fixes or improvements pushed forward to address such > > operator experiences. > > > > Such an effort would take a dramatically different appearance by > > project, and would really require each project to identify a known > > issue, and then to report it back along with the gain they yielded > > from the effort. Of course, to get there, projects would also have to > > ensure that they could somehow measure the impact of their changes to > > remedy such an issue. > > I like that! i was going to suggest something similar. we had talked about dedicating the xena or wallaby release, in light of world events, to taking a step back and having a stabilisation release where we focused less on feature development, worked on tech debt and basically slowed down development to account for the stress and other pressures people were under. unfortunately, at least for nova, that never happened. i'm not sure about other projects, but i really like the idea of having a stabilisation release where projects focus on fixing the pain points of operators rather than on enabling shiny feature X > 
From smooney at redhat.com Tue Jun 29 13:45:45 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 29 Jun 2021 14:45:45 +0100 Subject: [TRIPLEO] - ZUN Support in TripleO In-Reply-To: References: Message-ID: <44c2d59d18916edab9ec1a6a2aeba129963ba2e8.camel@redhat.com> 
On Tue, 2021-06-29 at 15:52 +0530, Lokendra Rathour wrote: > Hi Marios, > Thank you for the information. > > With respect to the *second question*, please note: > "is there any alternative to deploy containerize services in TripleO?" > > In our use case, in addition to having our workloads in VNFs, we also want > to have containerized deployment of certain workloads on top of OpenStack. > Zun service could give us that flexibility. Is there any reason that > deployment of Zun with TripleO is not supported? And is there an > alternative to Zun that the community is using in productions for deploying > containerized workloads on top of OpenStack? i think marios's response missed that zun is the containers-as-a-service project, which provides an alternative to nova or ironic for provisioning compute resources as containers directly on the physical hosts. in ooo terms, that means deploying tenant containers directly on the overcloud host with docker or podman.
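to make that concrete: with zun a tenant workload is started as a container instead of a vm, so rather than "openstack server create" you would run something roughly like the following (illustrative only, assuming the zun client plugin is installed; the container name, network and image arguments here are placeholders that depend on your cloud):

    openstack appcontainer run --name demo --net network=private cirros ping 8.8.8.8

that kind of tenant-facing container provisioning is the capability being asked about here.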
i dont think ooo currently supports this as it is not listed in https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/deployment so to answer your orginal questrion this does not appear to be currently supported. > > please advise. > > Regards, > Lokendra > > > On Tue, Jun 29, 2021 at 3:16 PM Marios Andreou wrote: > > > > > > > On Tue, Jun 29, 2021 at 11:36 AM Lokendra Rathour < > > lokendrarathour at gmail.com> wrote: > > > > > > > > Hello Everyone, > > > We are curious in understanding the usage of ZUN Sevice in TripleO with > > > respect to which we have questions as below: > > > > > > 1. Does TripleO Support ZUN? > > > > > > > > no > > > > > > > > > > 1. If not then, is there any alternative to deploy containerize > > > services in TripleO? > > > > > > > > yes we have been deploying services with containers since queens and this > > is the default (in fact we have stopped supporting non containerized > > services altogether for a few releases now). For the default list of > > containers see [1] and information regarding the deployment can be found in > > [2] (though note that is community best effort docs so beware it may be a > > bit outdated in places). > > > > hope it helps for now > > > > regards, marios > > > > [1] > > https://opendev.org/openstack/tripleo-common/src/commit/5836974cf216f5230843e0c63eea21194b527368/container-images/tripleo_containers.yaml > > [2] > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud > > > > > > Any support with respect to the questions raised will definitely help us > > > in deciding the tripleO usage. > > > > > > -- > > > ~ Lokendra > > > skype: lokendrarathour > > > > > > > > > > From juliaashleykreger at gmail.com Tue Jun 29 13:58:57 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 29 Jun 2021 06:58:57 -0700 Subject: [tc][all] What should we have as community goals for Y series?( Starting community-wide goals ideas) In-Reply-To: References: <20210628213459.zoqszfoot7xhxij3@yuggoth.org> <184b80d3-2237-8147-3ce8-99c8280849d7@openstack.org> Message-ID: On Tue, Jun 29, 2021 at 6:45 AM Sean Mooney wrote: > > On Tue, 2021-06-29 at 12:31 +0200, Thierry Carrez wrote: > > Julia Kreger wrote: > > > I have a crazy idea! > > > > > > What if instead of a common singular goal to uniformly raise the bar > > > across projects, we have each project work on their *most* painful > > > operator perceived performance or experience issue and attempt to try > > > and eliminate the issue or perception? And where cross-project > > > integrations are involved, other projects could put review priority on > > > helping get fixes or improvements pushed forward to address such > > > operator experiences. > > > > > > Such an effort would take a dramatically different appearance by > > > project, and would really require each project to identify a known > > > issue, and then to report it back along with the gain they yielded > > > from the effort. Of course, to get there, projects would also have to > > > ensure that they could somehow measure the impact of their changes to > > > remedy such an issue. > > > > I like that! > > i was going to suggest something similar. 
> we had talked about dedicated the xena or wallaby release in light of world events > to take a step back and have a stablisation release wehre we focused less on feature > development and workd on tech debt and basically slowed down developemtn to acount for > the stress and other pressures people were under. > > unfortunetly at least for nova that never happened. im not sure about other project but > i really like the idea of have a stablisation release where project focus on fixint > the pain points of operators rahter then on enabling shiny feature X > > > I think we, as a community, are very much long due for something such as this. We've got technical debt to pay down. We've got bugs to be fixed. I suspect most of the older projects know exactly where their pain points are already, just because of the need for features, efforts to work on them get put in a back seat or minimal review attention. Perceptions are important, and shiny features are shiny features. If the operator frustration outweighs the value of the shiny feature, then off to another product someone will go. > > From hberaud at redhat.com Tue Jun 29 14:03:58 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 29 Jun 2021 16:03:58 +0200 Subject: [release][stable] stale old branches probably should be EOL'd In-Reply-To: <20210628181225.hdz7ai2vgxgjsdl7@yuggoth.org> References: <20210628181225.hdz7ai2vgxgjsdl7@yuggoth.org> Message-ID: Hello, I was thinking that our tooling will fail with these deliverables addition and tagging but apparently that's not the case so I would suggest to follow the 1st solution (using the existing tools, such as creating a yaml file etc ). Le lun. 28 juin 2021 à 20:15, Jeremy Stanley a écrit : > On 2021-06-28 20:03:48 +0200 (+0200), Előd Illés wrote: > [...] > > If we stick to the tagging + deletion, then the next question is how to > > achieve this. There are a couple of options: > > > > 1. use the existing tools: such as creating a yaml file under > > deliverables/$series/ and add the $series-eol tag for the given > repositories > > (I understand that these are not 'real' deliverables, but does that cause > > any issue for us?) > > 2. implement some new mechanism, similar to option 1, but clearly > indicate > > that the tagging does not create any deliverables > > 3. manual tagging + deletion > > > > I think the 1st option is the easiest and since we already have the whole > > process there, we can simply use the existing tool. > > > > So what do you think? > > - Is that OK to tag these open old stable branches with $series-eol tag > and > > then delete them? > WFM > - If yes, which option from the above list is acceptable, or what else can > > we do? > The first one. [...] > > My two cents, I think it's okay to tag those old branches and delete > them, and I support option 1 as it helps create a bit of a > breadcrumb trail for future reflection. > > As for how they got overlooked, I expect you're right. In the past, > well before we had any real release automation and tracking, > projects asked the Infra team to delete their old branches, and > quite often did not provide a complete list. 
> -- > Jeremy Stanley > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Tue Jun 29 14:08:38 2021 From: bkslash at poczta.onet.pl (at) Date: Tue, 29 Jun 2021 16:08:38 +0200 Subject: [neutron-vpnaas][kolla-ansible][victoria] Growing memory consumption with only one VPN connection In-Reply-To: References: Message-ID: <44F0A10E-99B2-430B-9F32-2E3A4CE95B58@poczta.onet.pl> Hi, 1. There is only one, bidirectional vpn connection between project networks in two regions 2. there is no (or almost no) traffic on this vpn link 3. second region is on one host (kolla all-in-one) and situation looks the same- with vpn service enabled there is excessive Memory use (growing also about 5-6MB/ minute and forcing the system to use swapfile) there is no big cpu usage (ofcourse also due to no traffic inside VPN) It's the same case even if there's no vpn connection configured. Best regards Adam Tomas Wysłane z iPhone'a > Wiadomość napisana przez Vinh Nguyen Duc w dniu 29.06.2021, o godz. 11:04: > >  > Hi Adam, > Do you think the VPN connection is use CPU and memory to encrypt the VPN traffic? > > On Mon, Jun 28, 2021 at 14:30 Adam Tomas > wrote: > Hi, > I have problem with neutron vpnaas - after enabling vpnaas plugin everything was ok at first. I was able to create VPN connection and communication is correct. But after a week of running vpnaas (only one VPN connection created/working!) I’ve noticed, that neutron-vpnaas takes more and more memory. I have 5 processes on each controller (there should be always five? Or it is changing dynamically?): > > 42435 1545384 0.6 8.6 5802516 5712412 ? S Jun15 110:50 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf > 42435 1545389 0.6 8.5 5735832 5645856 ? S Jun15 112:16 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf > 42435 1545378 0.5 8.5 5734192 5643620 ? S Jun15 108:09 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf > 42435 1545372 0.5 8.5 5731128 5641436 ? 
S Jun15 109:07 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf > 42435 1545369 0.6 8.4 5637084 5547392 ? S Jun15 114:21 /var/lib/kolla/venv/bin/python3.8 /var/lib/kolla/venv/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron_vpnaas.conf > > now neutron_server takes over 27G of RAM on each controller: > > neutron_server running 10.2 27.2G > > After stopping all neutron containers and starting it again neutron takes a lot less memory: > > neutron_server running 0.5 583M > > but memory usage keeps growing (about 5-6 M every minute). > What’s wrong? When I disable vpnaas globally there’s no problem with excessive memory usage, so it’s vpnaas for sure… > > Best regards > Adam Tomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Tue Jun 29 14:12:20 2021 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 29 Jun 2021 09:12:20 -0500 Subject: October 2021 PTG Dates & Registration Message-ID: <05081016-72E3-4BC6-A21C-4366BAA22EFC@openstack.org> Hi everyone, We're happy to announce the next virtual PTG[1] will take place October 18-22, 2021! Registration is now open[2]. The virtual PTG is free to attend, but make sure to register so you recieve important communications like schedules, passwords, and other relevant updates. Next week, keep an eye out for info regarding team sign-ups. Can't wait to see you all there! Ashlee [1] https://www.openstack.org/ptg/ [2] https://openinfra-ptg.eventbrite.com From rosmaita.fossdev at gmail.com Tue Jun 29 14:14:59 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 29 Jun 2021 10:14:59 -0400 Subject: [cinder] reminder: this week's meeting in video+IRC Message-ID: Quick reminder that this week's Cinder team meeting on Wednesday 30 June, being the final meeting of the month, will be held in both videoconference and IRC at the regularly scheduled time of 1400 UTC. These are the video meeting rules we've agreed to: * Everyone will keep IRC open during the meeting. * We'll take notes in IRC to leave a record similar to what we have for our regular IRC meetings. * Some people are more comfortable communicating in written English. So at any point, any attendee may request that the discussion of the current topic be conducted entirely in IRC. * The meeting will be recorded. connection info: https://bluejeans.com/3228528973 meeting agenda: https://etherpad.opendev.org/p/cinder-xena-meetings cheers, brian From radoslaw.piliszek at gmail.com Tue Jun 29 14:28:18 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 29 Jun 2021 16:28:18 +0200 Subject: [masakari] PTL on vacations, next two meetings cancelled Message-ID: Hello, Propagating this information from today's meeting: The next two Masakari meetings (July 6 and 13) are cancelled because I will be on vacations. Kind regards, -yoctozepto From salman10sheikh at gmail.com Tue Jun 29 03:36:57 2021 From: salman10sheikh at gmail.com (Salman Sheikh) Date: Tue, 29 Jun 2021 09:06:57 +0530 Subject: How to get information about the available space in all cinder-volume In-Reply-To: <20210628133112.GA3190345@sm-workstation> References: <20210628133112.GA3190345@sm-workstation> Message-ID: Thanks for the information. 
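In case it helps anyone else who finds this thread later: if the backend is the default LVM driver on /dev/sdb, the remaining capacity should also be readable directly from the volume group on that node, for example something like:

    sudo vgs cinder-volumes

where the VFree column shows what is left (this assumes the default volume group name; use whatever volume_group is set to in cinder.conf).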
On Mon, Jun 28, 2021 at 7:01 PM Sean McGinnis wrote: > On Fri, Jun 25, 2021 at 12:17:53PM +0530, Salman Sheikh wrote: > > Dear experts, > > > > I have made cinder-volume /dev/sdb on controller as well as compute node, > > how do i get the information of space available in cinder. > > Hi Salman, > > Cinder is just the management/control plane for the storage, so it has no > visibility into actual space consumed on the volume. It can only report the > configured size. > > In order to find out the space available, you would need to run something > like > `df -h` on the node to see its usage stats. > > Sean > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Jun 29 15:29:47 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 29 Jun 2021 16:29:47 +0100 Subject: [wallaby][nova] CPU topology and NUMA Nodes In-Reply-To: References: <9b6d248665ced4f826fedddd2ccb4649dd148273.camel@redhat.com> Message-ID: <34ae50bc2491eaae52e6a340dc96424153fd9531.camel@redhat.com> On Tue, 2021-06-29 at 17:44 +0500, Ammad Syed wrote: > Thanks,, the information is really helpful. I am have set below properties to > flavor according to my numa policies.  > >     --property hw:numa_nodes=FLAVOR-NODES \ >     --property hw:numa_cpus.N=FLAVOR-CORES \ >     --property hw:numa_mem.N=FLAVOR-MEMORY > > I am having below error in compute logs. Any advise. > >  libvirt.libvirtError: Unable to write to > '/sys/fs/cgroup/cpuset/machine.slice/machine- > qemu\x2d48\x2dinstance\x2d0000026b.scope/emulator/cpuset.cpus': Permission > denied > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest Traceback (most > recent call last): > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest   File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 155, in launch > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest     return > self._domain.createWithFlags(flags) > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest   File > "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 193, in doit > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest     result = > proxy_call(self._autowrap, f, *args, **kwargs) > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest   File > "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 151, in proxy_call > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest     rv = > execute(f, *args, **kwargs) > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest   File > "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 132, in execute > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest     six.reraise(c, > e, tb) > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest   File > "/usr/lib/python3/dist-packages/six.py", line 703, in reraise > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest     raise value > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest   File > "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 86, in tworker > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest     rv = > meth(*args, **kwargs) > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest   File > "/usr/lib/python3/dist-packages/libvirt.py", line 1265, in createWithFlags > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest     if ret == -1: > raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest > 
libvirt.libvirtError: Unable to write to > '/sys/fs/cgroup/cpuset/machine.slice/machine- > qemu\x2d48\x2dinstance\x2d0000026b.scope/emulator/cpuset.cpus': Permission > denied > 2021-06-29 12:33:10.144 1310945 ERROR nova.virt.libvirt.guest > 2021-06-29 12:33:10.146 1310945 ERROR nova.virt.libvirt.driver [req-4f6fc6aa- > 04d6-4dc0-921f-2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 > 890eb2b7d1b8488aa88de7c34d08817a - default default] [instance: ed87bf68-b631- > 4a00-9eb5-22d32ec37402] Failed to start libvirt guest: libvirt.libvirtError: > Unable to write to '/sys/fs/cgroup/cpuset/machine.slice/machine- > qemu\x2d48\x2dinstance\x2d0000026b.scope/emulator/cpuset.cpus': Permission > denied > 2021-06-29 12:33:10.150 1310945 INFO os_vif [req-4f6fc6aa-04d6-4dc0-921f- > 2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 890eb2b7d1b8488aa88de7c34d08817a - > default default] Successfully unplugged vif > VIFOpenVSwitch(active=False,address=fa:16:3e:ba:3d:c8,bridge_name='br- > int',has_traffic_filtering=True,id=a991cd33-2610-4823-a471- > 62171037e1b5,network=Network(a0d85af2-a991-4102-8453- > ba68c5e10b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_de > lete=False,vif_name='tapa991cd33-26') > 2021-06-29 12:33:10.151 1310945 INFO nova.virt.libvirt.driver [req-4f6fc6aa- > 04d6-4dc0-921f-2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 > 890eb2b7d1b8488aa88de7c34d08817a - default default] [instance: ed87bf68-b631- > 4a00-9eb5-22d32ec37402] Deleting instance files > /var/lib/nova/instances/ed87bf68-b631-4a00-9eb5-22d32ec37402_del > 2021-06-29 12:33:10.152 1310945 INFO nova.virt.libvirt.driver [req-4f6fc6aa- > 04d6-4dc0-921f-2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 > 890eb2b7d1b8488aa88de7c34d08817a - default default] [instance: ed87bf68-b631- > 4a00-9eb5-22d32ec37402] Deletion of /var/lib/nova/instances/ed87bf68-b631-4a00- > 9eb5-22d32ec37402_del complete > 2021-06-29 12:33:10.258 1310945 ERROR nova.compute.manager [req-4f6fc6aa-04d6- > 4dc0-921f-2913b40a76a9 2af528fdf3244e15b4f3f8fcfc0889c5 > 890eb2b7d1b8488aa88de7c34d08817a - default default] [instance: ed87bf68-b631- > 4a00-9eb5-22d32ec37402] Instance failed to spawn: libvirt.libvirtError: Unable > to write to '/sys/fs/cgroup/cpuset/machine.slice/machine- > qemu\x2d48\x2dinstance\x2d0000026b.scope/emulator/cpuset.cpus': Permission > denied > > Any advise how to fix this permission issue ? > > I have manually created the directory machine-qemu in > /sys/fs/cgroup/cpuset/machine.slice/ but still having the same error. > > I have also tried to set [compute] cpu_shared_set AND [compute] > cpu_dedicated_set  they are also giving the same error. There are quite a few bugs about this [1][2]. It seems most of them are caused by CPUs being offlined. Have you offline CPUs? Are the CPUs listed in the mask all available? Stephen [1] https://bugzilla.redhat.com/show_bug.cgi?id=1609785 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1842716 > Using ubuntu20.04 and qemu-kvm 4.2. > > Ammad > > On Fri, Jun 25, 2021 at 10:54 AM Sean Mooney wrote: > > On Fri, 2021-06-25 at 10:02 +0500, Ammad Syed wrote: > > > Hi, > > > > > > I am using openstack wallaby on ubuntu 20.04 and kvm. I am working to make > > > optimized flavor properties that should provide optimal performance. I was > > > reviewing the document below. > > > > > > https://docs.openstack.org/nova/wallaby/admin/cpu-topologies.html > > > > > > I have two socket AMD compute node. The workload running on nodes are mixed > > > workload. 
> > > > > > My question is should I use default nova CPU topology and NUMA node that > > > nova deploys instance by default OR should I use hw:cpu_sockets='2' > > > and hw:numa_nodes='2'. > > the latter hw:cpu_sockets='2' and hw:numa_nodes='2' should give you better > > performce > > however you should also set hw:mem_page_size=small or hw:mem_page_size=any > > when you enable virtual numa policies we afinities the guest memory to host > > numa nodes. > > This can lead to Out of memory evnet on the the host numa nodes which can > > result in vms > > being killed by the host kernel memeory reaper if you do not enable numa aware > > memeory > > trackign iin nova which is done by setting hw:mem_page_size. setting  > > hw:mem_page_size has > > the side effect of of disabling memory over commit so you have to bare that in > > mind. > > if you are using numa toplogy you should almost always also use hugepages > > which are enabled > > using  hw:mem_page_size=large this however requires you to configure > > hupgepages in the host > > at boot. > > > > > > Which one from above provide best instance performance ? or any other > > > tuning should I do ? > > > > in the libvirt driver the default cpu toplogy we will genergated > > is 1 thread per core, 1 core per socket and 1 socket per flavor.vcpu. > > (technially this is an undocumeted implemation detail that you should not rely > > on, we have the hw:cpu_* element if you care about the toplogy) > > > > this was more effincet in the early days of qemu/openstack but has may issue > > when software is chagne per sokcet or oepreating systems have > > a limit on socket supported such as windows. > > > > generally i advies that you set hw:cpu_sockets to the typical number of > > sockets on the underlying host. > > simialrly if the flavor will only be run on host with SMT/hypertreading > > enabled on you shoudl set hw:cpu_threads=2 > > > > the flavor.vcpus must be devisable by the product of hw:cpu_sockets, > > hw:cpu_cores and hw:cpu_threads if they are set. > > > > so if you have  hw:cpu_threads=2 it must be devisable by 2 > > if you have  hw:cpu_threads=2 and hw:cpu_sockets=2 flavor.vcpus must be a > > multiple of 4 > > > > > > The note in the URL (CPU topology sesion) suggests that I should stay with > > > default options that nova provides. > > in generaly no you should aling it to the host toplogy if you have similar > > toplogy across your data center. > > the default should always just work but its not nessisarly optimal and window > > sguest might not boot if you have too many sockets. > > windows 10 for exmple only supprot 2 socket so you could only have 2 > > flavor.vcpus if you used the default toplogy. > > > > > > > > Currently it also works with libvirt/QEMU driver but we don’t recommend it > > > in production use cases. This is because vCPUs are actually running in one > > > thread on host in qemu TCG (Tiny Code Generator), which is the backend for > > > libvirt/QEMU driver. Work to enable full multi-threading support for TCG > > > (a.k.a. MTTCG) is on going in QEMU community. Please see this MTTCG project > > > page for detail. > > we do not gnerally recommende using qemu without kvm in produciton. > > the mttcg backend is useful in cases where you want to emulate other plathform > > but that usecsae > > is not currently supported in nova. > > for your deployment you should use libvirt with kvm and you should also > > consider if you want to support > > nested virtualisation or not. 
> > > > > > > > > Ammad > > > > > > From gmann at ghanshyammann.com Tue Jun 29 15:59:30 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 29 Jun 2021 10:59:30 -0500 Subject: [release][stable] stale old branches probably should be EOL'd In-Reply-To: References: <20210628181225.hdz7ai2vgxgjsdl7@yuggoth.org> Message-ID: <17a587f37cd.c31c3dc3242008.1028223219848352914@ghanshyammann.com> ---- On Tue, 29 Jun 2021 09:03:58 -0500 Herve Beraud wrote ---- > Hello, > I was thinking that our tooling will fail with these deliverables addition and tagging but apparently that's not the case so I would suggest to follow the 1st solution (using the existing tools, such as creating a yaml file etc). > > Le lun. 28 juin 2021 à 20:15, Jeremy Stanley a écrit : > On 2021-06-28 20:03:48 +0200 (+0200), Előd Illés wrote: > [...] > > If we stick to the tagging + deletion, then the next question is how to > > achieve this. There are a couple of options: > > > > 1. use the existing tools: such as creating a yaml file under > > deliverables/$series/ and add the $series-eol tag for the given repositories > > (I understand that these are not 'real' deliverables, but does that cause > > any issue for us?) > > 2. implement some new mechanism, similar to option 1, but clearly indicate > > that the tagging does not create any deliverables > > 3. manual tagging + deletion > > > > I think the 1st option is the easiest and since we already have the whole > > process there, we can simply use the existing tool. > > > > So what do you think? > > - Is that OK to tag these open old stable branches with $series-eol tag and > > then delete them? > > WFM > > - If yes, which option from the above list is acceptable, or what else can > > we do? > > The first one. +1, tagging and deleting will cleanup the things. -gmann > [...] > > My two cents, I think it's okay to tag those old branches and delete > them, and I support option 1 as it helps create a bit of a > breadcrumb trail for future reflection. > > As for how they got overlooked, I expect you're right. In the past, > well before we had any real release automation and tracking, > projects asked the Infra team to delete their old branches, and > quite often did not provide a complete list. 
> -- > Jeremy Stanley > > > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From marios at redhat.com Tue Jun 29 15:59:59 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 29 Jun 2021 18:59:59 +0300 Subject: [TRIPLEO] - ZUN Support in TripleO In-Reply-To: <44c2d59d18916edab9ec1a6a2aeba129963ba2e8.camel@redhat.com> References: <44c2d59d18916edab9ec1a6a2aeba129963ba2e8.camel@redhat.com> Message-ID: On Tue, Jun 29, 2021 at 4:45 PM Sean Mooney wrote: > > On Tue, 2021-06-29 at 15:52 +0530, Lokendra Rathour wrote: > > Hi Marios, > > Thank you for the information. > > > > With respect to the *second question*, please note: > > "is there any alternative to deploy containerize services in TripleO?" > > > > In our use case, in addition to having our workloads in VNFs, we also want > > to have containerized deployment of certain workloads on top of OpenStack. > > Zun service could give us that flexibility. Is there any reason that > > deployment of Zun with TripleO is not supported? And is there an > > alternative to Zun that the community is using in productions for deploying > > containerized workloads on top of OpenStack? > > i think marios's resopnce missed that zun is the containers as a service project > which provide an alternitive to nova or ironci to provision compute resouces as containers > directly on the physical hosts. in ooo term deploying tenant contaienrs directly on the overcloud host > with docker or podman. indeed I did as I got from Lokendra's reply earlier > > i dont think ooo currently supports this as it is not listed in https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/deployment > so to answer your orginal questrion this does not appear to be currently supported. and this isn't somethign I have seen brought up on the irc meetings, ptg or elsewhere. That isn't to say it's impossible (it may be, i just don't know enough about it to say right now ;)). Of course anyone is free to propose a spec about how this might look and get some feedback before working on it in tripleo. regards, marios > > > > please advise. > > > > Regards, > > Lokendra > > > > > > On Tue, Jun 29, 2021 at 3:16 PM Marios Andreou wrote: > > > > > > > > > > > On Tue, Jun 29, 2021 at 11:36 AM Lokendra Rathour < > > > lokendrarathour at gmail.com> wrote: > > > > > > > > > > > Hello Everyone, > > > > We are curious in understanding the usage of ZUN Sevice in TripleO with > > > > respect to which we have questions as below: > > > > > > > > 1. Does TripleO Support ZUN? > > > > > > > > > > > no > > > > > > > > > > > > > > 1. 
If not then, is there any alternative to deploy containerize > > > > services in TripleO? > > > > > > > > > > > yes we have been deploying services with containers since queens and this > > > is the default (in fact we have stopped supporting non containerized > > > services altogether for a few releases now). For the default list of > > > containers see [1] and information regarding the deployment can be found in > > > [2] (though note that is community best effort docs so beware it may be a > > > bit outdated in places). > > > > > > hope it helps for now > > > > > > regards, marios > > > > > > [1] > > > https://opendev.org/openstack/tripleo-common/src/commit/5836974cf216f5230843e0c63eea21194b527368/container-images/tripleo_containers.yaml > > > [2] > > > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/install_overcloud.html#deploy-the-overcloud > > > > > > > > > Any support with respect to the questions raised will definitely help us > > > > in deciding the tripleO usage. > > > > > > > > -- > > > > ~ Lokendra > > > > skype: lokendrarathour > > > > > > > > > > > > > > > > From syedammad83 at gmail.com Tue Jun 29 16:35:01 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 29 Jun 2021 21:35:01 +0500 Subject: [wallaby][nova] CPU topology and NUMA Nodes In-Reply-To: <34ae50bc2491eaae52e6a340dc96424153fd9531.camel@redhat.com> References: <9b6d248665ced4f826fedddd2ccb4649dd148273.camel@redhat.com> <34ae50bc2491eaae52e6a340dc96424153fd9531.camel@redhat.com> Message-ID: Hi Stephen, I have checked all cpus are online. root at kvm10-a1-khi01:/etc/nova# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 48 bits physical, 48 bits virtual CPU(s): 32 On-line CPU(s) list: 0-31 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 2 NUMA node(s): 4 Vendor ID: AuthenticAMD CPU family: 21 I have made below configuration in nova.conf. [compute] cpu_shared_set = 2-7,10-15,18-23,26-31 Below is the xml in nova logs that nova is trying to create domain. 2021-06-29 16:30:56.576 2819 ERROR nova.virt.libvirt.guest [req-c76c6809-1775-43a8-bfb1-70f6726cad9d 2af528fdf3244e15b4f3f8fcfc0889c5 890eb2b7d1b8488aa88de7c34d08817a - default default] Error launching a defined domain with XML: instance-0000026d 06ff4fd5-b21f-4f64-9dde-55e86dd15da6 cpu 2021-06-29 16:30:50 16384 5 0 0 8 admin admin 16777216 16777216 8 8192 OpenStack Foundation OpenStack Nova 23.0.0 06ff4fd5-b21f-4f64-9dde-55e86dd15da6 06ff4fd5-b21f-4f64-9dde-55e86dd15da6 Virtual Machine hvm Opteron_G5 destroy restart destroy /usr/bin/qemu-system-x86_64