From fungi at yuggoth.org Mon Jun 1 01:01:04 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Jun 2020 01:01:04 +0000 Subject: [all] Meetpad case-insensitivity (was: Week Out PTG Details & Registration Reminder) In-Reply-To: Message-ID: <20200601010104.rpxsnewfsyoeacnd@yuggoth.org> Just a quick heads up, we discovered on Friday that Meetpad room names with upper-case letters will cause problems with our Etherpad integration. Use all lower-case pad names to avoid this. If you have an existing pad you want to use with Meetpad and it has some upper-case letters in its name, or punctuation other than hyphen (-) or underscore (_), please reach out to the OpenDev sysadmins in the #opendev IRC channel on Freenode or the service-discuss at lists.opendev.org mailing list and we can rename it for you. For a bit more background, Jitsi-Meet treats room names case-insensitively (due to its XMPP heritage); Etherpad on the other hand treats pad names case-sensitively. If you try to use the "shared document" Etherpad in a Meetpad room with upper-case letters in its name, you'll ultimately wind up getting connected to the wrong pad. The punctuation mentioned above concerns a separate limitation. Right now the only punctuation we're configured to support for Meetpad rooms is hyphens and underscores. As a result, you'll currently be unable to use it with Etherpads whose names include other punctuation like periods, commas, or parentheses. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zhang.lei.fly+os-discuss at gmail.com Mon Jun 1 02:35:49 2020 From: zhang.lei.fly+os-discuss at gmail.com (Jeffrey Zhang) Date: Mon, 1 Jun 2020 10:35:49 +0800 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core In-Reply-To: References: Message-ID: +++1 On Sat, May 30, 2020 at 1:53 AM Michał Nasiadka wrote: > +1! 
> > On Fri, 29 May 2020 at 15:19, Radosław Piliszek < > radoslaw.piliszek at gmail.com> wrote: > >> Hi Folks! >> >> This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, >> CC'ed) as Kolla Ansible core. >> >> Doug coauthored the Nova cells support and helps greatly with monitoring >> and logging facilities available in Kolla. >> >> Please give your feedback in this thread. >> >> If there are no objections, I will add Doug after a week from now (that >> is roughly when PTG is over). >> >> -yoctozepto >> >> -- > Michał Nasiadka > mnasiadka at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Mon Jun 1 05:25:39 2020 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 1 Jun 2020 14:25:39 +0900 Subject: [keystone] is_admin_project=true although no admin project configured Message-ID: <6c57a5f6-551a-9aa1-66ba-166d5166f14f@gmail.com> This is on a stable/ussuri Devstack that I spun up about ten days ago. The documentation for Keystone config option admin_project_name says [1] "If left unset, then there is no admin project". 
It is not set in my cloud, as evidenced by this: $ sudo journalctl -u devstack at keystone |grep admin_project_name Jun 01 11:49:55 ussuri devstack at keystone.service[806]: DEBUG uwsgi [-] *resource.admin_project_name = None *{{(pid=2063) log_opt_values /usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py:2589}} However, when authenticating with any project, I see 'is_admin_project': True in the log, for example here user /linda /with a project-scoped token for project /moon/: Jun 01 13:55:09 ussuri devstack at keystone.service[806]: DEBUG keystone.server.flask.request_processing.middleware.auth_context [None req-4d730134-9544-4475-a72f-b2394863345e *moon linda*] RBAC: auth_context: {'token': , 'domain_id': None, 'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_name': None, 'group_ids': [], 'user_id': 'a8c3559f67094f38a5f0d2d0b581f159', 'user_domain_id': 'default', 'system_scope': None, 'project_id': '163b41b499aa4ac78f2ed968e7fe2a0d', 'project_domain_id': 'default', 'roles': ['admin', 'reader', 'member'], *'is_admin_project': True*, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} {{(pid=2062) fill_context /opt/stack/keystone/keystone/server/flask/request_processing/middleware/auth_context.py:478}} It gets worse. When I configure admin_project_name=admin and admin_project_domain_name=Default,  I do see is_admin_project: false in the log, as expected. Still, /linda/, who has the admin role in the /moon /project,//seems to have cloud admin powers. I tested this by creating a Cinder volume type and by listing all instances in the cloud. So it seems to me that Keystone's old behaviour is in effect: I have admin powers if I have the /admin /role in any project. To me, this looks like a clash between reality and documentation. Am I missing something? Thanks for comments. 
Bernd [1] https://docs.openstack.org/keystone/latest/configuration/config-options.html#resource.admin_project_name -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Mon Jun 1 08:43:28 2020 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 1 Jun 2020 17:43:28 +0900 Subject: [keystone] [SOLVED] is_admin_project=true although no admin project configured In-Reply-To: <6c57a5f6-551a-9aa1-66ba-166d5166f14f@gmail.com> References: <6c57a5f6-551a-9aa1-66ba-166d5166f14f@gmail.com> Message-ID: <0c30d7a3-c26e-d601-21e7-a6d33a5cc603@gmail.com> I found that Ussuri implements the old behaviour by default, and the new behaviour must be configured with

[oslo_policy]
enforce_scope = true
enforce_new_defaults = true

in each service configuration file (I have not checked whether services other than Keystone and Nova observe these new oslo_policy settings). On 6/1/2020 2:25 PM, Bernd Bausch wrote: > > This is on a stable/ussuri Devstack that I spun up about ten days ago. > > The documentation for Keystone config option admin_project_name says > [1] "If left unset, then there is no admin project". 
It is not set in > my cloud, as evidenced by this: > > $ sudo journalctl -u devstack at keystone |grep admin_project_name > Jun 01 11:49:55 ussuri devstack at keystone.service[806]: DEBUG uwsgi [-] > *resource.admin_project_name = None *{{(pid=2063) log_opt_values > /usr/local/lib/python3.6/dist-packages/oslo_config/cfg.py:2589}} > > However, when authenticating with any project, I see > 'is_admin_project': True in the log, for example here user /linda > /with a project-scoped token for project /moon/: > > Jun 01 13:55:09 ussuri devstack at keystone.service[806]: DEBUG > keystone.server.flask.request_processing.middleware.auth_context [None > req-4d730134-9544-4475-a72f-b2394863345e *moon linda*] RBAC: > auth_context: {'token': audit_chain_id=['1Ie2AyIdRb2WUkaSjSzDoQ']) at 0x7fca69b75c88>, > 'domain_id': None, 'trust_id': None, 'trustor_id': None, 'trustee_id': > None, 'domain_name': None, 'group_ids': [], 'user_id': > 'a8c3559f67094f38a5f0d2d0b581f159', 'user_domain_id': 'default', > 'system_scope': None, 'project_id': > '163b41b499aa4ac78f2ed968e7fe2a0d', 'project_domain_id': 'default', > 'roles': ['admin', 'reader', 'member'], *'is_admin_project': True*, > 'service_user_id': None, 'service_user_domain_id': None, > 'service_project_id': None, 'service_project_domain_id': None, > 'service_roles': []} {{(pid=2062) fill_context > /opt/stack/keystone/keystone/server/flask/request_processing/middleware/auth_context.py:478}} > > It gets worse. When I configure admin_project_name=admin and > admin_project_domain_name=Default,  I do see is_admin_project: false > in the log, as expected. Still, /linda/, who has the admin role in the > /moon /project,//seems to have cloud admin powers. I tested this by > creating a Cinder volume type and by listing all instances in the cloud. > > So it seems to me that Keystone's old behaviour is in effect: I have > admin powers if I have the /admin /role in any project. To me, this > looks like a clash between reality and documentation. 
Am I missing > something? > > Thanks for comments. > > Bernd > > [1] > https://docs.openstack.org/keystone/latest/configuration/config-options.html#resource.admin_project_name > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon Jun 1 09:46:11 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 1 Jun 2020 10:46:11 +0100 Subject: [neutron] Neutron Bug Deputy Report May 25-31 Message-ID: Hello: This is the Neutron bug report of week 22 (25 May - 31 May). Untriaged: - "OVN Router sending ARP instead of sending traffic to the gateway" * https://bugs.launchpad.net/neutron/+bug/1881041 - "the accepted-egress-direct-flows can't be deleted when the VM is deleted" * https://bugs.launchpad.net/neutron/+bug/1881070 * Assigned to Li YaJie High: - ""rpc_response_max_timeout" configuration variable not present in Linux Bridge agent" * https://bugs.launchpad.net/neutron/+bug/1880934 * Assigned to Tamerlan Abu * Patch proposed: https://review.opendev.org/#/c/731194 - "[tempest] Error in "test_reuse_ip_address_with_other_fip_on_other_router" with duplicated floating IP" * https://bugs.launchpad.net/neutron/+bug/1880976 * Assigned to Rodolfo * Patch proposed: https://review.opendev.org/731267 - "Neutron ovs agent fails on rpc_loop iteration:1" * https://bugs.launchpad.net/neutron/+bug/1881424 * Assigned to Terry Wilson * Patch proposed: https://review.opendev.org/#/c/732081 Medium: - "interrupted vlan connection after live migration" * https://bugs.launchpad.net/neutron/+bug/1880455 * Unassigned - "SSH issues during ML2/OVS to ML2/OVN migration" * https://bugs.launchpad.net/neutron/+bug/1881029 * Assigned to Brian Haley * Patch proposed: https://review.opendev.org/#/c/731367/ - "[OVN] Router availability zones support" * https://bugs.launchpad.net/neutron/+bug/1881095 * Assigned to Lucas Alvares * Patch proposed: https://review.opendev.org/#/c/727791/ - "[OVS][FW] Remote SG IDs left behind when a SG is 
removed" * https://bugs.launchpad.net/neutron/+bug/1881157 * Assigned to Rodolfo Low: - "Comments for stateless security group are misleading" * https://bugs.launchpad.net/neutron/+bug/1880691 * Assigned to Slawek * Patch proposed: https://review.opendev.org/#/c/730793/ - "[fullstack] Error assigning IPv4 (network address) in "test_gateway_ip_changed" * https://bugs.launchpad.net/neutron/+bug/1880845 * Assigned to Rodolfo - "Creating FIP takes time" * https://bugs.launchpad.net/neutron/+bug/1880969 * Unassigned - "[OVN] In stable branches we don't run neutron-tempest-plugin tests" * https://bugs.launchpad.net/neutron/+bug/1881283 * Unassigned - "Neutron agents process name changed after neutron-server setproctitle change" * https://bugs.launchpad.net/neutron/+bug/1881297 * Unassigned Wishlist: - "[RFE]L3 Router should support ECMP" * https://bugs.launchpad.net/neutron/+bug/1880532 * Assigned to XiaoYu Zhu * Discussed in the Neutron Drivers Meeting on May 25 Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Jun 1 09:49:44 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 1 Jun 2020 11:49:44 +0200 Subject: [neutron] Team meeting on Tuesday 02.06.2020 cancelled Message-ID: <20200601094944.dyktswfa3rdjuuhn@skaplons-mac> Hi, We have PTG sessions at the same time, so let's cancel the IRC meeting this week. See you all at the PTG sessions :) -- Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Mon Jun 1 09:50:20 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 1 Jun 2020 11:50:20 +0200 Subject: [neutron] CI meeting on Wednesday 03.06.2020 cancelled Message-ID: <20200601095020.n7xrjz5pxsz6fzcr@skaplons-mac> Hi, It's PTG time, so let's cancel this week's CI meeting. 
-- Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Mon Jun 1 09:50:39 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 1 Jun 2020 11:50:39 +0200 Subject: [neutron] Drivers meeting on Friday 5.06.2020 cancelled Message-ID: <20200601095039.eppfkftdu66tw63c@skaplons-mac> Hi, It's PTG time so lets cancel this week meeting. -- Slawek Kaplonski Senior software engineer Red Hat From john at johngarbutt.com Mon Jun 1 09:56:04 2020 From: john at johngarbutt.com (John Garbutt) Date: Mon, 1 Jun 2020 10:56:04 +0100 Subject: [nova][ptg] Documentation in nova In-Reply-To: <8E5WAQ.QVEX9LKM41JX2@est.tech> References: <02e32e11204fb4e91979552c5bcda05bda2dd148.camel@redhat.com> <8E5WAQ.QVEX9LKM41JX2@est.tech> Message-ID: On Mon, 25 May 2020 at 15:30, Balázs Gibizer wrote: > On Mon, May 25, 2020 at 09:15, Artom Lifshitz > wrote: > > On Mon, May 25, 2020 at 7:48 AM Balázs Gibizer > > wrote: > >> > >> > >> > >> On Fri, May 22, 2020 at 21:51, Stephen Finucane > >> > >> wrote: > >> > Hi, > >> > > >> > [This is a topic from the PTG etherpad [0]. As the PTG time is > >> > intentionally kept short, let's try to discuss it or even > >> conclude it > >> > before the PTG] > >> > > >> > Our documentation in nova is suffering from bit rot, the ongoing > >> > effects of the documentation migration during Pike (I think), and > >> > general lack of attention. I've been working to tackle this but > >> > progress has been very slow. I suggested this a couple of PTGs > >> ago, > >> > but > >> > once again I'd like to explore going on a solo run with these by > >> > writing and self-approving (perhaps after a agreed interval) > >> > *multiple* > >> > large doc refactors. I've left some notes below, copied from the > >> > Etherpad, but in summary I believe this is the only realistic way > >> we > >> > will ever be able to fix our documentation. 
> >> > > >> > Cheers, > >> > Stephen > >> > > >> > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > >> > > >> > --- > >> > > >> > Documentation reviews are appreciated but are generally seen as > >> low > >> > priority. See: > >> > > >> > * https://review.opendev.org/667165 (docs: Rewrite quotas > >> > documentation) > >> > * https://review.opendev.org/667133 (docs: Rewrite host > >> aggregate, > >> > availability zone docs) > >> > * https://review.opendev.org/664396 (docs: Document how to > >> revert, > >> > confirm a cold migration) > >> > * https://review.opendev.org/635243 (docs: Rework the PCI > >> > passthrough > >> > guides) > >> > * https://review.opendev.org/640730 (docs: Rework all things > >> > metadata'y) > >> > * https://review.opendev.org/625878 (doc: Rework 'resize' user > >> doc) > >> > * ... > >> > > >> > >> Thank you working on all these documentations. > >> > >> > I (stephenfin) want permission to iterate on documentation and > >> merge > >> > unilaterally unless someone expresses a clear interest > >> > >> Honestly, self approve feels scary to me as it creates precedent. > >> I'm > >> happy to get pinged, pushed, harassed into reviewing the doc patches > >> instead. > > > > Agreed. FWIW, I'm willing to review those as well (though obviously my > > +1 won't be enough to do anything on its own). > > I can be convinced to easy up the rules for pure doc patches. Maybe one > +2 would be enough for pure doc patches to merge if there are +1 from > SMEs on the patch too. I would prefer one +2 rather than self approve. Totally hear you on the cycle time though. Last cycle I made policy a big review priority for me. I am not against making docs a review priority for me this time(*), given this can make a big difference for operators. 
(* I have not yet my mind up where I can make the biggest difference) Thanks, johnthetubaguy From john at johngarbutt.com Mon Jun 1 09:59:56 2020 From: john at johngarbutt.com (John Garbutt) Date: Mon, 1 Jun 2020 10:59:56 +0100 Subject: [nova][ptg] Runway process in Victoria In-Reply-To: <3A5WAQ.63TG1GRWBIIP@est.tech> References: <4AAJAQ.8C39QP5J2M3Z@est.tech> <3A5WAQ.63TG1GRWBIIP@est.tech> Message-ID: On Mon, 25 May 2020 at 15:29, Balázs Gibizer wrote: > On Fri, May 22, 2020 at 21:34, Stephen Finucane > wrote: > > On Mon, 2020-05-18 at 17:42 +0200, Balázs Gibizer wrote: > >> Hi, > >> > >> [This is a topic from the PTG etherpad [0]. As the PTG time is > >> intentionally kept short, let's try to discuss it or even conclude > >> it > >> before the PTG] > >> > >> > >> In the last 4 cycles we used a process called runway to focus and > >> timebox of the team's feature review effort. However compared to the > >> previous cycles in ussuri we did not really keep the process > >> running. > >> Just compare the length of the Log section of each etherpad > >> [1][2][3][4] to see the difference. So I have two questions: > >> > >> 1) Do we want to keep the process in Victoria? > >> > >> 2) If yes, how we can make the process running? > >> 2.1) How can we keep the runway etherpad up-to-date? > >> 2.2) How to make sure that the team is focusing on the reviews that > >> are > >> in the runway slots? > >> > >> Personally I don't want to advertise this process for contributors > >> if > >> the core team is not agreed and committed to keep the process > >> running > >> as it would lead to unnecessary disappointment. > > > > I tend to use this as a way to find things to review, though I think > > I'd be equally well served by a gerrit dashboard that filtered on bp > > topics. 
I've stopped using it because I already do a lot of reviews > > and > > have no issue finding more but also, more importantly, because I > > didn't > > find having *my* items in a runway significantly increased reviews > > from > > anyone != mriedem. If there were sign on from every core to prioritize > > this (perhaps with a reminder during the weekly meeting?) then I'd > > embrace it again. If not though, I'd rather we dropped it than make > > false promises. > > I totally agree to avoid the false promise. As far as I understood you > dropped reviewing things in the runway at least partly because the > focused review of your patches the runway promised you was not > delivered. This support my feeling that the core team needs to re-take > the agreement that we want to follow this process and that we are > willing to prioritize reviews in the slots. > > I as the PTL willing to do the scheduling of the slots (a.k.a > paperwork) and sure I can add a topic for the meeting agenda about > patches in the runway slots. > > I as a core willing to try again the runway process by prioritizing > reviewing things in the slots. > > Let's see other core will join in. I find it useful to know where we are agreeing to focus. I like how it helped get more items done done, a bit like how feature freeze focuses the mind. Although sometimes, once I looked at the patches, I found I had zero context on everything in the list, so I moved my attention elsewhere given limited bandwidth. I like the idea of using the new gerrit features instead of etherpads, so it's all on one dashboard. 
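A gerrit dashboard along those lines might look like the following rough sketch. The section names and blueprint topics are hypothetical, and it assumes Gerrit's standard custom-dashboard URL convention of a title followed by Section=query pairs:

```text
https://review.opendev.org/#/dashboard/?title=Nova+Runways
    &Runway+1=status:open+topic:bp/example-blueprint-one
    &Runway+2=status:open+topic:bp/example-blueprint-two
    &Runway+3=status:open+topic:bp/example-blueprint-three
```

Each Section=query pair renders as its own section of the dashboard, so the runway etherpad's slots could map one-to-one onto dashboard sections.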
Thanks, johnthetubaguy From jungleboyj at gmail.com Mon Jun 1 13:56:33 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 1 Jun 2020 08:56:33 -0500 Subject: [tc] changing meeting time In-Reply-To: References: Message-ID: <95b24527-2d45-f172-8601-76dddba720a3@gmail.com> On 5/29/2020 4:41 PM, Mohammed Naser wrote: > Hello everyone, > > The PTG is happening online next week, on the same day as our TC > monthly meeting. > > Since the PTG is only every six months, and we will meet the month > after, how does everyone feel about skipping this month's meeting and > considering the PTG as our meeting instead? > > Your comments are appreciated, > > Thanks, +1 This approach makes sense to me.  Thanks! From i at liuyulong.me Mon Jun 1 14:21:56 2020 From: i at liuyulong.me (=?utf-8?B?TElVIFl1bG9uZw==?=) Date: Mon, 1 Jun 2020 22:21:56 +0800 Subject: [Neutron] no L3 meeting this week (2020-June-01) Message-ID: Hi there, Due to the PTG meetings this week, we will cancel the L3 meeting this week. See you guys online next week. Regards, LIU Yulong -------------- next part -------------- An HTML attachment was scrubbed... URL: From m2elsakha at gmail.com Mon Jun 1 14:39:54 2020 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Mon, 1 Jun 2020 10:39:54 -0400 Subject: UC meeting this week Message-ID: Hi All, The UC meeting this week will be held as part of the PTG in the Bexar room on June 4th, 13-15 UTC Thanks Mohamed -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Jun 1 14:47:02 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 1 Jun 2020 10:47:02 -0400 Subject: [cinder] PTG schedule update Message-ID: <59755fb8-4178-3007-fd31-88c03de58686@gmail.com> The *times* all remain the same. 
We'll be using Meetpad (not zoom) for our meetings: https://meetpad.opendev.org/victoria-ptg-cinder About recording -- PLEASE READ: Because Meetpad does not have a native recording function, I will be using a third party chrome plugin to record the meeting. This means that there will not be a fancy red dot to warn you that a recording is happening. So when you join, *you should assume that you are being recorded*. I will remind everyone of this at the beginning of each session, but if you join late, of course, you won't hear the reminder. So for the duration of the PTG, by joining the Cinder PTG Meetpad sessions, you are consenting to be part of the recording. Contact me on #openstack-cinder if you have any questions. Just as a reminder about what the times are: Tuesday-Friday 1300-1600 UTC. The full Cinder schedule is on the official Cinder etherpad for the PTG: https://etherpad.opendev.org/p/victoria-ptg-cinder cheers, brian From rosmaita.fossdev at gmail.com Mon Jun 1 16:23:14 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 1 Jun 2020 12:23:14 -0400 Subject: [cinder] no weekly team meeting on 3 June Message-ID: <27087899-9577-fbe1-c29a-64abeb33ed2f@gmail.com> We won't be holding the weekly team meeting on Wednesday 3 June. Instead, we'll be meeting at the virtual PTG. See the etherpad for more information: https://etherpad.opendev.org/p/victoria-ptg-cinder From Arkady.Kanevsky at dell.com Mon Jun 1 16:27:16 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 1 Jun 2020 16:27:16 +0000 Subject: [swift, interop] Swift API level changes to reflect in Interop Message-ID: As we create new guidelines for Interop, We need to see what changes needed for object storage guidelines. So a few specific questions for Swift team: 1. What new Tempest tests added for Ussuri release? a. APIs for query and accessing older versions? Is it for S3 APIs or for swift API also? b. Any new or modified Tempest test for Etags? c. 
Any SIGUSR1 test coverage? d. New tempest tests for swift-ring-builder? 2. What are tempest tests deprecated for Ussuri release? a. Any tempest tests removed for auto_create_account_prefix? Any other API test coverage tests missed above? Thanks, Arkady From alifshit at redhat.com Mon Jun 1 17:11:15 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Mon, 1 Jun 2020 13:11:15 -0400 Subject: [Virtual PTG][nova][sdk] Future of microversion support in SDK Message-ID: tl;dr Artom to work on catching up the SDK to Nova's latest microversion, any new Nova microversion should come with a corresponding SDK patch. Hello folks, In the SDK/CLI room today we discussed making openstackclient (osc form now on) the main client for interacting with an OpenStack cloud and eventually getting rid of the python-*clients. For context, openstacksdk (sdk from now on) currently has microversion support similar to how python-*clients handle it: autonegotiate with the server to the greatest common microversion. Osc, on the other hand, defaults to 2.1 - anything higher than that needs to be specified on the command line. The long term plan is to move osc to consume sdk. In light of all this, we decided that the best way to achieve the goal stated in the first paragraph is to: 1. Catch up sdk to Nova. This isn't as daunting as it looks, as the latest microversion sdk currently supports is 2.72. This means there's "only" two cycles of catchup to do before we reach Nova's current max of 2.87. I (Artom) have signed on to do that work, and Mordred has kindly promised quick review. 2. Officialise a Nova policy of "any new microverison should come with a corresponding SDK patch." This is essentially what [1] is aiming for, and it already has wide buy-in from the Nova community. Converting osc to consume sdk is left to the osc/sdk community, though obviously any help there will be appreciated, I'm sure. Please keep me honest if I said anything wrong :) Cheers! 
[1] https://review.opendev.org/#/c/717722/ From gmann at ghanshyammann.com Mon Jun 1 17:34:06 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 01 Jun 2020 12:34:06 -0500 Subject: [swift, interop] Swift API level changes to reflect in Interop In-Reply-To: References: Message-ID: <17270f29780.ce5474ae60362.4615355983998475279@ghanshyammann.com> ---- On Mon, 01 Jun 2020 11:27:16 -0500 wrote ---- > As we create new guidelines for Interop, > We need to see what changes needed for object storage guidelines. > > So a few specific questions for Swift team: > > 1. What new Tempest tests added for Ussuri release? > a. APIs for query and accessing older versions? Is it for S3 APIs or for swift API also? > b. Any new or modified Tempest test for Etags? > c. Any SIGUSR1 test coverage? > d. New tempest tests for swift-ring-builder? > > 2. What are tempest tests deprecated for Ussuri release? > a. Any tempest tests removed for auto_create_account_prefix? We do not deprecate the Tempest test anytime. Tests can be removed if it satisfies the Tempest test-removal policy - https://docs.openstack.org/tempest/latest/test_removal.html Also adding test in Tempest is also not necessary to happen when API is introduced, it can be later so it is hard to tell when that API was introduced from the Tempest test addition. So from the Tempest side, it will not be a clear pic on what all API/capabilities are added/deprecated in which cycle. From the Tempest point of view, there is no difference between deprecated vs non-deprecated APIs, we keep testing it until those are not removed. For example, you can still run Tempest for Cinder v2 APIs. I think swift team can tell from their API changes not from what changed in Tempest. -gmann > > > Any other API test coverage tests missed above? 
> Thanks, > Arkady > From emilien at redhat.com Mon Jun 1 17:39:31 2020 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 1 Jun 2020 13:39:31 -0400 Subject: [tripleo] trying Jitsi for PTG day 2 Message-ID: Hi folks, Tomorrow we'll try to use https://meetpad.opendev.org/tripleo-ptg-victoria instead of Zoom. Today we reached ~50 attendees, let's see if Jitsi can handle it and if not we will roll back to the Zoom link. See you tomorrow! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon Jun 1 17:53:11 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 1 Jun 2020 19:53:11 +0200 Subject: [blazar] Victoria PTG Message-ID: Hello, As a reminder, the Blazar project (Resource Reservation as a Service) will meet several times this week as part of the Victoria virtual PTG. With our contributors located across the world, we have scheduled three separate meetings slots: 1. Tuesday June 2, 2020 6-8 UTC in Bexar room (Asia / Europe) 2. Tuesday June 2, 2020 13-14 UTC in Icehouse room (all timezones) 3. Thursday June 4, 2020 13-15 UTC in Cactus room (Europe / Americas) I would like to hold a Ussuri retrospective during session #2, since all active contributors have confirmed they are able to attend. All details can be found on the following Etherpad: https://etherpad.opendev.org/p/blazar-ptg-victoria Everyone who is interested in Blazar is welcome to join. I am looking forward to talking with you! Cheers, Pierre Riteau (priteau) From rico.lin.guanyu at gmail.com Mon Jun 1 18:27:14 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 2 Jun 2020 02:27:14 +0800 Subject: [multi-arch-sig] Reminder for PTG time table Message-ID: Dear all I just like to remind you that we reserved PTG room for the following schedule. Tuesday 0600-0800 UTC @ Icehouse Tuesday 1400-1600 UTC @ Icehouse room Thursday 0600-0800 UTC @ Icehouse Please join us if you find any valid time for you. 
For more information: Etherpad: https://etherpad.opendev.org/p/Multi-arch-2020-VPTG rest PTG information http://ptg.openstack.org/ptg.html -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Mon Jun 1 18:32:36 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 1 Jun 2020 18:32:36 +0000 Subject: [swift, interop] Swift API level changes to reflect in Interop In-Reply-To: <17270f29780.ce5474ae60362.4615355983998475279@ghanshyammann.com> References: <17270f29780.ce5474ae60362.4615355983998475279@ghanshyammann.com> Message-ID: <7b5dea25854649ec803099e897c67006@AUSX13MPS308.AMER.DELL.COM> Thanks for clarification Ghanshyam -----Original Message----- From: Ghanshyam Mann Sent: Monday, June 1, 2020 12:34 PM To: Kanevsky, Arkady Cc: openstack-discuss Subject: Re: [swift, interop] Swift API level changes to reflect in Interop [EXTERNAL EMAIL] ---- On Mon, 01 Jun 2020 11:27:16 -0500 wrote ---- > As we create new guidelines for Interop, > We need to see what changes needed for object storage guidelines. > > So a few specific questions for Swift team: > > 1. What new Tempest tests added for Ussuri release? > a. APIs for query and accessing older versions? Is it for S3 APIs or for swift API also? > b. Any new or modified Tempest test for Etags? > c. Any SIGUSR1 test coverage? > d. New tempest tests for swift-ring-builder? > > 2. What are tempest tests deprecated for Ussuri release? > a. Any tempest tests removed for auto_create_account_prefix? We do not deprecate the Tempest test anytime. Tests can be removed if it satisfies the Tempest test-removal policy - https://docs.openstack.org/tempest/latest/test_removal.html Also adding test in Tempest is also not necessary to happen when API is introduced, it can be later so it is hard to tell when that API was introduced from the Tempest test addition. 
So from the Tempest side, it will not be a clear pic on what all API/capabilities are added/deprecated in which cycle. From the Tempest point of view, there is no difference between deprecated vs non-deprecated APIs, we keep testing it until those are not removed. For example, you can still run Tempest for Cinder v2 APIs. I think swift team can tell from their API changes not from what changed in Tempest. -gmann > > > Any other API test coverage tests missed above? > Thanks, > Arkady > From rosmaita.fossdev at gmail.com Mon Jun 1 19:14:48 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 1 Jun 2020 15:14:48 -0400 Subject: [cinder] virtual happy hour poll Message-ID: <71a4f641-c882-7e17-3256-c346b8b6a93c@gmail.com> There was some interest expressed at the Cinder meeting last week for us to have some informal virtual face-to-face time during the PTG. I've put together a doodle poll to pick a day/time: https://doodle.com/poll/2emp9hru5neqzsei Please take the poll before 21:00 UTC tomorrow (2 June). There will be unlimited virtual beverages and virtual hors d'oeuvres. Unfortunately, if you would like a real beverage or snack, it's strictly bring-your-own. thanks, brian From rosmaita.fossdev at gmail.com Mon Jun 1 20:38:22 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 1 Jun 2020 16:38:22 -0400 Subject: [cinder] yet another PTG schedule update Message-ID: Apologies, hopefully this is the final update. Thanks to Jay for helping me test out recording in meetpad. We decided that it will be a way better experience for all concerned to move the Cinder part of the PTG to BlueJeans, in which we have had good experiences as a team holding virtual meetings in the past. Plus, the resulting recordings have been really good. As usual, they'll be posted to the Cinder Youtube channel, where anyone can access them. 
So, the key links for the Cinder sessions, Tuesday-Friday 1300-1600 UTC: etherpad: https://etherpad.opendev.org/p/victoria-ptg-cinder meeting: https://bluejeans.com/3228528973 BlueJeans has a groovy red dot that will indicate when the session is being recorded. Contact me on #openstack-cinder if you have any questions. cheers, brian From tburke at nvidia.com Mon Jun 1 21:06:16 2020 From: tburke at nvidia.com (Tim Burke) Date: Mon, 1 Jun 2020 14:06:16 -0700 Subject: [ptg][swift][ops] Operator feedback session tomorrow, 2 Jun 13:00-14:00 UTC Message-ID: Tomorrow(/later today, depending on timezone) the Swift community will have an operator feedback session during our PTG slot [0]. If you're running a swift cluster, large or small, we'd love to see you there! As we've done in the past [1], we'll have an etherpad going [2] -- feel free to start filling it in and adding discussion items today! Even if you won't make the meeting, we'd appreciate your feedback. Looking forward to hearing from you! Tim [0] 1300 UTC in https://www.openstack.org/ptg/rooms/liberty [1] https://wiki.openstack.org/wiki/Swift/Etherpads#List_of_Ops_Feedback_Etherpads [2] https://etherpad.opendev.org/p/swift-victoria-ops-feedback From gouthampravi at gmail.com Mon Jun 1 21:29:29 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 1 Jun 2020 14:29:29 -0700 Subject: [manila][ptg] Schedule change announcement Message-ID: Hello Zorillas, Thank you for joining today's call, and driving interesting and informative discussions! We have a schedule update to accommodate a conflicting session. On Tuesday, 2nd June 2020, we will meet between 15:00 and 15:45 and again between 21:00 and 23:00 UTC. 
On Friday, 5th June 2020, we will meet between 1300 UTC and 1700 UTC (the Happy Hour is between 1600 UTC and 1700 UTC) The schedule for Tuesday is below for your reference: === Tuesday, 2nd June 2020 === 15:00 - 15:45 UTC - cephfs updates (vkmc) 21:00 - 23:00 UTC - Add/Delete/Update security services for in-use share networks (dviroel/carloss) - Create shares with two (or more) subnets (dviroel) - TC Tags applications (gouthamr) You'll find the rest of the schedule updates made on the planning etherpad [1] Thanks! Goutham Pacha Ravi [1] https://etherpad.opendev.org/p/vancouver-ptg-manila-planning From whayutin at redhat.com Mon Jun 1 21:45:33 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 1 Jun 2020 15:45:33 -0600 Subject: [tripleo] trying Jitsi for PTG day 2 In-Reply-To: References: Message-ID: On Mon, Jun 1, 2020 at 11:40 AM Emilien Macchi wrote: > Hi folks, > > Tomorrow we'll try to use https://meetpad.opendev.org/tripleo-ptg-victoria > instead of Zoom. > Today we reached ~50 attendees, let's see if Jitsi can handle it and if > not we will roll back to the Zoom link. > > See you tomorrow! > -- > Emilien Macchi > Thanks Emilien, The schedule is updated to reflect the use of https://meetpad.opendev.org/tripleo-ptg-victoria Also remember tomorrow is COSTUME DAY :) Come dressed to impress -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Mon Jun 1 22:18:14 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 1 Jun 2020 15:18:14 -0700 Subject: [First Contact] [SIG] PTG Meeting Message-ID: Hello Everyone! I updated our PTG URL to be a meetpad[1] meeting room. If that doesn't work, we can fall back to zoom. There isn't much on the agenda and a lot of it will be shaped by who is around. Hope to see you all there at 23:00 UTC! -Kendall (diablo_rojo) [1] https://meetpad.opendev.org/victoria-ptg-first-contact -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsneddon at redhat.com Mon Jun 1 22:34:01 2020 From: dsneddon at redhat.com (dsneddon at redhat.com) Date: Mon, 01 Jun 2020 15:34:01 -0700 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay In-Reply-To: References: Message-ID: You will have to target two IP addresses with DHCP relay if you are using Ironic Inspector. The first is the IP where Ironic Inspector is listening with dnsmasq, usually the IP of the host itself. I know this doesn't lend itself to HA scenarios, but you might also be able to forward to the broadcast IP of the subnet where the Ironic Inspector will be running (I haven't tested this, but it is a common use case for DHCP relay). The second IP address is that of the Neutron DHCP agent, and that will be used for deploying bare metal nodes. IIRC, this IP is shared with the Neutron router for the network if you are using the L3 agent as well. If you are not running Ironic Inspector (and manually entering in baremetal host details instead), then you can forward DHCP relay only to the Neutron DHCP agent. Both of these IP addresses will be on the "root" subnet which is associated with the segment with the controller node(s). It sounds like you created a second subnet, but I'm not sure if you created the second subnet on a different segment from the first subnet. In Neutron routed networking, the segments determine whether a subnet is local or remote to the controller node(s). Typically the first segment would be the one local to the controller(s). Are you sure you enabled the segments plugin and created your second subnet on a new segment? Another approach which does not involve DHCP relay is to deploy DHCP agents locally on compute nodes local to each segment. This way all DHCP will be done within the same L2 domain, and you will not have to configure DHCP relay on your router serving each segment/subnet. 
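[As a concrete sketch of the segment workflow described above: the commands below assume the Neutron segments plugin is enabled, and all names and CIDRs (the "provisioning" network, "physnet2", "segment2", 10.100.0.0/16) are illustrative, not taken from this thread.]

```shell
# Sketch: add a second segment to an existing routed network and attach
# a subnet to it, so the new rack gets its own L2 domain.
openstack network segment list --network provisioning   # note the auto-created first segment
openstack network segment create --network provisioning \
    --physical-network physnet2 --network-type flat segment2
openstack subnet create --network provisioning --network-segment segment2 \
    --subnet-range 10.100.0.0/16 provisioning-segment2
```

With a layout like this, the router serving segment2 would relay DHCP toward the agents on the segment that hosts the controller(s), as described above.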
See the docs for more info: https://docs.openstack.org/newton/networking-guide/config-routed-networks.html -Dan On Fri, 2020-05-29 at 10:47 -0600, Thomas King wrote: > In the Triple-O docs for unicast DHCP relay, it doesn't exactly say > which IP address to target. Without deploying Triple-O, I'm not clear > if the relay IP should be the bridge interface or the DHCP device. > > The first method makes sense because the gateway for that subnet > wouldn't be connected to the Ironic controller by layer 2 (unless we > used VXLAN over the physical network). > > As an experiment, I created a second subnet on my provisioning > network. The original DHCP device port now has two IP addresses, one > on each subnet. That makes the second method possible if I targeted > its original IP address. > > Thanks for the help and please let me know which method is correct. > > Tom King > > On Fri, May 29, 2020 at 3:15 AM Dan Sneddon > wrote: > > You probably want to enable Neutron segments and use the Neutron > > routed networks feature so you can use different subnets on > > different segments (layer 2 domains AKA VLANs) of the same network. > > You specify different values such as IP allocation pools and router > > address(es) for each subnet, and Ironic and Neutron will do the > > right thing. You need to enable segments in the Neutron > > configuration and restart the Neutron server. I don’t think you > > will have to recreate the network. Behind the scenes, dnsmasq will > > be configured with multiple subnets and address scopes within the > > Neutron DHCP agent and the Ironic Inspector agent. > > > > Each segment/subnet will be given a different VLAN ID. As Dmitry > > mentioned, TripleO uses that method for the provisioning network, > > so you can use that as an example. The provisioning network in > > TripleO is the one referred to as the “control plane” network. 
> > > > -Dan > > > > On Fri, May 29, 2020 at 12:51 AM Dmitry Tantsur < > > dtantsur at redhat.com> wrote: > > > Hi Tom, > > > > > > I know for sure that people are using DHCP relay with ironic, I > > > think the TripleO documentation may give you some hints (adjusted > > > to your presumably non-TripleO environment): > > > http://tripleo.org/install/advanced_deployment/routed_spine_leaf_network.html#dhcp-relay-configuration > > > > > > Dmitry > > > > > > On Thu, May 28, 2020 at 11:06 PM Amy Marrich > > > wrote: > > > > Hey Tom, > > > > > > > > Forwarding to the OpenStack discuss list where you might get > > > > more assistance. > > > > > > > > Thanks, > > > > > > > > Amy (spotz) > > > > > > > > On Thu, May 28, 2020 at 3:32 PM Thomas King < > > > > thomas.king at gmail.com> wrote: > > > > > Good day, > > > > > > > > > > We have Ironic running and connected via VLANs to nearby > > > > > machines. We want to extend this to other parts of our > > > > > product development lab without extending VLANs. > > > > > > > > > > Using DHCP relay, we would point to a single IP address to > > > > > serve DHCP requests but I'm not entirely sure of the Neutron > > > > > network/subnet configuration, nor which IP address should be > > > > > used for the relay agent on the switch. > > > > > > > > > > Is DHCP relay supported by Neutron? 
> > > > > > > > > > My guess is to add a subnet in the provisioning network and > > > > > point the relay agent to the linuxbridge interface's IP: > > > > > 14: brq467f6775-be: mtu > > > > > 1500 qdisc noqueue state UP group default qlen 1000 > > > > > link/ether e2:e9:09:7f:89:0b brd ff:ff:ff:ff:ff:ff > > > > > inet 10.10.0.1/16 scope global brq467f6775-be > > > > > valid_lft forever preferred_lft forever > > > > > inet6 fe80::5400:52ff:fe85:d33d/64 scope link > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > Thank you, > > > > > Tom King > > > > > _______________________________________________ > > > > > openstack-mentoring mailing list > > > > > openstack-mentoring at lists.openstack.org > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-mentoring > > > > -- > > Dan Sneddon | Senior Principal Software Engineer > > dsneddon at redhat.com | redhat.com/cloud > > dsneddon:irc | @dxs:twitter -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter From masayuki.igawa at gmail.com Mon Jun 1 23:23:08 2020 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Tue, 02 Jun 2020 08:23:08 +0900 Subject: [qa] No office hour this week Message-ID: <45547f5d-2f3c-4d25-8294-b09fa5976b10@www.fastmail.com> Hi team, We won't be holding the weekly office hour on Tuesday 2 June since we'll be meeting at the virtual PTG instead. Please find the etherpad for more information: https://etherpad.opendev.org/p/qa-victoria-ptg -- Masayuki Igawa From johnsomor at gmail.com Tue Jun 2 00:11:41 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 1 Jun 2020 17:11:41 -0700 Subject: [all][infra] Upcoming removal of preinstalled pip and virtualenv from base images In-Reply-To: <20200530004314.GA1770592@fedora19.localdomain> References: <20200530004314.GA1770592@fedora19.localdomain> Message-ID: Ian, I ran the Octavia scenario jobs with ubuntu-bionic-plain without any errors[1]. 
Thanks for the heads up! Michael [1] https://review.opendev.org/#/c/732456/ On Fri, May 29, 2020 at 5:47 PM Ian Wienand wrote: > > Hello, > > This is to notify the community of the planned upcoming removal of the > "pip-and-virtualenv" element from our infra image builds. > > This is part of the "cleanup test node python" spec documented at [1]. > > tl;dr > ----- > > - pip and virtualenv tools will no longer be pre-installed on the base > images we use for testing, but will be installed by jobs themselves. > - most current jobs will depend on common Zuul base jobs that have > been updated to install these tools already *i.e. in most cases > there should be nothing to do* > - in general, jobs should ensure they depend on roles that install > these tools if they require them > - if you do find your job failing due to the lack of either of these > tools, use the ensure-pip or ensure-virtualenv roles provided in > zuul-jobs > - we have "-plain" node types that implement this now. If you think > there might be a problem, switch the job to run against one of > these node types described below for testing. > > History > ------- > > The "pip-and-virtualenv" element is part of diskimage-builder [2] and > installs the latest version of pip and virtualenv, and related tools > like setuptools, into the system environment for the daily builds. > This has a long history of working around various issues, but has > become increasingly problematic to maintain. > > One of the more noticeable effects is that to prevent the distribution > packages overwriting the upstream installs, we put various pip, > setuptools and virtualenv packages on "hold" in CI images. This > prevents tests reinstalling packaged tools which can create, in > short, a big mess. Both destroying the extant environment and having > the tools on hold have been problems for various projects at various > times. > > Another problem is that what happens when you call "pip" or > "virtualenv" has diverged.
It used to be that "pip" would install the > package under Python2, while pip3 would install under python3. > However, modern distributions now have "pip" installing under Python > 3. To keep having "pip" install Python 2 packages on these platforms > is not just wrong, but drags in Python 2 dependencies to our images > that aren't required. > > The addition of Python 3's venv and the split with virtualenv makes > things even more confusing again. > > Future > ------ > > As famously said "the only winning move is not to play". [3] > > By dropping this element, images will not have non-packaged > pip/virtualenv/setuptools/etc. pre-installed. No packages will be on > hold. > > The "ensure-pip" role [4] will ensure that dependencies for "pip:" > commands will work. Because of the way base jobs are set up, it is > most likely that this role has already run. If you wish to use a > virtual environment to install a tool, I would suggest using the > "ensure_pip_virtualenv_cmd" this role exports. This will default to > "python3 -m venv". An example is [5]. > > In a Python 3 world, you probably do not *need* virtualenv; "python3 -m > venv" is available and works after the "ensure-pip" role is run. > However, if you require some of the features that virtualenv provides > that venv does not (explained at [6]) there is a role > "ensure-virtualenv" [7]. For example, we do this on the devstack > branches because it is common to use "virtualenv" there due to the > long history [8]. > > If you need specific versions of pip or virtualenv, etc. beyond the > system-packaged versions installed with the above, you should have > your job configure these. There is absolutely no problem with jobs > installing differing versions of pip/virtualenv/etc in any way they > want -- we just don't want the base images to have any of that by > default. Of course, you should consider if you're building/testing > something that is actually useful outside the gate, but that is a > global concern.
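[The stdlib-venv flow described above can be tried locally; a minimal sketch, with an illustrative /tmp path:]

```shell
# Create a virtual environment with the stdlib venv module -- no
# pre-installed virtualenv needed. venv bootstraps pip via ensurepip,
# so the environment's own pip is immediately usable.
python3 -m venv /tmp/demo-venv
/tmp/demo-venv/bin/python -m pip --version
```

This is essentially what "python3 -m venv" via the ensure-pip role gives a job once the images no longer carry the pip-and-virtualenv element.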
> > Testing > ------- > > We have built parallel nodes with the suffix "-plain" where you can > test any jobs in the new environment. For example [9] speculatively > tests devstack. The node types available are > > centos-7-plain > centos-8-plain > ubuntu-xenial-plain > ubuntu-bionic-plain > > The newer focal images do not have pip pre-installed, neither do the > faster moving Fedora images, any SUSE images, or any ARM64 images. > > Rollout > ------- > > We would like to make the switch soon, to shake out any issues early > in the cycle. This would mean on or about the 8th June. > > Thanks, > > -i > > [1] https://docs.opendev.org/opendev/infra-specs/latest/specs/cleanup-test-node-python.html > [2] https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/pip-and-virtualenv > [3] https://en.wikipedia.org/wiki/WarGames > [4] https://zuul-ci.org/docs/zuul-jobs/python-roles.html#role-ensure-pip > [5] https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/bindep/tasks/install.yaml#L9 > [6] https://virtualenv.pypa.io/en/latest/ > [7] https://zuul-ci.org/docs/zuul-jobs/python-roles.html#role-ensure-virtualenv > [8] https://opendev.org/openstack/devstack/commit/23cfb9e6ebc63a4da4577c0ef9e3450b9c946fa7 > [9] https://review.opendev.org/#/c/712211/11/.zuul.yaml > > From dsneddon at redhat.com Tue Jun 2 00:39:56 2020 From: dsneddon at redhat.com (dsneddon at redhat.com) Date: Mon, 01 Jun 2020 17:39:56 -0700 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay In-Reply-To: References: Message-ID: <09d910242b7ebb0d7d8c495c3914c25ff24a1bc1.camel@redhat.com> The use case for routed networks is when you have multiple distinct subnets which are not connected at layer 2 and only have connectivity to one another via the router gateways on each network. A segment can be thought of as a VLAN, although depending on topology a different VLAN ID is not always used. 
The key is that there is no layer 2 connectivity between segments, traffic has to be routed between them. The situation where you would use DHCP relay is when you are not assigning DHCP agents to the compute nodes, and you have compute nodes on segments that the controllers are not attached to. In that case, DHCP requests from all segments that are not attached to the controller(s) need to be forwarded to the controllers via DHCP relay. If you have a flat network, then you have no need for DHCP relay, the DHCP agents can receive and respond to requests over layer 2. This applies even if you have multiple subnets on the same segment. On Mon, 2020-06-01 at 18:02 -0600, Thomas King wrote: > We do have the Ironic inspector enabled but mainly use out-of-band > such as iDRAC. > > I am indeed not using segments. I'll need to research that a bit > more. > > One important note, we are only using provider networks with no > Neutron routers. All routing is done on the physical network which > aligns with the docs for segments. The provisioning subnet is on > 10.10.0.0/16 for the directly attached nodes. As a test, I created a > second subnet, 10.100.0.0/16, on the same Neutron network with DHCP > enabled, so now I have two subnets on the same network and Neutron > DHCP port. However, if DHCP relay requires different segments per > remote network... > > The Networking service defines a segment using the following > > components: > > > > Unique physical network name > > Segmentation type > > Segmentation ID > > Does having unique physical network names also mean unique physical > interfaces? > Does this mean no flat network for segments? > If I create 10.100.0.0/16 in rack A1 and the controller is in D30, am > I pointing the DHCP relay to the DHCP agent's 10.10.0.0/16 IP > address? 
> > > +--------------------+-------+---------------------------+ > | Agent Type | Alive | Binary | > +--------------------+-------+---------------------------+ > | Baremetal Node | :-) | ironic-neutron-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Linux bridge agent | :-) | neutron-linuxbridge-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Metering agent | :-) | neutron-metering-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | DHCP agent | :-) | neutron-dhcp-agent | > | L3 agent | :-) | neutron-l3-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Metadata agent | :-) | neutron-metadata-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > | Baremetal Node | :-) | ironic-neutron-agent | > +--------------------+-------+---------------------------+ > > On Mon, Jun 1, 2020 at 4:34 PM wrote: > > You will have to target two IP addresses with DHCP relay if you are > > using Ironic Inspector. The first is the IP where Ironic Inspector > > is > > listening with dnsmasq, usually the IP of the host itself. I know > > this > > doesn't lend itself to HA scenarios, but you might also be able to > > forward to the broadcast IP of the subnet where the Ironic > > Inspector > > will be running (I haven't tested this, but it is a common use case > > for > > DHCP relay). > > > > The second IP address is that of the Neutron DHCP agent, and that > > will > > be used for deploying bare metal nodes. IIRC, this IP is shared > > with > > the Neutron router for the network if you are using the L3 agent as > > well. 
> > > > If you are not running Ironic Inspector (and manually entering in > > baremetal host details instead), then you can forward DHCP relay > > only > > to the Neutron DHCP agent. > > > > Both of these IP addresses will be on the "root" subnet which is > > associated with the segment with the controller node(s). > > > > It sounds like you created a second subnet, but I'm not sure if you > > created the second subnet on a different segment from the first > > subnet. > > In Neutron routed networking, the segments determine whether a > > subnet > > is local or remote to the controller node(s). Typically the first > > segment would be the one local to the controller(s). Are you sure > > you > > enabled the segments plugin and created your second subnet on a new > > segment? > > > > Another approach which does not involve DHCP relay is to deploy > > DHCP > > agents locally on compute nodes local to each segment. This way all > > DHCP will be done within the same L2 domain, and you will not have > > to > > configure DHCP relay on your router serving each segment/subnet. > > > > See the docs for more info: > > https://docs.openstack.org/newton/networking-guide/config-routed-networks.html > > > > -Dan > > > > On Fri, 2020-05-29 at 10:47 -0600, Thomas King wrote: > > > In the Triple-O docs for unicast DHCP relay, it doesn't exactly > > say > > > which IP address to target. Without deploying Triple-O, I'm not > > clear > > > if the relay IP should be the bridge interface or the DHCP > > device. > > > > > > The first method makes sense because the gateway for that subnet > > > wouldn't be connected to the Ironic controller by layer 2 (unless > > we > > > used VXLAN over the physical network). > > > > > > As an experiment, I created a second subnet on my provisioning > > > network. The original DHCP device port now has two IP addresses, > > one > > > on each subnet. That makes the second method possible if I > > targeted > > > its original IP address. 
> > > > > > Thanks for the help and please let me know which method is > > correct. > > > > > > Tom King > > > > > > On Fri, May 29, 2020 at 3:15 AM Dan Sneddon > > > wrote: > > > > You probably want to enable Neutron segments and use the > > Neutron > > > > routed networks feature so you can use different subnets on > > > > different segments (layer 2 domains AKA VLANs) of the same > > network. > > > > You specify different values such as IP allocation pools and > > router > > > > address(es) for each subnet, and Ironic and Neutron will do the > > > > right thing. You need to enable segments in the Neutron > > > > configuration and restart the Neutron server. I don’t think > > you > > > > will have to recreate the network. Behind the scenes, dnsmasq > > will > > > > be configured with multiple subnets and address scopes within > > the > > > > Neutron DHCP agent and the Ironic Inspector agent. > > > > > > > > Each segment/subnet will be given a different VLAN ID. As > > Dmitry > > > > mentioned, TripleO uses that method for the provisioning > > network, > > > > so you can use that as an example. The provisioning network in > > > > TripleO is the one referred to as the “control plane” network. > > > > > > > > -Dan > > > > > > > > On Fri, May 29, 2020 at 12:51 AM Dmitry Tantsur < > > > > dtantsur at redhat.com> wrote: > > > > > Hi Tom, > > > > > > > > > > I know for sure that people are using DHCP relay with ironic, > > I > > > > > think the TripleO documentation may give you some hints > > (adjusted > > > > > to your presumably non-TripleO environment): > > > > > > > http://tripleo.org/install/advanced_deployment/routed_spine_leaf_network.html#dhcp-relay-configuration > > > > > > > > > > Dmitry > > > > > > > > > > On Thu, May 28, 2020 at 11:06 PM Amy Marrich > > > > > > > wrote: > > > > > > Hey Tom, > > > > > > > > > > > > Forwarding to the OpenStack discuss list where you might > > get > > > > > > more assistance. 
> > > > > > > > > > > > Thanks, > > > > > > > > > > > > Amy (spotz) > > > > > > > > > > > > On Thu, May 28, 2020 at 3:32 PM Thomas King < > > > > > > thomas.king at gmail.com> wrote: > > > > > > > Good day, > > > > > > > > > > > > > > We have Ironic running and connected via VLANs to nearby > > > > > > > machines. We want to extend this to other parts of our > > > > > > > product development lab without extending VLANs. > > > > > > > > > > > > > > Using DHCP relay, we would point to a single IP address > > to > > > > > > > serve DHCP requests but I'm not entirely sure of the > > Neutron > > > > > > > network/subnet configuration, nor which IP address should > > be > > > > > > > used for the relay agent on the switch. > > > > > > > > > > > > > > Is DHCP relay supported by Neutron? > > > > > > > > > > > > > > My guess is to add a subnet in the provisioning network > > and > > > > > > > point the relay agent to the linuxbridge interface's IP: > > > > > > > 14: brq467f6775-be: mtu > > > > > > > 1500 qdisc noqueue state UP group default qlen 1000 > > > > > > > link/ether e2:e9:09:7f:89:0b brd ff:ff:ff:ff:ff:ff > > > > > > > inet 10.10.0.1/16 scope global brq467f6775-be > > > > > > > valid_lft forever preferred_lft forever > > > > > > > inet6 fe80::5400:52ff:fe85:d33d/64 scope link > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > > > Thank you, > > > > > > > Tom King > > > > > > > _______________________________________________ > > > > > > > openstack-mentoring mailing list > > > > > > > openstack-mentoring at lists.openstack.org > > > > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-mentoring > > > > > > > > -- > > > > Dan Sneddon | Senior Principal Software Engineer > > > > dsneddon at redhat.com | redhat.com/cloud > > > > dsneddon:irc | @dxs:twitter -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter From fungi at yuggoth.org Tue Jun 2 01:54:07 
2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 2 Jun 2020 01:54:07 +0000 Subject: [all] Meetpad CPU Utilization and "Low Bandwidth" Mode In-Reply-To: <20200601010104.rpxsnewfsyoeacnd@yuggoth.org> References: <20200601010104.rpxsnewfsyoeacnd@yuggoth.org> Message-ID: <20200602015407.njfacd6uq6anqsfg@yuggoth.org> I've heard a number of second-hand reports of sound quality issues, but just wanted to convey some practical personal experience: Try going to the vertical ellipsis (⋮) menu in the bottom-right corner of the Jitsi-Meet window, and select the "Manage video quality" option, then select "Low bandwidth" mode. This will disable video streams from all participants for you. You may have some luck with its "Low definition" mode, but for me I needed to just dispense with video entirely on the limited capacity system I was using. A short explanation is that the video codecs used by Jitsi-Meet benefit a lot from hardware acceleration. Many of us running open source operating systems may lack built-in support for these, and older/lower-end hardware can simply be overtaxed by it. When you run out of available CPU cycles to process the audio streams, they start to cut in and out. Closing other CPU-intensive applications can also help, or moving to a dedicated machine if you have the luxury. As mentioned elsewhere, modern WebKit-based browsers like Chromium do somewhat better at handling this, so switching to one of those for your conference window might help too. Further, the Jitsi-Meet mobile app is reported to get good performance on some smartphones. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:
From soulxu at gmail.com Tue Jun 2 02:09:59 2020 From: soulxu at gmail.com (Alex Xu) Date: Tue, 2 Jun 2020 10:09:59 +0800 Subject: [all] Meetpad CPU Utilization and "Low Bandwidth" Mode In-Reply-To: <20200602015407.njfacd6uq6anqsfg@yuggoth.org> References: <20200601010104.rpxsnewfsyoeacnd@yuggoth.org> <20200602015407.njfacd6uq6anqsfg@yuggoth.org> Message-ID: Looks like the audio doesn't work for users in the PRC. Does Jitsi-Meet's audio work for anyone else from the PRC? Thanks Alex Jeremy Stanley wrote on Tue, 2 Jun 2020 at 09:58: > I've heard a number of second-hand reports of sound quality issues, > but just wanted to convey some practical personal experience: > > Try going to the vertical ellipsis (⋮) menu in the bottom-right > corner of the Jitsi-Meet window, and select the "Manage video > quality" option, then select "Low bandwidth" mode. This will disable > video streams from all participants for you. You may have some luck > with its "Low definition" mode, but for me I needed to just dispense > with video entirely on the limited capacity system I was using. > > A short explanation is that the video codecs used by Jitsi-Meet > benefit a lot from hardware acceleration. Many of us running open > source operating systems may lack built-in support for these, and > older/lower-end hardware can simply be overtaxed by it. When you run > out of available CPU cycles to process the audio streams, they start > to cut in and out. Closing other CPU-intensive applications can also > help, or moving to a dedicated machine if you have the luxury. As > mentioned elsewhere, modern WebKit-based browsers like Chromium do > somewhat better at handling this, so switching to one of those for > your conference window might help too. Further, the Jitsi-Meet > mobile app is reported to get good performance on some smartphones. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From yumeng_bao at yahoo.com Tue Jun 2 03:11:35 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Tue, 2 Jun 2020 11:11:35 +0800 Subject: [cyborg][ptg] Cyborg PTG Starts in less than 3 hours! References: Message-ID: Hi team, We've got less than three hours to go before our PTG session starts! Please check the up-to-date schedule: https://etherpad.opendev.org/p/cyborg-victoria-goals. You can find topic schedules and zoom/meetpad URLs there! See you soon! Regards, Yumeng -------------- next part -------------- An HTML attachment was scrubbed... URL:
From yumeng_bao at yahoo.com Tue Jun 2 03:15:03 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Tue, 2 Jun 2020 11:15:03 +0800 Subject: [cyborg] Cyborg IRC weekly meeting cancelled on June 4 References: <4FDE8CAD-303A-427D-B466-4CF562AF97BB.ref@yahoo.com> Message-ID: <4FDE8CAD-303A-427D-B466-4CF562AF97BB@yahoo.com> Hi, The Cyborg weekly meeting on June 4 will be cancelled due to the virtual PTG event. All Cyborg members and cores will be attending the PTG events. Next week, we will resume the weekly meeting. See you next week! Regards, Yumeng
From yasufum.o at gmail.com Tue Jun 2 04:58:04 2020 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 2 Jun 2020 13:58:04 +0900 Subject: [tacker][ptg] PTG schedule update Message-ID: <83c0e129-7b1c-5014-309b-087158d8850b@gmail.com> Hi team, I've added a timetable simply because we have many topics during this vPTG[1]. Thank you for your proposals and joining. I'd also like to skip the IRC meeting this week because we can have a talk at the vPTG instead.
On Mon, Jun 01, 2020 at 10:46:11AM +0100, Rodolfo Alonso Hernandez wrote: > Hello: > > This is the Neutron bug report of week 22 (25 May - 31 May). > > Untriaged: > - "OVN Router sending ARP instead of sending traffic to the gateway" > * https://bugs.launchpad.net/neutron/+bug/1881041 > > - "the accepted-egress-direct-flows can't be deleted when the VM is deleted" > * https://bugs.launchpad.net/neutron/+bug/1881070 > * Assigned to Li YaJie > > High: > - ""rpc_response_max_timeout" configuration variable not present in Linux > Bridge agent" > * https://bugs.launchpad.net/neutron/+bug/1880934 > * Assigned to Tamerlan Abu > * Patch proposed: https://review.opendev.org/#/c/731194 > > - "[tempest] Error in > "test_reuse_ip_address_with_other_fip_on_other_router" with duplicated > floating IP" > * https://bugs.launchpad.net/neutron/+bug/1880976 > * Assigned to Rodolfo > * Patch proposed: https://review.opendev.org/731267 > > - "Neutron ovs agent fails on rpc_loop iteration:1" > * https://bugs.launchpad.net/neutron/+bug/1881424 > * Assigned to Terry Wilson > * Patch proposed: https://review.opendev.org/#/c/732081 > > Medium: > - "interrupted vlan connection after live migration" > * https://bugs.launchpad.net/neutron/+bug/1880455 > * Unassigned > > - "SSH issues during ML2/OVS to ML2/OVN migration" > * https://bugs.launchpad.net/neutron/+bug/1881029 > * Assigned to Brian Haley > * Patch proposed: https://review.opendev.org/#/c/731367/ > > - "[OVN] Router availability zones support" > * https://bugs.launchpad.net/neutron/+bug/1881095 > * Assigned to Lucas Alvares > * Patch proposed: https://review.opendev.org/#/c/727791/ > > - "[OVS][FW] Remote SG IDs left behind when a SG is removed" > * https://bugs.launchpad.net/neutron/+bug/1881157 > * Assigned to Rodolfo > > Low: > - "Comments for stateless security group are misleading" > * https://bugs.launchpad.net/neutron/+bug/1880691 > * Assigned to Slawek > * Patch proposed: https://review.opendev.org/#/c/730793/ > > - 
"[fullstack] Error assigning IPv4 (network address) in > "test_gateway_ip_changed" > * https://bugs.launchpad.net/neutron/+bug/1880845 > * Assigned to Rodolfo > > - "Creating FIP takes time" > * https://bugs.launchpad.net/neutron/+bug/1880969 > * Unassigned > > - "[OVN] In stable branches we don't run neutron-tempest-plugin tests" > * https://bugs.launchpad.net/neutron/+bug/1881283 > * Unassigned > > - "Neutron agents process name changed after neutron-server setproctitle > change" > * https://bugs.launchpad.net/neutron/+bug/1881297 > * Unassigned > > Wishlist: > - "[RFE]L3 Router should support ECMP" > * https://bugs.launchpad.net/neutron/+bug/1880532 > * Assigned to XiaoYu Zhu > * Discussed in the Neutron Drivers Meeting on May 25 > > Regards. -- Slawek Kaplonski Senior software engineer Red Hat From yasufum.o at gmail.com Tue Jun 2 07:44:53 2020 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 2 Jun 2020 16:44:53 +0900 Subject: [tacker][ptg] PTG schedule update In-Reply-To: <83c0e129-7b1c-5014-309b-087158d8850b@gmail.com> References: <83c0e129-7b1c-5014-309b-087158d8850b@gmail.com> Message-ID: <944c5c6c-77cc-23a2-6d8c-bc2ebfd09ae6@gmail.com> Hi tacker team, I have booked two more days because it seems we cannot complete all topics in two days. Please let me know if you have topics for discussion but cannot join on the extra two days. I would like to consider exchanging the order of the topics. - 02 June: 6am-8am UTC (3pm-5pm JST/KST) - 03 June: 6am-8am UTC (3pm-5pm JST/KST) - 04 June: 6am-8am UTC (3pm-5pm JST/KST) - 05 June: 6am-8am UTC (3pm-5pm JST/KST) Thanks, Yasufumi On 2020/06/02 13:58, Yasufumi Ogawa wrote: > Hi team, > > I've added a timetable simply because we have many topics during this > vPTG[1]. Thank you for your proposals and joining. > > I'd like also to skip IRC meeting this week because we can have a talk > at vPTG instead.
> > [1] https://etherpad.opendev.org/p/Tacker-PTG-Victoria > > Thanks, > Yasufumi From thierry at openstack.org Tue Jun 2 09:06:14 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 2 Jun 2020 11:06:14 +0200 Subject: [all] Meetpad CPU Utilization and "Low Bandwidth" Mode In-Reply-To: References: <20200601010104.rpxsnewfsyoeacnd@yuggoth.org> <20200602015407.njfacd6uq6anqsfg@yuggoth.org> Message-ID: Alex Xu wrote: > Looks like the audio doesn't work for PRC users. Does Jitsi-Meet's audio > work for anyone else from PRC? Can't comment on usage from PRC, but I hit a case where I had no audio at all... Closing and rejoining the meeting worked around the issue for me. Also it seems that Chrome/Chromium works slightly better than Firefox. -- Thierry Carrez (ttx) From balazs.gibizer at est.tech Tue Jun 2 09:36:05 2020 From: balazs.gibizer at est.tech (Balázs Gibizer) Date: Tue, 02 Jun 2020 11:36:05 +0200 Subject: [nova] Weekly meeting canceled due to PTG Message-ID: <5CLABQ.9ZSZT41XJJMZ@est.tech> Hi, There won't be a weekly IRC meeting this week. Cheers, gibi From balazs.gibizer at est.tech Tue Jun 2 09:43:28 2020 From: balazs.gibizer at est.tech (Balázs Gibizer) Date: Tue, 02 Jun 2020 11:43:28 +0200 Subject: [Virtual PTG][nova][sdk] Future of microversion support in SDK In-Reply-To: References: Message-ID: On Mon, Jun 1, 2020 at 13:11, Artom Lifshitz wrote: > tl;dr Artom to work on catching up the SDK to Nova's latest > microversion; any new Nova microversion should come with a > corresponding SDK patch. > > Hello folks, > > In the SDK/CLI room today we discussed making openstackclient (osc > from now on) the main client for interacting with an OpenStack cloud > and eventually getting rid of the python-*clients. > > For context, openstacksdk (sdk from now on) currently has microversion > support similar to how python-*clients handle it: autonegotiate with > the server to the greatest common microversion.
Osc, on the other > hand, defaults to 2.1 - anything higher than that needs to be > specified on the command line. The long term plan is to move osc to > consume sdk. > > In light of all this, we decided that the best way to achieve the goal > stated in the first paragraph is to: > > 1. Catch up sdk to Nova. This isn't as daunting as it looks, as the > latest microversion sdk currently supports is 2.72. This means there's > "only" two cycles of catchup to do before we reach Nova's current max > of 2.87. I (Artom) have signed on to do that work, and Mordred has > kindly promised quick review. There is more work than that, as support for some of the older microversions is also missing. See the etherpad [2] > > 2. Officialise a Nova policy of "any new microversion should come with > a corresponding SDK patch." This is essentially what [1] is aiming > for, and it already has wide buy-in from the Nova community. > > Converting osc to consume sdk is left to the osc/sdk community, though > obviously any help there will be appreciated, I'm sure. > > Please keep me honest if I said anything wrong :) > > Cheers! > > [1] https://review.opendev.org/#/c/717722/ > [2] https://etherpad.opendev.org/p/compute-api-microversion-gap-in-osc From ralonsoh at redhat.com Tue Jun 2 09:58:53 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 2 Jun 2020 10:58:53 +0100 Subject: [neutron][qos] Bi-weekly meeting canceled due to PTG Message-ID: Hello: There won't be a Neutron QoS meeting today. Regards. Rodolfo Alonso. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From licanwei_cn at 163.com Tue Jun 2 11:01:31 2020 From: licanwei_cn at 163.com (licanwei) Date: Tue, 2 Jun 2020 19:01:31 +0800 (GMT+08:00) Subject: [Watcher] no IRC meeting tomorrow Message-ID: <6c081858.c5d1.17274b18a37.Coremail.licanwei_cn@163.com> licanwei_cn | Email: licanwei_cn at 163.com | Signature customized by NetEase Mail Master -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Tue Jun 2 11:34:42 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 2 Jun 2020 12:34:42 +0100 Subject: [all] OpenStack versions that can't practically be run with Python 3 ? Message-ID: Does anyone know the most recent OpenStack version that _can't_ easily be run with Python 3? I think the full answer to this may have to consider distro packaging, as well as the underlying code support. For example, I was just looking at switching an existing Queens setup, on Ubuntu Bionic, and it can't practically be done because all of the scripts - e.g. /usr/bin/nova-compute - have a hashbang line that says "python2". So IIUC Queens is a no for Python 3, at least in the Ubuntu packaging. Do you know if this is equally true for later versions than Queens? Or alternatively, if something systematic was done to address this problem in later releases? E.g. is there a global USE_PYTHON3 switch somewhere, or was the packaging for later releases changed to hardcode "python3" instead of "python2"? If so, when did that happen? Many thanks for any information around this! Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rosmaita.fossdev at gmail.com Tue Jun 2 12:20:43 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 2 Jun 2020 08:20:43 -0400 Subject: [cinder][security] propose Rajat Dhasmana for cinder coresec In-Reply-To: <1ab081dc-ad7a-aa9a-ac49-652dc74888be@gmail.com> References: <1ab081dc-ad7a-aa9a-ac49-652dc74888be@gmail.com> Message-ID: <07db6b26-f2b1-8a6f-0cf8-f168d8b1161e@gmail.com> On 5/27/20 12:03 PM, Brian Rosmaita wrote: > Jay Bryant is stepping down from the Cinder core security team due to > time constraints. I am proposing to replace him with Rajat Dhasmana > (whoami-rajat on IRC). Rajat has been a cinder core since January > 2019 and is a thorough and respected reviewer and developer. > Additionally, he'll extend our time zone coverage so that there will > always be someone available to review/work on security patches around > the clock. > > I intend to add Rajat to cinder coresec on Tuesday, 2 June; please > communicate any concerns to me before then. Having heard only positive responses, I've added Rajat to cinder coresec, with all the privileges and responsibilities pertaining thereto. Congratulations, Rajat! > cheers, > brian From sean.mcginnis at gmx.com Tue Jun 2 12:21:47 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 2 Jun 2020 07:21:47 -0500 Subject: [all] OpenStack versions that can't practically be run with Python 3 ? In-Reply-To: References: Message-ID: On 6/2/20 6:34 AM, Neil Jerram wrote: > Does anyone know the most recent OpenStack version that > _can't_ easily be run with Python 3? I think the full answer to this > may have to consider distro packaging, as well as the underlying code > support. > > For example, I was just looking at switching an existing Queens setup, > on Ubuntu Bionic, and it can't practically be done because all of the > scripts - e.g. /usr/bin/nova-compute - have a hashbang line that says > "python2". > > So IIUC Queens is a no for Python 3, at least in the Ubuntu packaging.
> > Do you know if this is equally true for later versions than Queens? > > Or alternatively, if something systematic was done to address this > > problem in later releases? E.g. is there a global USE_PYTHON3 switch > > somewhere, or was the packaging for later releases changed to hardcode > > "python3" instead of "python2"? If so, when did that happen? Stein was the release where we had a cycle goal to get everyone using Python 3: https://governance.openstack.org/tc/goals/selected/stein/python3-first.html Part of the completion criteria for that goal was that all projects should, at a minimum, be running py3.6 unit tests. So a couple of caveats there - unit tests don't always identify issues that only show up when actually running full functionality, and not every project was able to complete the cycle goal completely. Most did though. So I think Stein likely should work for you, but of course Train or Ussuri will have had more time to identify any missed issues and the like. I hope this helps. Sean From yumeng_bao at yahoo.com Tue Jun 2 13:02:52 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Tue, 2 Jun 2020 21:02:52 +0800 Subject: [cyborg][ptg] Cyborg PTG Summary for the Day — Come in series References: <29690014-7276-4435-BAA1-486C1B332968.ref@yahoo.com> Message-ID: <29690014-7276-4435-BAA1-486C1B332968@yahoo.com> Hi all, Thanks for all your participation! We’ve conducted a successful meeting on the first day despite some network crashes during the meeting. The Zoom recordings will be uploaded and shared at the end of the PTG. Today, we’ve mainly discussed: 1. more nova operations support [Agreement] In the Victoria release, we will work on rebuild/evacuate, suspend/resume, shelve/unshelve and resize operations. binding/unbinding will be left for a future release, but as a long-term goal, we can keep discussing it once there are good ideas.
[Action] bring up nova operations to nova-cyborg cross project sessions: https://etherpad.opendev.org/p/nova-victoria-ptg 2. placement/cyborg-db data consistency: [Agreement] We need to decouple the placement data report and the cyborg-db update in the conductor diff. [Solution discussed] add a new object such as placement_object to sync with placement, and do the placement diff independently. But how do we collect the cyborg-specific data from placement? Get all and filter in cyborg? We will continue with this issue tomorrow. Tomorrow, we will discuss: 1. Placement and cyborg-db consistency issue; 2. Support new drivers: Intel QAT and Inspur FPGA drivers; 3. Driver Program API; 4. Should we support API attribute? See you tomorrow at 6:00 UTC! For more details please see https://etherpad.opendev.org/p/cyborg-victoria-goals Regards, Yumeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Jun 2 13:04:24 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 2 Jun 2020 09:04:24 -0400 Subject: [cinder] survey for backend driver maintainers/developers Message-ID: <55d110e2-d59f-c355-f308-364a74960376@gmail.com> We're scheduled to have a Drivers Retrospective for Ussuri tomorrow (see the etherpad for details, projected time): https://etherpad.opendev.org/p/victoria-ptg-cinder Here is the quick survey (only 3 questions, should only take a few minutes): https://rosmaita.wufoo.com/forms/cinder-ussuri-retrospective-survey-for-drivers/ Please fill out the survey even if you can't attend, the feedback will be helpful.
cheers, brian From alifshit at redhat.com Tue Jun 2 13:25:41 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Tue, 2 Jun 2020 09:25:41 -0400 Subject: [Virtual PTG][nova][sdk] Future of microversion support in SDK In-Reply-To: References: Message-ID: On Tue, Jun 2, 2020 at 5:43 AM Balázs Gibizer wrote: > > > > On Mon, Jun 1, 2020 at 13:11, Artom Lifshitz > wrote: > > tl;dr Artom to work on catching up the SDK to Nova's latest > > microversion, any new Nova microversion should come with a > > corresponding SDK patch. > > > > Hello folks, > > > > In the SDK/CLI room today we discussed making openstackclient (osc > > form now on) the main client for interacting with an OpenStack cloud > > and eventually getting rid of the python-*clients. > > > > For context, openstacksdk (sdk from now on) currently has microversion > > support similar to how python-*clients handle it: autonegotiate with > > the server to the greatest common microversion. Osc, on the other > > hand, defaults to 2.1 - anything higher than that needs to be > > specified on the command line. The long term plan is to move osc to > > consume sdk. > > > > In light of all this, we decided that the best way to achieve the goal > > stated in the first paragraph is to: > > > > 1. Catch up sdk to Nova. This isn't as daunting as it looks, as the > > latest microversion sdk currently supports is 2.72. This means there's > > "only" two cycles of catchup to do before we reach Nova's current max > > of 2.87. I (Artom) have signed on to do that work, and Mordred has > > kindly promised quick review. > > There is more work than that as support for some of the older > microversions are also missing. See the etherpad [2] Yep, I'm aware of that. I had to double-check with the other Artem (gtema) on IRC because you made me doubt myself, but those are gaps in *osc*, not *sdk*. SDK is actually pretty well caught up. 
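For anyone following along, the "autonegotiate with the server to the greatest common microversion" behaviour discussed in this thread can be sketched roughly as below. This is an illustrative toy only, not the actual openstacksdk implementation; the function and parameter names are invented for the example:

```python
# Illustrative sketch of microversion autonegotiation -- NOT the actual
# openstacksdk code; names here are invented for the example.

def _parse(version):
    """Turn a microversion string like '2.72' into a comparable tuple."""
    major, minor = version.split(".")
    return (int(major), int(minor))

def negotiate(client_max, server_min, server_max):
    """Pick the greatest microversion supported by both sides, or None."""
    best = min(_parse(client_max), _parse(server_max))
    if best < _parse(server_min):
        return None  # no overlap between client and server ranges
    return "%d.%d" % best

# A client capped at 2.72 talking to a Nova exposing 2.1..2.87 settles
# on 2.72; against an older server exposing only up to 2.72 it does too.
print(negotiate("2.72", "2.1", "2.87"))  # 2.72
print(negotiate("2.87", "2.1", "2.72"))  # 2.72
```

This is also why catching sdk up matters: whatever the server offers, the negotiated result can never exceed what the client library knows about.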
And since it's the future, I've specifically made a point of concentrating on SDK, and leaving the work to convert osc to using sdk to Monty and Artem and friends :) > > > > > 2. Officialise a Nova policy of "any new microverison should come with > > a corresponding SDK patch." This is essentially what [1] is aiming > > for, and it already has wide buy-in from the Nova community. > > > > Converting osc to consume sdk is left to the osc/sdk community, though > > obviously any help there will be appreciated, I'm sure. > > > > Please keep me honest if I said anything wrong :) > > > > Cheers! > > > > [1] https://review.opendev.org/#/c/717722/ > > > > [2] https://etherpad.opendev.org/p/compute-api-microversion-gap-in-osc > > > From emilien at redhat.com Tue Jun 2 13:34:57 2020 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 2 Jun 2020 09:34:57 -0400 Subject: [tripleo] trying Jitsi for PTG day 2 In-Reply-To: References: Message-ID: We rolled back to Zoom after we reached 40 attendees, and a lot of folks (myself included) complained about high CPU usage (despite lowering video quality), cutting sound and bad experience in general :-( Feel free to join us today: https://www.openstack.org/ptg/rooms/mitaka Same password as yesterday. Thanks and sorry for that, On Mon, Jun 1, 2020 at 5:45 PM Wesley Hayutin wrote: > > > On Mon, Jun 1, 2020 at 11:40 AM Emilien Macchi wrote: > >> Hi folks, >> >> Tomorrow we'll try to use >> https://meetpad.opendev.org/tripleo-ptg-victoria instead of Zoom. >> Today we reached ~50 attendees, let's see if Jitsi can handle it and if >> not we will roll back to the Zoom link. >> >> See you tomorrow! >> -- >> Emilien Macchi >> > > Thanks Emilien, > The schedule is updated to reflect the use of > https://meetpad.opendev.org/tripleo-ptg-victoria > Also remember tomorrow is COSTUME DAY :) Come dressed to impress > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Jun 2 13:46:38 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 02 Jun 2020 14:46:38 +0100 Subject: [cyborg][nova][neutron]Summaries of Smartnic support integration In-Reply-To: <1936985058.312813.1590940419571@mail.yahoo.com> References: <1936985058.312813.1590940419571@mail.yahoo.com> Message-ID: <70d739bb16f192214cd575410498af79c8a22b48.camel@redhat.com> On Sun, 2020-05-31 at 15:53 +0000, yumeng bao wrote: > Hi Sean and Lajos, > > Thank you so much for your quick response, good suggestions and feedbacks! > > @Sean Mooney > > if we want to supprot cyborg/smartnic integration we should add a new > > device-profile extention that intoduces the ablity > > for a non admin user to specify a cyborg device profile name as a new > > attibute on the port. > > +1,Agree. Cyborg likes this suggestion! This will be more clear that this field is for device profile usage. > The reason why we were firstly thinking of using binding:profile is that this is a way with the smallest number of > changes possible in both nova and neutron.But thinking of the non-admin issue, the violation of the one way > comunicaiton of binding:profile, and the possible security risk of breaking nova(which we surely don't want to do > that), we surely prefer giving up binding:profile and finding a better place to put the new device-profile extention. > > > the neutron server could then either retirve the request groups form > > cyborg and pass them as part of the port resouce > > request using the mechanium added for minium bandwidth or it can leave > > that to nova to manage. > > > > i would kind of prefer neutron to do this but both could work. > > > Yes, neutron server can also do that,but given the fact that we already landed the code of retriving request groups > form cyborg in nova, can we reuse this process in nova and add new process in create_resource_requests to create > accelerator resource request from port info? 
the advantage of neutron doing it is that it can merge the cyborg resource requests with any other resource requests for the port; if nova does it, it needs to have slightly different logic than the existing code we have today. the existing code would make the cyborg resource requests be a separate placement group. we need them to be merged with the port request group. the current nova code also only supports one device profile per instance, so whatever approach we take we need to ensure that device profiles can coexist in both the flavor and multiple ports, and that the resource requests are grouped correctly in each case. > > I would be very appreciative if this change can land in nova, as I see the advantages are: > 1) This keeps accelerator request group things handled in one place, which makes integration clear and simple: Nova > controls the main management things, neutron handles network backend integration, and cyborg handles accelerator > management. > 2) Another good thing: this will dispel Lajos's concerns on port-resource-request! im not really sure what that concern is. port-resource-request was created initially for qos minimum bandwidth support, but ideally it is a mechanism for communicating any placement resource requirements to nova. my proposal was that neutron would retrieve the device profile resource requests (and cache them) then append those requests to the other port-resource-requests so that they will be included in the port's request group. > 3) As presented in the proposal (page 3 and 5 of the slide)[0], please don't worry! This will be a tiny change in nova. > Cyborg would be very appreciative if this change can land in nova, for it saves much effort in cyborg-neutron > integration. > > > @Lajos Katona: > > Port-resource-request (see:https://docs.openstack.org/api-ref/network/v2/index.html#port-resource-request) > > is a read-only (and admin-only) field of ports, which is filled > > based on the agent heartbeats.
So now there is no polling of agents or > > similar. Adding extra "overload" to this mechanism, like polling cyborg or > > similar looks something out of the original design for me, not to speak about the performance issues to add there is no need for any polling of cyborg. the device-profile in the cyborg api is immutable: you cannot specify the uuid when creating it, and the name is the unique constraint. so even if someone was to delete and recreate the device profile with the same name, the uuid would not be the same. the first time a device profile is added to the port, the neutron server can look up the device profile once and cache it. so ideally the neutron server could cache the response of a "cyborg profile show", e.g. listing the resource group requests for the profile using the name and uuid. the uuid is only useful to catch the aba problem of people creating, deleting and recreating the profile with the same name. i should note that if you do delete and recreate the device profile it is highly likely to break nova, so that should not be done. this is because nova is relying on the fact that cyborg's api says this cannot change, so we are not storing the version the vm was booted with and are relying on cyborg to not change it. neutron can make the same assumption that a device profile definition will not change.
> Another question just for curiosity.In my understanding(Please correct me if I'm worng.), I feel that neutron doesn't > need to poll cyborg periodically if neutron fill port-resource-request, just fetch it once port request happens. correct no need to poll, it just need to featch it when the profile is added to the port and it can be cached safely. > because neutron can expect that the cyborg device_profile(provides resource request info for nova scheduler) don't > change very often, it imuntable in the cyborg api so it can expect that for a given name it should never change but it could detect that by looking at both the name and uuid. > it is the flavor of accelerator, and only admin can create/delete them. yes only admin can create and delete them and it does not support update. i think its invalid to delete a device profile if its currently in use by any neutron port or nova instance. its certenly invalid or should to delete it if there is a arq using the device-profile. > > [0]pre-PTG slides update: https://docs.qq.com/slide/DVkxSUlRnVGxnUFR3 > > Regards, > Yumeng > > > > > On Friday, May 29, 2020, 3:21:08 PM GMT+8, Lajos Katona wrote: > > > > > > Hi, > > Port-resource-request (see: https://docs.openstack.org/api-ref/network/v2/index.html#port-resource-requestst ) is a > read-only (and admin-only) field of ports, which is filled > based on the agent heartbeats. So now there is now polling of agents or similar. Adding extra "overload" to this > mechanism, like polling cyborg or similar > looks something out of the original design for me, not to speak about the performance issues to add > * API requests towards cyborg (or anything else) to every port GET operation > * store cyborg related information in neutron db which was fetched from cyborg (periodically I suppose) to make > neutron able to fill port-resource-request. > Regards > Lajos > > Sean Mooney ezt írta (időpont: 2020. máj. 
28., Cs, 16:13): > > On Thu, 2020-05-28 at 20:50 +0800, yumeng bao wrote: > > >  > > > Hi all, > > > > > > > > > In cyborg pre-PTG meeting conducted last week[0],shaohe from Intel introduced SmartNIC support integrations,and > > > we've > > > reached some initial agreements: > > > > > > The workflow for a user to create a server with network acceleartor(accelerator is managed by Cyborg) is: > > > > > > 1. create a port with accelerator request specified into binding_profile field > > > NOTE: Putting the accelerator request(device_profile) into binding_profile is one possible solution implemented > > > in > > > our POC. > > > > the binding profile field is not really intended for this. > > > > https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/portbindings.py#L31-L34 > > its intended to pass info from nova to neutron but not the other way around. > > it was orgininally introduced so that nova could pass info to the neutron plug in > > specificly the sriov pci address. it was not intended for two way comunicaiton to present infom form neutron > > to nova. > > > > we kindo of broke that with the trusted vf feature but since that was intended to be admin only as its a security > > risk > > in a mulit tenant cloud its a slightl different case. > > i think we should avoid using the binding profile for passing info form neutron to nova and keep it for its orginal > > use of passing info from the virt dirver to the network backend. > > > > > > > Another possible solution,adding a new attribute to port object for cyborg specific use instead of using > > > binding_profile, is discussed in shanghai Summit[1]. > > > This needs check with neutron team, which neutron team would suggest? > > > > from a nova persepctive i would prefer if this was a new extention. > > the binding profile is admin only by default so its not realy a good way to request features be enabled. 
> > you can use neutron rbac policies to alther that i belive but in genral i dont think we shoudl advocate for non > > admins > > to be able to modify the binding profile as they can break nova. e.g. by modifying the pci addres. > > if we want to supprot cyborg/smartnic integration we should add a new device-profile extention that intoduces the > > ablity > > for a non admin user to specify a cyborg device profile name as a new attibute on the port. > > > > the neutron server could then either retirve the request groups form cyborg and pass them as part of the port > > resouce > > request using the mechanium added for minium bandwidth or it can leave that to nova to manage. > > > > i would kind of prefer neutron to do this but both could work. > > > > > > 2.create a server with the port created > > > > > > Cyborg-nova-neutron integration workflow can be found on page 3 of the slide[2] presented in pre-PTG. > > > > > > And we also record the introduction! Please find the pre-PTG meeting vedio record in [3] and [4], they are the > > > same, > > > just for different region access. > > > > > > > > > [0]http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014987.html > > > [1]https://etherpad.opendev.org/p/Shanghai-Neutron-Cyborg-xproj > > > [2]pre-PTG slides:https://docs.qq.com/slide/DVm5Jakx5ZlJXY3lw > > > [3]pre-PTG vedio records in Youtube:https://www.youtube.com/watch?v=IN4haOK7sQg&feature=youtu.be > > > [4]pre-PTG vedio records in Youku: > > > http://v.youku.com/v_show/id_XNDY5MDA4NjM2NA==.html?x&sharefrom=iphone&sharekey=51459cbd599407990dd09940061b374d4 > > > > > > Regards, > > > Yumeng > > > > > > > > > > > From david.j.ivey at gmail.com Tue Jun 2 14:02:18 2020 From: david.j.ivey at gmail.com (David Ivey) Date: Tue, 2 Jun 2020 10:02:18 -0400 Subject: [all] OpenStack versions that can't practically be run with Python 3 ? 
In-Reply-To: References: Message-ID: For me, Stein still had a lot of issues with python3 when I tried to use it, but I had tried the upgrade shortly after Stein had released so those issues may have been resolved by now. I ended up reverting back to Rocky and python2.7, My first real stable build with python3 was with the Train release on Ubuntu18.04, so I skipped the Stein release. Someone can correct me if I am wrong, but last I checked, CentOS 7 did not have the python3 packages in RDO. So if using CentOS 7; RDO does not have Ussuri and the latest release there is Train with python2.7. If using CentOS 8 and the Ussuri release; RDO released the python3 packages last week. I have not tried Ussuri on CentOS 8 yet. David On Tue, Jun 2, 2020 at 8:25 AM Sean McGinnis wrote: > On 6/2/20 6:34 AM, Neil Jerram wrote: > > Does anyone know the most recent OpenStack version that > > _can't_ easily be run with Python 3? I think the full answer to this > > may have to consider distro packaging, as well as the underlying code > > support. > > > > For example, I was just looking at switching an existing Queens setup, > > on Ubuntu Bionic, and it can't practically be done because all of the > > scripts - e.g. /usr/bin/nova-compute - have a hashbang line that says > > "python2". > > > > So IIUC Queens is a no for Python 3, at least in the Ubuntu packaging. > > > > Do you know if this is equally true for later versions than Queens? > > Or alternatively, if something systematic was done to address this > > problem in later releases? E.g. is there a global USE_PYTHON3 switch > > somewhere, or was the packaging for later releases changed to hardcode > > "python3" instead of "python2"? If so, when did that happen? 
> > > Stein was the release where we had a cycle goal to get everyone using > Python 3: > > https://governance.openstack.org/tc/goals/selected/stein/python3-first.html > > Part of the completion criteria for that goal was that all projects > should, at a minimum, be running py3.6 unit tests. So a couple of > caveats there - unit tests don't always identify issues that you can run > in to actually running full functionality, and not every project was > able to complete the cycle goal completely. Most did though. > > So I think Stein likely should work for you, but of course Train or > Ussuri will have had more time to identify any missed issues and the like. > > I hope this helps. > > Sean > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Tue Jun 2 14:21:03 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 2 Jun 2020 16:21:03 +0200 Subject: [all] OpenStack versions that can't practically be run with Python 3 ? In-Reply-To: References: Message-ID: On Tue, Jun 2, 2020 at 4:06 PM David Ivey wrote: > For me, Stein still had a lot of issues with python3 when I tried to use > it, but I had tried the upgrade shortly after Stein had released so those > issues may have been resolved by now. I ended up reverting back to Rocky > and python2.7, My first real stable build with python3 was with the Train > release on Ubuntu18.04, so I skipped the Stein release. > > Someone can correct me if I am wrong, but last I checked, CentOS 7 did not > have the python3 packages in RDO. So if using CentOS 7; RDO does not have > Ussuri and the latest release there is Train with python2.7. If using > CentOS 8 and the Ussuri release; RDO released the python3 packages last > week. > > CentOS 7 has some limited python3 support but at RDO we didn't do any release with python3 on CentOS 7. In RDO you have python3 packages for CentOS 8 for both Train and Ussuri. > I have not tried Ussuri on CentOS 8 yet. 
> > David > > On Tue, Jun 2, 2020 at 8:25 AM Sean McGinnis > wrote: > >> On 6/2/20 6:34 AM, Neil Jerram wrote: >> > Does anyone know the most recent OpenStack version that >> > _can't_ easily be run with Python 3? I think the full answer to this >> > may have to consider distro packaging, as well as the underlying code >> > support. >> > >> > For example, I was just looking at switching an existing Queens setup, >> > on Ubuntu Bionic, and it can't practically be done because all of the >> > scripts - e.g. /usr/bin/nova-compute - have a hashbang line that says >> > "python2". >> > >> > So IIUC Queens is a no for Python 3, at least in the Ubuntu packaging. >> > >> > Do you know if this is equally true for later versions than Queens? >> > Or alternatively, if something systematic was done to address this >> > problem in later releases? E.g. is there a global USE_PYTHON3 switch >> > somewhere, or was the packaging for later releases changed to hardcode >> > "python3" instead of "python2"? If so, when did that happen? >> > >> Stein was the release where we had a cycle goal to get everyone using >> Python 3: >> >> >> https://governance.openstack.org/tc/goals/selected/stein/python3-first.html >> >> Part of the completion criteria for that goal was that all projects >> should, at a minimum, be running py3.6 unit tests. So a couple of >> caveats there - unit tests don't always identify issues that you can run >> in to actually running full functionality, and not every project was >> able to complete the cycle goal completely. Most did though. >> >> So I think Stein likely should work for you, but of course Train or >> Ussuri will have had more time to identify any missed issues and the like. >> >> I hope this helps. >> >> Sean >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Tue Jun 2 14:45:34 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 2 Jun 2020 14:45:34 +0000 Subject: [all] Meetpad CPU Utilization and "Low Bandwidth" Mode In-Reply-To: References: <20200601010104.rpxsnewfsyoeacnd@yuggoth.org> <20200602015407.njfacd6uq6anqsfg@yuggoth.org> Message-ID: <20200602144533.a5kdjhy35rct5ssz@yuggoth.org> On 2020-06-02 11:06:14 +0200 (+0200), Thierry Carrez wrote: > Alex Xu wrote: > > Looks like the audio doesn't work for PRC user. Anyone else from PRC > > Jitsi-Meet's audo works? > > Can't comment on usage from PRC, but I hit a case where I had no audio at > all... Closing and rejoining the meeting worked around the issue for me. > Also it seems that Chrome/Chromium works slightly better than Firefox. Yes, in my earlier tests I was personally unable to get Firefox working with it at all (no audio or video) even after granting permission to access my microphone and camera, but it's possible that was due to the extensive privacy settings I've configured. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From marcin.juszkiewicz at linaro.org Tue Jun 2 15:05:13 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Tue, 2 Jun 2020 17:05:13 +0200 Subject: [all] OpenStack versions that can't practically be run with Python 3 ? In-Reply-To: References: Message-ID: <0bfacadf-8835-609b-7707-9133ac5360b8@linaro.org> W dniu 02.06.2020 o 13:34, Neil Jerram pisze: > Does anyone know the most recent OpenStack version that _can't_ easily be > run with Python 3? I think the full answer to this may have to consider > distro packaging, as well as the underlying code support. 
My simple view of OpenStack suggestion:

Stein: you can run py2 but should work on getting py3 running
Train: you need to run py3 and can still run on py2
Ussuri: what is that py2 people ask about?

So if you want py2 then Train is the last hope.

Migration path for CentOS (in Kolla, RDO and TripleO) is:

1. upgrade to C7/Train
2. migrate to C8/Train
3. migrate to C8/Ussuri

From neil at tigera.io Tue Jun 2 15:08:44 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 2 Jun 2020 16:08:44 +0100 Subject: [all] OpenStack versions that can't practically be run with Python 3 ? In-Reply-To: References: Message-ID: On Tue, Jun 2, 2020 at 3:22 PM Alfredo Moralejo Alonso wrote: > > > On Tue, Jun 2, 2020 at 4:06 PM David Ivey wrote: > >> For me, Stein still had a lot of issues with python3 when I tried to use >> it, but I had tried the upgrade shortly after Stein had released so those >> issues may have been resolved by now. I ended up reverting back to Rocky >> and python2.7, My first real stable build with python3 was with the Train >> release on Ubuntu18.04, so I skipped the Stein release. >> >> Someone can correct me if I am wrong, but last I checked, CentOS 7 did >> not have the python3 packages in RDO. So if using CentOS 7; RDO does not >> have Ussuri and the latest release there is Train with python2.7. If using >> CentOS 8 and the Ussuri release; RDO released the python3 packages last >> week. >> >> > CentOS 7 has some limited python3 support but at RDO we didn't do any > release with python3 on CentOS 7. > > In RDO you have python3 packages for CentOS 8 for both Train and Ussuri. > > >> I have not tried Ussuri on CentOS 8 yet. >> >> David >> >> On Tue, Jun 2, 2020 at 8:25 AM Sean McGinnis >> wrote: >> >>> On 6/2/20 6:34 AM, Neil Jerram wrote: >>> > Does anyone know the most recent OpenStack version that >>> > _can't_ easily be run with Python 3? I think the full answer to this >>> > may have to consider distro packaging, as well as the underlying code >>> > support.
>>> > >>> > For example, I was just looking at switching an existing Queens setup, >>> > on Ubuntu Bionic, and it can't practically be done because all of the >>> > scripts - e.g. /usr/bin/nova-compute - have a hashbang line that says >>> > "python2". >>> > >>> > So IIUC Queens is a no for Python 3, at least in the Ubuntu packaging. >>> > >>> > Do you know if this is equally true for later versions than Queens? >>> > Or alternatively, if something systematic was done to address this >>> > problem in later releases? E.g. is there a global USE_PYTHON3 switch >>> > somewhere, or was the packaging for later releases changed to hardcode >>> > "python3" instead of "python2"? If so, when did that happen? >>> > >>> Stein was the release where we had a cycle goal to get everyone using >>> Python 3: >>> >>> >>> https://governance.openstack.org/tc/goals/selected/stein/python3-first.html >>> >>> Part of the completion criteria for that goal was that all projects >>> should, at a minimum, be running py3.6 unit tests. So a couple of >>> caveats there - unit tests don't always identify issues that you can run >>> in to actually running full functionality, and not every project was >>> able to complete the cycle goal completely. Most did though. >>> >>> So I think Stein likely should work for you, but of course Train or >>> Ussuri will have had more time to identify any missed issues and the >>> like. >>> >>> I hope this helps. >>> >>> Sean >>> >> Many thanks Sean, David and Alfredo. So, IIUC, with RDO and CentOS 8 it sounds like Train and Ussuri should be good. Stein should also be fine code-wise, but responses so far don't mention available packaging for that. Also if anyone can point to information about Debian/Ubuntu packaging for these releases with Python 3, that would be great. Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... 
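Neil's hashbang concern above can be spot-checked with a short script. A minimal sketch — the `check_shebang` helper and the throwaway file are invented for illustration; on a real node you would point it at packaged entry points such as /usr/bin/nova-compute:

```shell
# Illustrative helper (not part of any OpenStack package): print the
# interpreter named on the first (shebang) line of a script.
check_shebang() {
    head -n 1 "$1" | sed -n 's/^#!//p'
}

# Demo against a throwaway stand-in for a packaged entry point:
printf '#!/usr/bin/python2\nimport nova\n' > /tmp/fake-nova-compute
check_shebang /tmp/fake-nova-compute
# prints: /usr/bin/python2
```

Running the same check across /usr/bin/nova-* and friends shows at a glance whether a given distro release still hardcodes python2.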
URL: From pierre at stackhpc.com Tue Jun 2 15:44:48 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 2 Jun 2020 17:44:48 +0200 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: References: Message-ID: On Wed, 27 May 2020 at 00:34, Jimmy McArthur wrote: > > > > Goutham Pacha Ravi wrote on 5/26/20 5:21 PM: > > > > Great and thank you for setting this up! > > > > > > I am aware that we can't do this with our Jitsi Meet instance currently. Zoom meetings won't let you record without the host unless you jump through some hoops (https://support.zoom.us/hc/en-us/articles/204101699-Recording-without-the-Host). Do you think we'll still have recordings? > > Hi. The way we're setting up zoom will allow any PTL to set the meeting to record. Should be fairly seamless. Hello, I've enabled Zoom recording in a few PTG sessions so far (two for Blazar and one for Scientific SIG). Each time I selected the "record to cloud" option, but I haven't found a way to access the recordings later. Is the Foundation able to access and share these recordings? Thanks, Pierre Riteau (priteau) From kendall at openstack.org Tue Jun 2 16:13:12 2020 From: kendall at openstack.org (Kendall Waters) Date: Tue, 2 Jun 2020 11:13:12 -0500 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: References: Message-ID: <7B71EF05-C7FF-4FB3-8B0D-5D0F23F621DB@openstack.org> I’ll let Jimmy confirm, but yes, I do believe that the Foundation will have access to the recordings and we can send them to you. Cheers, Kendall Kendall Waters Perez OpenStack Marketing & Events kendall at openstack.org > On Jun 2, 2020, at 10:44 AM, Pierre Riteau wrote: > > On Wed, 27 May 2020 at 00:34, Jimmy McArthur > wrote: >> >> >> >> Goutham Pacha Ravi wrote on 5/26/20 5:21 PM: >> >> >> >> Great and thank you for setting this up! >> >> >> >> >> >> I am aware that we can't do this with our Jitsi Meet instance currently. 
Zoom meetings won't let you record without the host unless you jump through some hoops (https://support.zoom.us/hc/en-us/articles/204101699-Recording-without-the-Host). Do you think we'll still have recordings? >> >> Hi. The way we're setting up zoom will allow any PTL to set the meeting to record. Should be fairly seamless. > > Hello, > > I've enabled Zoom recording in a few PTG sessions so far (two for > Blazar and one for Scientific SIG). Each time I selected the "record > to cloud" option, but I haven't found a way to access the recordings > later. Is the Foundation able to access and share these recordings? > > Thanks, > Pierre Riteau (priteau) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Jun 2 17:13:18 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 2 Jun 2020 12:13:18 -0500 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: <7B71EF05-C7FF-4FB3-8B0D-5D0F23F621DB@openstack.org> References: <7B71EF05-C7FF-4FB3-8B0D-5D0F23F621DB@openstack.org> Message-ID: Yes, we can also provide a link and a password to download. Kendall Waters wrote on 6/2/20 11:13 AM: > I’ll let Jimmy confirm, but yes, I do believe that the Foundation will > have access to the recordings and we can send them to you. > > Cheers, > Kendall > > Kendall Waters Perez > OpenStack Marketing & Events > kendall at openstack.org > > > > >> On Jun 2, 2020, at 10:44 AM, Pierre Riteau > > wrote: >> >> On Wed, 27 May 2020 at 00:34, Jimmy McArthur > > wrote: >>> >>> >>> >>> Goutham Pacha Ravi wrote on 5/26/20 5:21 PM: >>> >>> >>> >>> Great and thank you for setting this up! >>> >>> >>> >>> >>> >>> I am aware that we can't do this with our Jitsi Meet instance >>> currently. Zoom meetings won't let you record without the host >>> unless you jump through some hoops >>> (https://support.zoom.us/hc/en-us/articles/204101699-Recording-without-the-Host). >>> Do you think we'll still have recordings? >>> >>> Hi.  
The way we're setting up zoom will allow any PTL to set the >>> meeting to record.  Should be fairly seamless. >> >> Hello, >> >> I've enabled Zoom recording in a few PTG sessions so far (two for >> Blazar and one for Scientific SIG). Each time I selected the "record >> to cloud" option, but I haven't found a way to access the recordings >> later. Is the Foundation able to access and share these recordings? >> >> Thanks, >> Pierre Riteau (priteau) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From waboring at hemna.com Tue Jun 2 17:27:57 2020 From: waboring at hemna.com (Walter Boring) Date: Tue, 2 Jun 2020 13:27:57 -0400 Subject: [cinder][security] propose Rajat Dhasmana for cinder coresec In-Reply-To: <1ab081dc-ad7a-aa9a-ac49-652dc74888be@gmail.com> References: <1ab081dc-ad7a-aa9a-ac49-652dc74888be@gmail.com> Message-ID: +1 On Wed, May 27, 2020 at 12:08 PM Brian Rosmaita wrote: > Jay Bryant is stepping down from the Cinder core security team due to > time constraints. I am proposing to replace him with Rajat Dhasmana > (whoami-rajat on IRC). Rajat is has been a cinder core since January > 2019 and is a thorough and respected reviewer and developer. > Additionally, he'll extend our time zone coverage so that there will > always be someone available to review/work on security patches around > the clock. > > I intend to add Rajat to cinder coresec on Tuesday, 2 June; please > communicate any concerns to me before then. > > > cheers, > brian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Jun 2 17:36:04 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 02 Jun 2020 12:36:04 -0500 Subject: [all] OpenStack versions that can't practically be run with Python 3 ? 
In-Reply-To: References: Message-ID: <172761ac1eb.de306d99111474.6144809529980746475@ghanshyammann.com> ---- On Tue, 02 Jun 2020 09:02:18 -0500 David Ivey wrote ---- > For me, Stein still had a lot of issues with python3 when I tried to use it, but I had tried the upgrade shortly after Stein had released so those issues may have been resolved by now. I ended up reverting back to Rocky and python2.7, My first real stable build with python3 was with the Train release on Ubuntu18.04, so I skipped the Stein release. We did migrate the upstream integration testing to Ubuntu 18.04 in Stein [1], so I feel stein should be fine on python3 until we are missing the scenario failing for you in our testing. > Someone can correct me if I am wrong, but last I checked, CentOS 7 did not have the python3 packages in RDO. So if using CentOS 7; RDO does not have Ussuri and the latest release there is Train with python2.7. If using CentOS 8 and the Ussuri release; RDO released the python3 packages last week. > I have not tried Ussuri on CentOS 8 yet. > David > On Tue, Jun 2, 2020 at 8:25 AM Sean McGinnis wrote: > On 6/2/20 6:34 AM, Neil Jerram wrote: > > Does anyone know the most recent OpenStack version that > > _can't_ easily be run with Python 3? I think the full answer to this > > may have to consider distro packaging, as well as the underlying code > > support. > > > > For example, I was just looking at switching an existing Queens setup, > > on Ubuntu Bionic, and it can't practically be done because all of the > > scripts - e.g. /usr/bin/nova-compute - have a hashbang line that says > > "python2". > > > > So IIUC Queens is a no for Python 3, at least in the Ubuntu packaging. > > > > Do you know if this is equally true for later versions than Queens? > > Or alternatively, if something systematic was done to address this > > problem in later releases? E.g. 
is there a global USE_PYTHON3 switch > > somewhere, or was the packaging for later releases changed to hardcode > > "python3" instead of "python2"? If so, when did that happen? USE_PYTHON3 in devstack was switched to True by default in Ussuri cycle, but we moved (unit test as well as integration tests) on python3 by default: - Unit tests (same as Sean already mentioned)- https://governance.openstack.org/tc/goals/selected/stein/python3-first.html - [1] Integration testing: http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004647.html If something is failing with Stein on python 3 then I will suggest reporting the bug and we can check if that can be fixed. -gmann > > > Stein was the release where we had a cycle goal to get everyone using > Python 3: > > https://governance.openstack.org/tc/goals/selected/stein/python3-first.html > > Part of the completion criteria for that goal was that all projects > should, at a minimum, be running py3.6 unit tests. So a couple of > caveats there - unit tests don't always identify issues that you can run > in to actually running full functionality, and not every project was > able to complete the cycle goal completely. Most did though. > > So I think Stein likely should work for you, but of course Train or > Ussuri will have had more time to identify any missed issues and the like. > > I hope this helps. > > Sean > > > From rajatdhasmana at gmail.com Tue Jun 2 17:37:15 2020 From: rajatdhasmana at gmail.com (Rajat Dhasmana) Date: Tue, 2 Jun 2020 23:07:15 +0530 Subject: [cinder][security] propose Rajat Dhasmana for cinder coresec In-Reply-To: References: <1ab081dc-ad7a-aa9a-ac49-652dc74888be@gmail.com> Message-ID: Thanks everyone. Will try my best to stand upto my new responsibilities. 
On Tue, Jun 2, 2020, 11:02 PM Walter Boring wrote: > +1 > > On Wed, May 27, 2020 at 12:08 PM Brian Rosmaita < > rosmaita.fossdev at gmail.com> wrote: > >> Jay Bryant is stepping down from the Cinder core security team due to >> time constraints. I am proposing to replace him with Rajat Dhasmana >> (whoami-rajat on IRC). Rajat is has been a cinder core since January >> 2019 and is a thorough and respected reviewer and developer. >> Additionally, he'll extend our time zone coverage so that there will >> always be someone available to review/work on security patches around >> the clock. >> >> I intend to add Rajat to cinder coresec on Tuesday, 2 June; please >> communicate any concerns to me before then. >> >> >> cheers, >> brian >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From waboring at hemna.com Tue Jun 2 17:43:51 2020 From: waboring at hemna.com (Walter Boring) Date: Tue, 2 Jun 2020 13:43:51 -0400 Subject: [cinder] [ironic] Any update on Cinder Ceph ISCSI Driver? In-Reply-To: References: Message-ID: Hey guys, I started this effort some time ago and ran into several technical issues. First of all, the ceph iscsi driver requires several dependent packages for it to work properly, end to end. The biggest problem I ran into was getting the ceph devstack plugin working and acceptable to the community. In order to do ceph iscsi CI, the existing ceph devstack plugin had to be modified to install several required packages to properly do iscsi exports. As it turns out most distros don't have packages for those requirements. (tcmu-runner, targetcli-fb, ceph-iscsi) all had to be installed from source, which was unacceptable to the community. There are no associated packages for those for ubuntu, and so it was deemed "unreleased" software. So the project stalled out. 
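To make the dependency problem concrete, the source-only toolchain Walt describes would look roughly like this to build by hand. The repository locations and build commands below are assumptions about those upstream projects, not a tested recipe from this thread:

```shell
# Sketch only: at the time of this thread none of these shipped as
# Ubuntu packages, which is the CI blocker described above.
git clone https://github.com/open-iscsi/tcmu-runner
git clone https://github.com/open-iscsi/targetcli-fb
git clone https://github.com/ceph/ceph-iscsi

# tcmu-runner is a C/cmake project; the other two are Python projects.
(cd tcmu-runner && cmake . && make && sudo make install)
(cd targetcli-fb && sudo python setup.py install)
(cd ceph-iscsi && sudo python setup.py install)
```

Each project also pulls in its own runtime dependencies (librbd, rtslib, and so on), which is exactly why a packaged install path matters for gate jobs.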
In my spare time I have been looking at writing a new devstack ceph plugin purely for ceph-iscsi using ceph-ansible, as it has native support for installing all deps for fedora, redhat, centos. This is a ton of work and is somewhat a duplication of the existing devstack ceph plugin. I've been running into a host of issues with ceph-ansible, simply being broken and not really maintained to work properly either. https://github.com/ceph/ceph-ansible/issues/5382 https://github.com/ceph/ceph-ansible/issues/5383 At the time I got blocked on the CI issue, the volume driver worked and was able to run tempest against it manually and attach/detach volumes. It just required a lot of manual steps to get everything in place. I even wrote a python client that took care of a lot of the details of talking to the ceph target gw: https://github.com/hemna/rbd-iscsi-client So the bottom line is, I need a way to do CI, which means having an acceptable mechanism for deploying ceph + all the ceph iscsi toolchain on an acceptable distro. Once I can get that working, then we can get a tempest job in place to do the cinder volume driver CI. Walt On Mon, Apr 20, 2020 at 5:36 AM kevinz wrote: > Hi Cinder, > > We are looking forward to the IRONIC boot from Cinder Ceph Volume. I went > through the mailist and found the conversation last year, the main gap for > the Ironic boot from Cinder volume(Ceph BACKEND) is lacking Cinder Ceph > iscsi driver. > http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006041.html > > So I'm sending to ask the current status for this. We are from Linaro, all > our machines are under Ironic and we use Ceph as backend storage. Recently > we are looking at the Diskless server boot plan from the Ceph cluster. > > Any info or update or advice, I really appreciate it. Look forward to > hearing from you. > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dmendiza at redhat.com Tue Jun 2 17:52:49 2020 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Tue, 2 Jun 2020 12:52:49 -0500 Subject: [barbican] Core Team updates Message-ID: <24faa02f-8e5d-9b45-f2cf-6fcc1d890e3c@redhat.com> All, I would like to welcome Moisés Guimarães to the barbican-core group. I'm looking forward to more of Moisés' contributions for the Victoria cycle. I've also removed a few members from the core team that have moved on to other things and are no longer contributing to Barbican. The up-to-date team roster can be found here: https://review.opendev.org/#/admin/groups/178,members I'd also like to invite anyone who is interested in Barbican development to reach out to the team. We would love to see more developers join the core group, and I am personally willing to help mentor folks who are interested. Thanks, Douglas Mendizábal Barbican PTL From thomas.king at gmail.com Tue Jun 2 00:02:56 2020 From: thomas.king at gmail.com (Thomas King) Date: Mon, 1 Jun 2020 18:02:56 -0600 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay In-Reply-To: References:
Message-ID: We do have the Ironic inspector enabled but mainly use out-of-band such as iDRAC. I am indeed not using segments. I'll need to research that a bit more. One important note, we are only using provider networks with no Neutron routers. All routing is done on the physical network which aligns with the docs for segments. The provisioning subnet is on 10.10.0.0/16 for the directly attached nodes. As a test, I created a second subnet, 10.100.0.0/16, on the same Neutron network with DHCP enabled, so now I have two subnets on the same network and Neutron DHCP port. However, if DHCP relay requires different segments per remote network...

The Networking service defines a segment using the following components:

- Unique physical network name
- Segmentation type
- Segmentation ID

Does having unique physical network names also mean unique physical interfaces? Does this mean no flat network for segments? If I create 10.100.0.0/16 in rack A1 and the controller is in D30, am I pointing the DHCP relay to the DHCP agent's 10.10.0.0/16 IP address?
+--------------------+-------+---------------------------+
| Agent Type         | Alive | Binary                    |
+--------------------+-------+---------------------------+
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Linux bridge agent | :-)   | neutron-linuxbridge-agent |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Metering agent     | :-)   | neutron-metering-agent    |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| DHCP agent         | :-)   | neutron-dhcp-agent        |
| L3 agent           | :-)   | neutron-l3-agent          |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Metadata agent     | :-)   | neutron-metadata-agent    |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
| Baremetal Node     | :-)   | ironic-neutron-agent      |
+--------------------+-------+---------------------------+

On Mon, Jun 1, 2020 at 4:34 PM wrote: > You will have to target two IP addresses with DHCP relay if you are > using Ironic Inspector. The first is the IP where Ironic Inspector is > listening with dnsmasq, usually the IP of the host itself. I know this > doesn't lend itself to HA scenarios, but you might also be able to > forward to the broadcast IP of the subnet where the Ironic Inspector > will be running (I haven't tested this, but it is a common use case for > DHCP relay).
> > Both of these IP addresses will be on the "root" subnet which is > associated with the segment with the controller node(s). > > It sounds like you created a second subnet, but I'm not sure if you > created the second subnet on a different segment from the first subnet. > In Neutron routed networking, the segments determine whether a subnet > is local or remote to the controller node(s). Typically the first > segment would be the one local to the controller(s). Are you sure you > enabled the segments plugin and created your second subnet on a new > segment? > > Another approach which does not involve DHCP relay is to deploy DHCP > agents locally on compute nodes local to each segment. This way all > DHCP will be done within the same L2 domain, and you will not have to > configure DHCP relay on your router serving each segment/subnet. > > See the docs for more info: > > https://docs.openstack.org/newton/networking-guide/config-routed-networks.html > > -Dan > > On Fri, 2020-05-29 at 10:47 -0600, Thomas King wrote: > > In the Triple-O docs for unicast DHCP relay, it doesn't exactly say > > which IP address to target. Without deploying Triple-O, I'm not clear > > if the relay IP should be the bridge interface or the DHCP device. > > > > The first method makes sense because the gateway for that subnet > > wouldn't be connected to the Ironic controller by layer 2 (unless we > > used VXLAN over the physical network). > > > > As an experiment, I created a second subnet on my provisioning > > network. The original DHCP device port now has two IP addresses, one > > on each subnet. That makes the second method possible if I targeted > > its original IP address. > > > > Thanks for the help and please let me know which method is correct. 
> > > > Tom King > > > > On Fri, May 29, 2020 at 3:15 AM Dan Sneddon > > wrote: > > > You probably want to enable Neutron segments and use the Neutron > > > routed networks feature so you can use different subnets on > > > different segments (layer 2 domains AKA VLANs) of the same network. > > > You specify different values such as IP allocation pools and router > > > address(es) for each subnet, and Ironic and Neutron will do the > > > right thing. You need to enable segments in the Neutron > > > configuration and restart the Neutron server. I don’t think you > > > will have to recreate the network. Behind the scenes, dnsmasq will > > > be configured with multiple subnets and address scopes within the > > > Neutron DHCP agent and the Ironic Inspector agent. > > > > > > Each segment/subnet will be given a different VLAN ID. As Dmitry > > > mentioned, TripleO uses that method for the provisioning network, > > > so you can use that as an example. The provisioning network in > > > TripleO is the one referred to as the “control plane” network. > > > > > > -Dan > > > > > > On Fri, May 29, 2020 at 12:51 AM Dmitry Tantsur < > > > dtantsur at redhat.com> wrote: > > > > Hi Tom, > > > > > > > > I know for sure that people are using DHCP relay with ironic, I > > > > think the TripleO documentation may give you some hints (adjusted > > > > to your presumably non-TripleO environment): > > > > > http://tripleo.org/install/advanced_deployment/routed_spine_leaf_network.html#dhcp-relay-configuration > > > > > > > > Dmitry > > > > > > > > On Thu, May 28, 2020 at 11:06 PM Amy Marrich > > > > wrote: > > > > > Hey Tom, > > > > > > > > > > Forwarding to the OpenStack discuss list where you might get > > > > > more assistance. 
> > > > > > > > > > Thanks, > > > > > > > > > > Amy (spotz) > > > > > > > > > > On Thu, May 28, 2020 at 3:32 PM Thomas King < > > > > > thomas.king at gmail.com> wrote: > > > > > > Good day, > > > > > > > > > > > > We have Ironic running and connected via VLANs to nearby > > > > > > machines. We want to extend this to other parts of our > > > > > > product development lab without extending VLANs. > > > > > > > > > > > > Using DHCP relay, we would point to a single IP address to > > > > > > serve DHCP requests but I'm not entirely sure of the Neutron > > > > > > network/subnet configuration, nor which IP address should be > > > > > > used for the relay agent on the switch. > > > > > > > > > > > > Is DHCP relay supported by Neutron? > > > > > > > > > > > > My guess is to add a subnet in the provisioning network and > > > > > > point the relay agent to the linuxbridge interface's IP: > > > > > > 14: brq467f6775-be: mtu > > > > > > 1500 qdisc noqueue state UP group default qlen 1000 > > > > > > link/ether e2:e9:09:7f:89:0b brd ff:ff:ff:ff:ff:ff > > > > > > inet 10.10.0.1/16 scope global brq467f6775-be > > > > > > valid_lft forever preferred_lft forever > > > > > > inet6 fe80::5400:52ff:fe85:d33d/64 scope link > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > Thank you, > > > > > > Tom King > > > > > > _______________________________________________ > > > > > > openstack-mentoring mailing list > > > > > > openstack-mentoring at lists.openstack.org > > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-mentoring > > > > > > -- > > > Dan Sneddon | Senior Principal Software Engineer > > > dsneddon at redhat.com | redhat.com/cloud > > > dsneddon:irc | @dxs:twitter > -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter > > -------------- next part -------------- An HTML attachment was scrubbed... 
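For readers following along, Dan's routed-networks suggestion (one subnet per segment) translates roughly into the CLI calls below. The network, physnet, and segment names and the VLAN ID are invented for illustration, and the commands assume the Neutron `segments` service plugin is enabled:

```shell
# Hypothetical example: add a segment for rack A1 to an existing
# "provisioning" routed network, then attach a subnet to that segment.
openstack network segment create --network provisioning \
    --network-type vlan --physical-network physnet-rack-a1 \
    --segment 100 segment-rack-a1

openstack subnet create --network provisioning \
    --network-segment segment-rack-a1 \
    --subnet-range 10.100.0.0/16 subnet-rack-a1
```

With that in place, Neutron knows which subnet belongs to which L2 domain, and the DHCP/placement machinery can treat the remote rack's subnet separately from the root subnet.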
URL: From thomas.king at gmail.com Tue Jun 2 00:47:42 2020 From: thomas.king at gmail.com (Thomas King) Date: Mon, 1 Jun 2020 18:47:42 -0600 Subject: [Openstack-mentoring] Neutron subnet with DHCP relay In-Reply-To: <09d910242b7ebb0d7d8c495c3914c25ff24a1bc1.camel@redhat.com> References: <09d910242b7ebb0d7d8c495c3914c25ff24a1bc1.camel@redhat.com> Message-ID: Well, I'm using a flat network (i.e., access mode switchport), not a tagged switchport. The remote nodes will be on their own subnet and VLAN/segment that will *not *be attached to the controller. "In that case, DHCP requests from all segments that are not attached to the controller(s) need to be forwarded to the controllers via DHCP relay." Agreed. The question is which DHCP agent IP address? DHCP agent IP #1 = 10.10.1.1/16, corresponds to an attached segment/subnet. DHCP agent IP #2 = 10.100.1.1/16, corresponds to a subnet in a completely separate rack with no direct connection to the controller. Even if I separate IP #2 onto a different segment with its own DHCP agent port, am I sending DHCP relay to IP #1 or #2? I'm assuming #1. Tom On Mon, Jun 1, 2020 at 6:40 PM wrote: > The use case for routed networks is when you have multiple distinct > subnets which are not connected at layer 2 and only have connectivity > to one another via the router gateways on each network. A segment can > be thought of as a VLAN, although depending on topology a different > VLAN ID is not always used. The key is that there is no layer 2 > connectivity between segments, traffic has to be routed between them. > > The situation where you would use DHCP relay is when you are not > assigning DHCP agents to the compute nodes, and you have compute nodes > on segments that the controllers are not attached to. In that case, > DHCP requests from all segments that are not attached to the > controller(s) need to be forwarded to the controllers via DHCP relay. 
> > If you have a flat network, then you have no need for DHCP relay, the > DHCP agents can receive and respond to requests over layer 2. This > applies even if you have multiple subnets on the same segment. > > On Mon, 2020-06-01 at 18:02 -0600, Thomas King wrote: > > We do have the Ironic inspector enabled but mainly use out-of-band > > such as iDRAC. > > > > I am indeed not using segments. I'll need to research that a bit > > more. > > > > One important note, we are only using provider networks with no > > Neutron routers. All routing is done on the physical network which > > aligns with the docs for segments. The provisioning subnet is on > > 10.10.0.0/16 for the directly attached nodes. As a test, I created a > > second subnet, 10.100.0.0/16, on the same Neutron network with DHCP > > enabled, so now I have two subnets on the same network and Neutron > > DHCP port. However, if DHCP relay requires different segments per > > remote network... > > > The Networking service defines a segment using the following > > > components: > > > > > > Unique physical network name > > > Segmentation type > > > Segmentation ID > > > > Does having unique physical network names also mean unique physical > > interfaces? > > Does this mean no flat network for segments? > > If I create 10.100.0.0/16 in rack A1 and the controller is in D30, am > > I pointing the DHCP relay to the DHCP agent's 10.10.0.0/16 IP > > address? 
> > > > > > +--------------------+-------+---------------------------+ > > | Agent Type | Alive | Binary | > > +--------------------+-------+---------------------------+ > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Linux bridge agent | :-) | neutron-linuxbridge-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Metering agent | :-) | neutron-metering-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | DHCP agent | :-) | neutron-dhcp-agent | > > | L3 agent | :-) | neutron-l3-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Metadata agent | :-) | neutron-metadata-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > | Baremetal Node | :-) | ironic-neutron-agent | > > +--------------------+-------+---------------------------+ > > > > On Mon, Jun 1, 2020 at 4:34 PM wrote: > > > You will have to target two IP addresses with DHCP relay if you are > > > using Ironic Inspector. The first is the IP where Ironic Inspector > > > is > > > listening with dnsmasq, usually the IP of the host itself. I know > > > this > > > doesn't lend itself to HA scenarios, but you might also be able to > > > forward to the broadcast IP of the subnet where the Ironic > > > Inspector > > > will be running (I haven't tested this, but it is a common use case > > > for > > > DHCP relay). > > > > > > The second IP address is that of the Neutron DHCP agent, and that > > > will > > > be used for deploying bare metal nodes. IIRC, this IP is shared > > > with > > > the Neutron router for the network if you are using the L3 agent as > > > well. 
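[In router terms, the two relay targets described above would be configured on the gateway interface of each remote segment, along these lines. Cisco-style syntax is shown purely as an illustration; both addresses are placeholders for the Ironic Inspector host and the Neutron DHCP agent port on the root subnet.]

```
interface Vlan100
 description provisioning segment, remote rack
 ip address 10.100.0.254 255.255.0.0
 ! forward DHCP broadcasts to Ironic Inspector's dnsmasq...
 ip helper-address 10.10.0.5
 ! ...and to the Neutron DHCP agent port on the root subnet
 ip helper-address 10.10.0.1
```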
> > > > > > If you are not running Ironic Inspector (and manually entering in > > > baremetal host details instead), then you can forward DHCP relay > > > only > > > to the Neutron DHCP agent. > > > > > > Both of these IP addresses will be on the "root" subnet which is > > > associated with the segment with the controller node(s). > > > > > > It sounds like you created a second subnet, but I'm not sure if you > > > created the second subnet on a different segment from the first > > > subnet. > > > In Neutron routed networking, the segments determine whether a > > > subnet > > > is local or remote to the controller node(s). Typically the first > > > segment would be the one local to the controller(s). Are you sure > > > you > > > enabled the segments plugin and created your second subnet on a new > > > segment? > > > > > > Another approach which does not involve DHCP relay is to deploy > > > DHCP > > > agents locally on compute nodes local to each segment. This way all > > > DHCP will be done within the same L2 domain, and you will not have > > > to > > > configure DHCP relay on your router serving each segment/subnet. > > > > > > See the docs for more info: > > > > https://docs.openstack.org/newton/networking-guide/config-routed-networks.html > > > > > > -Dan > > > > > > On Fri, 2020-05-29 at 10:47 -0600, Thomas King wrote: > > > > In the Triple-O docs for unicast DHCP relay, it doesn't exactly > > > say > > > > which IP address to target. Without deploying Triple-O, I'm not > > > clear > > > > if the relay IP should be the bridge interface or the DHCP > > > device. > > > > > > > > The first method makes sense because the gateway for that subnet > > > > wouldn't be connected to the Ironic controller by layer 2 (unless > > > we > > > > used VXLAN over the physical network). > > > > > > > > As an experiment, I created a second subnet on my provisioning > > > > network. The original DHCP device port now has two IP addresses, > > > one > > > > on each subnet. 
That makes the second method possible if I > > > targeted > > > > its original IP address. > > > > > > > > Thanks for the help and please let me know which method is > > > correct. > > > > > > > > Tom King > > > > > > > > On Fri, May 29, 2020 at 3:15 AM Dan Sneddon > > > > wrote: > > > > > You probably want to enable Neutron segments and use the > > > Neutron > > > > > routed networks feature so you can use different subnets on > > > > > different segments (layer 2 domains AKA VLANs) of the same > > > network. > > > > > You specify different values such as IP allocation pools and > > > router > > > > > address(es) for each subnet, and Ironic and Neutron will do the > > > > > right thing. You need to enable segments in the Neutron > > > > > configuration and restart the Neutron server. I don’t think > > > you > > > > > will have to recreate the network. Behind the scenes, dnsmasq > > > will > > > > > be configured with multiple subnets and address scopes within > > > the > > > > > Neutron DHCP agent and the Ironic Inspector agent. > > > > > > > > > > Each segment/subnet will be given a different VLAN ID. As > > > Dmitry > > > > > mentioned, TripleO uses that method for the provisioning > > > network, > > > > > so you can use that as an example. The provisioning network in > > > > > TripleO is the one referred to as the “control plane” network. 
> > > > > > > > > > -Dan > > > > > > > > > > On Fri, May 29, 2020 at 12:51 AM Dmitry Tantsur < > > > > > dtantsur at redhat.com> wrote: > > > > > > Hi Tom, > > > > > > > > > > > > I know for sure that people are using DHCP relay with ironic, > > > I > > > > > > think the TripleO documentation may give you some hints > > > (adjusted > > > > > > to your presumably non-TripleO environment): > > > > > > > > > > http://tripleo.org/install/advanced_deployment/routed_spine_leaf_network.html#dhcp-relay-configuration > > > > > > > > > > > > Dmitry > > > > > > > > > > > > On Thu, May 28, 2020 at 11:06 PM Amy Marrich > > > > > > > > > wrote: > > > > > > > Hey Tom, > > > > > > > > > > > > > > Forwarding to the OpenStack discuss list where you might > > > get > > > > > > > more assistance. > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > Amy (spotz) > > > > > > > > > > > > > > On Thu, May 28, 2020 at 3:32 PM Thomas King < > > > > > > > thomas.king at gmail.com> wrote: > > > > > > > > Good day, > > > > > > > > > > > > > > > > We have Ironic running and connected via VLANs to nearby > > > > > > > > machines. We want to extend this to other parts of our > > > > > > > > product development lab without extending VLANs. > > > > > > > > > > > > > > > > Using DHCP relay, we would point to a single IP address > > > to > > > > > > > > serve DHCP requests but I'm not entirely sure of the > > > Neutron > > > > > > > > network/subnet configuration, nor which IP address should > > > be > > > > > > > > used for the relay agent on the switch. > > > > > > > > > > > > > > > > Is DHCP relay supported by Neutron? 
> > > > > > > > > > > > > > > > My guess is to add a subnet in the provisioning network > > > and > > > > > > > > point the relay agent to the linuxbridge interface's IP: > > > > > > > > 14: brq467f6775-be: mtu > > > > > > > > 1500 qdisc noqueue state UP group default qlen 1000 > > > > > > > > link/ether e2:e9:09:7f:89:0b brd ff:ff:ff:ff:ff:ff > > > > > > > > inet 10.10.0.1/16 scope global brq467f6775-be > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > inet6 fe80::5400:52ff:fe85:d33d/64 scope link > > > > > > > > valid_lft forever preferred_lft forever > > > > > > > > > > > > > > > > Thank you, > > > > > > > > Tom King > > > > > > > > _______________________________________________ > > > > > > > > openstack-mentoring mailing list > > > > > > > > openstack-mentoring at lists.openstack.org > > > > > > > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-mentoring > > > > > > > > > > -- > > > > > Dan Sneddon | Senior Principal Software Engineer > > > > > dsneddon at redhat.com | redhat.com/cloud > > > > > dsneddon:irc | @dxs:twitter > -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Jun 2 18:25:37 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 2 Jun 2020 13:25:37 -0500 Subject: [diversity] Hour of Healing Message-ID: The OSF Diversity and Inclusion Working Group recognizes that this is a trying time for our communities and colleagues. We would like to invite you to 'An Hour of Healing' on Thursday (June 4th at 17:30 - 18:30 UTC) where you can talk to others in a safe place. We invite you to use this time to express your feelings, or to just be able to talk to others without being judged. 
This session will adhere to the OSF Code of Conduct and zero tolerance for harassment policy, which means we will not be judging or condemning others (individuals or groups) inside OR outside of our immediate community. We will come together to heal, in mutually respectful dialogue, keeping in mind that while there are many different individual viewpoints, we all share pain collectively and can heal together. We will be using https://meetpad.opendev.org/PTGDiversityAndInclusion for this gathering. The OSF Diversity and Inclusion WG -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Jun 2 20:05:19 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 2 Jun 2020 22:05:19 +0200 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: References: <7B71EF05-C7FF-4FB3-8B0D-5D0F23F621DB@openstack.org> Message-ID: Thank you both! On Tue, 2 Jun 2020 at 19:13, Jimmy McArthur wrote: > > Yes, we can also provide a link and a password to download. > > Kendall Waters wrote on 6/2/20 11:13 AM: > > I’ll let Jimmy confirm, but yes, I do believe that the Foundation will have access to the recordings and we can send them to you. > > Cheers, > Kendall > > Kendall Waters Perez > OpenStack Marketing & Events > kendall at openstack.org > > > > > On Jun 2, 2020, at 10:44 AM, Pierre Riteau wrote: > > On Wed, 27 May 2020 at 00:34, Jimmy McArthur wrote: > > > > > Goutham Pacha Ravi wrote on 5/26/20 5:21 PM: > > > > Great and thank you for setting this up! > > > > > > I am aware that we can't do this with our Jitsi Meet instance currently. Zoom meetings won't let you record without the host unless you jump through some hoops (https://support.zoom.us/hc/en-us/articles/204101699-Recording-without-the-Host). Do you think we'll still have recordings? > > Hi. The way we're setting up zoom will allow any PTL to set the meeting to record. Should be fairly seamless. 
> > > Hello, > > I've enabled Zoom recording in a few PTG sessions so far (two for > Blazar and one for Scientific SIG). Each time I selected the "record > to cloud" option, but I haven't found a way to access the recordings > later. Is the Foundation able to access and share these recordings? > > Thanks, > Pierre Riteau (priteau) > > > From emilien at redhat.com Tue Jun 2 21:09:41 2020 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 2 Jun 2020 17:09:41 -0400 Subject: [tripleo] PTG day 3 Message-ID: Hi Folks For day 3, we'll stay on Zoom, it seems like the best option for our group so far. The agenda looks like it: 13:00 UTC Topic: Speeding up deployments, updates and upgrades Presenter: Jesse Pretorius (odyssey4me), Sofer Athlan (chem) 13:30 UTC Topic: Ceph requirements Presenter: fmount, fultonj, gfidente 14:00 UTC Topic: Running validations from within a container Presenter: cjeanneret (Tengu) 14:30 UTC Topic: Auto --limit scale-up of Compute nodes Presenter: Luke Short (ekultails) 15:00 UTC Topic: TripleO usability enhancements Presenter: Wes Hayutin 15:30 UTC Topic: Improvements in TLS Everywhere/ CI Presenter: rlandy, alee 16:00 UTC Topic: Container Security in TripleO Presenter: Grzegorz Grasza (xek), alee 16:30 UTC free slot Since we have a free slot, feel free to propose any new topic otherwise we'll finish earlier (could also be appreciated, specially for folks in EMEA/APAC). Thanks and see you tomorrow! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Jun 2 21:15:23 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 2 Jun 2020 17:15:23 -0400 Subject: [cinder] virtual happy hour on Thursday 4 June Message-ID: The People have spoken and they have said that Thursday is the day we should hold the virtual happy hour. 
It's therefore scheduled for 15:30 UTC in the "Diablo room" (which is what I'm now calling my BlueJeans room): https://bluejeans.com/3228528973 This is an opportunity for us to have some informal virtual face-to-face time. Since the virtual drinks are unlimited, everyone is welcome, even if you don't contribute to cinder in the course of your normal work activities. The event will *not* be recorded :) See you there! brian From zigo at debian.org Tue Jun 2 23:18:43 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 3 Jun 2020 01:18:43 +0200 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: References: <885d5553-958c-0253-3b19-3b3c1ef6fa54@debian.org> Message-ID: <1eb49a94-4ee6-97e6-3250-672442f06f9d@debian.org> On 5/27/20 11:46 PM, Clark Boylan wrote: > On Wed, May 27, 2020, at 1:36 PM, Thomas Goirand wrote: >> On 5/26/20 8:06 PM, Kendall Nelson wrote: >>> For each virtual PTG meeting room, we will provide an official Zoom >>> videoconference room, which will be reused by various groups across the >>> day. > > This is what you snipped out of the previous email: > >>> OpenDev's Jitsi Meet instance is also available at >>> https://meetpad.opendev.org as an alternative tooling option to create >>> team-specific meetings. This is more experimental and may not work as >>> well for larger groups, but has the extra benefit of including etherpad >>> integration (the videoconference doubles as an etherpad document, >>> avoiding jumping between windows). PTGBot can be used to publish the >>> chosen meeting URL if you decide to not use the official Zoom one. The >>> OpenDev Sysadmins can be reached in the #opendev IRC channel if any >>> problems arise. > >> Sorry for ranting, but I can't help it on this topic... >> >> I really think it's a shame that it's been decided on a non-free video >> conference service. Why aren't we self-hosting it, and using free >> software instead of Zoom, which on top of this had major security >> problems recently? 
> > As noted in the portion of the email you removed: we are. The OpenDev team has been working to ensure that Jitsi Meet is ready to go (just today we added in the ability to scale up jvb processes to balance load), but it is a very new service to us and alternatives are a good thing. We're going to try and estimate usage ahead of time and from that be ready for Monday. If you're testing it this week or having trouble next week please let us know. We're going to do our best, but it will likely be a learning experience. > > As another option my understanding is that Zoom supports quite large rooms which may be necessary for some groups that are meeting. And it is good to have a backup, particularly since we haven't done this before (and that applies in both directions, it is entirely possible some people using Zoom may find meetpad is a better fit and those trying meetpad may want to switch to Zoom for reasons). Zoom provides dial in numbers for much of the world (though not all of it), as well as providing a web client that doesn't require you to install any additional software beyond your web browser. Clark, the only point which I think is acceptable on the above is the fact that you're mentioning Jitsi on opendev infra is new. However, it's not new in many places. My company provides it for free (meet.infomaniak.com), it's also available on Debian infra (https://jitsi.debian.social/), and if you called for help, I'm sure you'd have receive some. Though I don't agree with some other points. 1/ Zoom is the default, Jitsi is the alternative. It should have been the other way around, as now, mostly everyone will be using Zoom. Participants will find me moronic to require something else than Zoom. 2/ Zoom doesn't work for me in in Firefox of Chromium (on Debian Buster). The only thing it wants is to start the (non-free) Zoom app, which I do not want to install, for security and moral reasons. We're doing free software, can't we forbid using non-free as well here? 
This is very disappointing. I may have to install Zoom in a VM... or just give-up on the virtual-PTG because it's too frustrating of an experience for me. Note that we did a mini-debconf-online this week-end, and it ran ok with many participants in the same room. Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Wed Jun 3 03:41:05 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Jun 2020 03:41:05 +0000 Subject: [all] Week Out PTG Details & Registration Reminder In-Reply-To: <1eb49a94-4ee6-97e6-3250-672442f06f9d@debian.org> References: <885d5553-958c-0253-3b19-3b3c1ef6fa54@debian.org> <1eb49a94-4ee6-97e6-3250-672442f06f9d@debian.org> Message-ID: <20200603034105.haozsl57anqh6aox@yuggoth.org> On 2020-06-03 01:18:43 +0200 (+0200), Thomas Goirand wrote: [...] > 1/ Zoom is the default, Jitsi is the alternative. It should have > been the other way around, as now, mostly everyone will be using > Zoom. Participants will find me moronic to require something else > than Zoom. So far I've been in sessions for 5 different teams, and 4 of them used our Meetpad instance instead of Zoom. There has been some struggle for rooms in the 40 participant range, but we've scaled the cluster to be able to handle many rooms at least. > 2/ Zoom doesn't work for me in in Firefox of Chromium (on Debian > Buster). The only thing it wants is to start the (non-free) Zoom > app, which I do not want to install, for security and moral > reasons. An HTML5 client is enabled for the Zoom rooms, but there's a bit of a trick to accessing it because they'd rather you ran their proprietary binary extension or standalone client. When you first go to the meeting URL, cancel the client download pop-up. Then click the link to download the client and cancel the pop-up again. After the second download cancellation it will add a link to use the browser-based HTML5 client. 
Yes it's silly and I'm not a fan, but it has at least worked for me (I used Chromium, mainly because I normally use Firefox for everything else and would prefer not to pollute my FF profile with the necessary microphone access permissions and so on). > We're doing free software, can't we forbid using non-free as well > here? This is very disappointing. [...] Software freedom is, in many ways, like actual freedom. We encourage people to use free/libre open source tools, but we can't realistically forbid any teams from using whatever tools they prefer, that would only encourage them to do so secretly. > Note that we did a mini-debconf-online this week-end, and it ran > ok with many participants in the same room. I'd love to get some tips from the folks running that Jitsi-Meet instance, we've definitely been getting a fair number of complaints about ours causing participants' browsers to eat most of their processor capacity, sound cutting in and out or being completely silent and needing to reconnect, et cetera. I don't think it's been the case for a majority of participants, but it's enough that it seems to have driven some teams to choose proprietary alternatives after they gave it a try. We'd love to improve the user experience. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From emccormick at cirrusseven.com Wed Jun 3 04:42:06 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 3 Jun 2020 00:42:06 -0400 Subject: [ops][ptg] Virtual Ops Meetup at the PTG Message-ID: Hello all, Your friendly neighborhood Ops Meetup Team is hosting a first ever (and very experimental) Virtual Ops Meetup Wednesday, June 3rd at 13:00 UTC on the foundation Jitsi server. 
You can find us here: https://meetpad.opendev.org/victoria-ptg-ops-meetup

Meetpad will open an integrated Etherpad to go along with the meeting, but if you'd prefer to have it separate, just replace "meetpad" with "etherpad" in the URL. We recommend using Chrome wherever possible. If you're on a mobile device, there are also Jitsi Meet apps for iOS and Android that work pretty well.

The schedule has us going until 15:00 UTC, but we'll continue as long as people are around and want to keep it going.

I look forward to seeing a few (or many!) of you in a few hours!

Cheers,
Erik
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Wed Jun  3 04:53:00 2020
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 3 Jun 2020 04:53:00 +0000
Subject: [ops][ptg] Virtual Ops Meetup at the PTG
In-Reply-To: 
References: 
Message-ID: <20200603045259.kifpfzpz3wzhdry5@yuggoth.org>

On 2020-06-03 00:42:06 -0400 (-0400), Erik McCormick wrote:
[...]
> on the foundation Jitsi server
[...]

To be clear, it's not the OSF managing this service, it's run by the OpenDev community. Still, we're happy to see more folks interested in making use of it!
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Wed Jun 3 05:31:29 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 03 Jun 2020 06:31:29 +0100 Subject: [cyborg][nova][neutron]Summaries of Smartnic support integration In-Reply-To: <7B5303F69BB16B41BB853647B3E5BD70600A0F92@SHSMSX104.ccr.corp.intel.com> References: <1936985058.312813.1590940419571@mail.yahoo.com> <70d739bb16f192214cd575410498af79c8a22b48.camel@redhat.com> <7B5303F69BB16B41BB853647B3E5BD70600A0F92@SHSMSX104.ccr.corp.intel.com> Message-ID: <2b23e511ac021ca88033571dd362d2775c38fa37.camel@redhat.com> On Wed, 2020-06-03 at 00:27 +0000, Feng, Shaohe wrote: > Yes, we should make sure that device profiles can co exits in both the flavor and multiple ports and the resource > requests are grouped correctly in each case. > We also need to ensure whatever approach we take to make the accelerators request(ARQ) co exits from both flavor and > multiple ports. > Such as we may want to delete a port, so we should distinguish the ARQ from flavor or ports. > > So which approach? Update the ARQ to port "binding:profile", or bind the ARQ with port UUID instead of instance UUID? > Or other approach? i kind of like the idea of using the neutron port uuid for the arq as it makes it very explcit which resouces are mapped to the port. that said while i would like neutron to retrive the port requests i think it make sense for nova to create and bind the ARQs so if we can have multiple ARQs with the same consumer uuid e.g. the vm that would work too. in the long term however i think the neuton port UUID will be simpler. e.g. if we support attach and detach in the future of smartnic interfaces then not having to fine the specfic arq correspondiing to a port to delete and instad just using the port UUID will be nice. 
> > > > Regards > Shaohe > > -----Original Message----- > From: Sean Mooney > Sent: 2020年6月2日 21:47 > To: yumeng bao ; Lajos Katona > Cc: openstack maillist ; Feng, Shaohe > Subject: Re: [cyborg][nova][neutron]Summaries of Smartnic support integration > > On Sun, 2020-05-31 at 15:53 +0000, yumeng bao wrote: > > Hi Sean and Lajos, > > > > Thank you so much for your quick response, good suggestions and feedbacks! > > > > @Sean Mooney > > > if we want to supprot cyborg/smartnic integration we should add a > > > new device-profile extention that intoduces the ablity for a non > > > admin user to specify a cyborg device profile name as a new attibute > > > on the port. > > > > +1,Agree. Cyborg likes this suggestion! This will be more clear that this field is for device profile usage. > > The reason why we were firstly thinking of using binding:profile is > > that this is a way with the smallest number of changes possible in > > both nova and neutron.But thinking of the non-admin issue, the > > violation of the one way comunicaiton of binding:profile, and the possible security risk of breaking nova(which we > > surely don't want to do that), we surely prefer giving up binding:profile and finding a better place to put the new > > device-profile extention. > > > > > the neutron server could then either retirve the request groups form > > > cyborg and pass them as part of the port resouce request using the > > > mechanium added for minium bandwidth or it can leave that to nova to > > > manage. > > > > > > i would kind of prefer neutron to do this but both could work. > > > > > > Yes, neutron server can also do that,but given the fact that we > > already landed the code of retriving request groups form cyborg in > > nova, can we reuse this process in nova and add new process in create_resource_requests to create accelerator > > resource request from port info? 
> > the advantage of neutron doing it is it can merge the cyborg resouce requests with any other resouce requests for the > port, if nova does it it need to have slightly different logic the then existing code we have today. > the existing code would make the cyborg resouce requests be a seperate placemnt group. we need them to be merged with > the port request group. the current nova code also only support one device profile per instance so we need to ensure > whatever approch we take we need to ensure that device profiles can co exits in both the flavor and multiple ports and > the resource requests are grouped correctly in each case. > > > > I would be very appreciated if this change can land in nova, as I see the advantages are: > > 1) This keeps accelerator request groups things handled in on place, > > which makes integration clear and simple, Nova controls the main > > manage things,the neutron handles network backends integration,and cyborg involves accelerator management. > > 2) Another good thing: this will dispels Lajos's concerns on port-resource-request! > > im not really sure what that concern is. port-resouce-request was created for initally for qos minimum bandwith > support but ideally it is a mechanism for comunicating any placement resouce requirements to nova. > my proposal was that neutron would retrive the device profile resouce requests(and cache it) then append those > requests too the other port-resouce-requests so that they will be included in the ports request group. > > 3) As presented in the proposal(page 3 and 5 of the slide)[0], please don't worry! This will be a tiny change in > > nova. > > Cyborg will be very appreciated if this change can land in nova, for > > it saves much effort in cyborg-neutron integration. 
> > > > > > @Lajos Katona: > > > Port-resource-request > > > (see:https://docs.openstack.org/api-ref/network/v2/index.html#port-r > > > esource-request) is a read-only (and admin-only) field of ports, > > > which is filled based on the agent heartbeats. So now there is now > > > polling of agents or similar. Adding extra "overload" to this > > > mechanism, like polling cyborg or similar looks something out of the > > > original design for me, not to speak about the performance issues to > > > add > > there is no need for any polling of cyborg. > the device-porfile in the cyborg api is immutable you cannot specify the uuid when createing it and the the name is > the unique constraint. > so even if someone was to delete and recreate the device profile with the same name the uuid would not be the same > > The first time a device profile is added to the port the neutron server can lookup the device profile once and cache > it. > so ideally the neutron server could cache the responce of a the "cyborg profile show" e.g. listing the resouce group > requests for the profile using the name and uuid. the uuid is only usful to catch the aba problem of people creating , > deleting and recreating the profile with the same name. > > i should note that if you do delete and recreate the device profile its highly likely to break nova so that should not > be done. this is because nova is relying on the fact that cybogs api say this cannot change so we are not storing the > version the vm was booted with and are relying on cyborg to not change it. neutron can make the same assumtion that a > device profile definition will not change. > > > > > > > > - API requests towards cyborg (or anything else) to every port GET > > > operation > > > - store cyborg related information in neutron db which was fetched from > > > cyborg (periodically I suppose) to make neutron able to fill > > > port-resource-request. 
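[The caching scheme described above — fetch once per profile name, keep the immutable UUID only to detect the delete-and-recreate (ABA) case — can be sketched as follows. This is a hypothetical helper, not actual Neutron code; `fetch` stands in for a cyborg client call.]

```python
class DeviceProfileCache:
    """Cache cyborg device-profile request groups by profile name.

    Profiles are immutable in the cyborg API, so the groups fetched once
    can be cached safely. The UUID is kept so that a profile deleted and
    recreated under the same name (new UUID) can be detected on re-fetch.
    """

    def __init__(self, fetch):
        # `fetch` returns {"uuid": ..., "groups": ...} for a profile name.
        self._fetch = fetch
        self._cache = {}  # name -> (uuid, groups)

    def groups(self, name):
        if name not in self._cache:
            profile = self._fetch(name)
            self._cache[name] = (profile["uuid"], profile["groups"])
        return self._cache[name][1]

    def refresh(self, name):
        profile = self._fetch(name)
        cached = self._cache.get(name)
        if cached and cached[0] != profile["uuid"]:
            # Same name, new UUID: the profile was deleted and recreated.
            raise RuntimeError("device profile %r was recreated" % name)
        self._cache[name] = (profile["uuid"], profile["groups"])
        return profile["groups"]


calls = []
def fake_fetch(name):
    calls.append(name)
    return {"uuid": "uuid-1", "groups": [{"resources:CUSTOM_FPGA": "1"}]}

cache = DeviceProfileCache(fake_fetch)
cache.groups("dp1")
cache.groups("dp1")
assert calls == ["dp1"]  # only one round trip to "cyborg"
```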
> > > > > > As mentioned above,if request group can land in nova, we don't need to > > concern API request towards cyborg and cyborg info merged to port-resource-request. > > this would still have to be done in nova instead. really this is not something nova should have to special case for, > im not saying it cant but ideally neuton would leaverage the exitsing port-resource-request feature. > > Another question just for curiosity.In my understanding(Please correct > > me if I'm worng.), I feel that neutron doesn't need to poll cyborg periodically if neutron fill port-resource- > > request, just fetch it once port request happens. > > correct no need to poll, it just need to featch it when the profile is added to the port and it can be cached safely. > > because neutron can expect that the cyborg device_profile(provides > > resource request info for nova scheduler) don't change very often, > > it imuntable in the cyborg api so it can expect that for a given name it should never change but it could detect that > by looking at both the name and uuid. > > it is the flavor of accelerator, and only admin can create/delete them. > > yes only admin can create and delete them and it does not support update. > i think its invalid to delete a device profile if its currently in use by any neutron port or nova instance. > its certenly invalid or should to delete it if there is a arq using the device-profile. > > > > [0]pre-PTG slides update: https://docs.qq.com/slide/DVkxSUlRnVGxnUFR3 > > > > Regards, > > Yumeng > > > > > > > > > > On Friday, May 29, 2020, 3:21:08 PM GMT+8, Lajos Katona wrote: > > > > > > > > > > > > Hi, > > > > Port-resource-request (see: > > https://docs.openstack.org/api-ref/network/v2/index.html#port-resource > > -requestst ) is a read-only (and admin-only) field of ports, which is > > filled based on the agent heartbeats. So now there is now polling of > > agents or similar. 
Adding extra "overload" to this mechanism, like polling cyborg or similar looks something out of > > the original design for me, not to speak about the performance issues to add > > * API requests towards cyborg (or anything else) to every port GET operation > > * store cyborg related information in neutron db which was fetched > > from cyborg (periodically I suppose) to make neutron able to fill port-resource-request. > > Regards > > Lajos > > > > Sean Mooney ezt írta (időpont: 2020. máj. 28., Cs, 16:13): > > > On Thu, 2020-05-28 at 20:50 +0800, yumeng bao wrote: > > > >  > > > > Hi all, > > > > > > > > > > > > In cyborg pre-PTG meeting conducted last week[0],shaohe from Intel > > > > introduced SmartNIC support integrations,and we've reached some > > > > initial agreements: > > > > > > > > The workflow for a user to create a server with network acceleartor(accelerator is managed by Cyborg) is: > > > > > > > > 1. create a port with accelerator request specified into binding_profile field > > > > NOTE: Putting the accelerator request(device_profile) into > > > > binding_profile is one possible solution implemented in our POC. > > > > > > the binding profile field is not really intended for this. > > > > > > https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api > > > /definitions/portbindings.py#L31-L34 > > > its intended to pass info from nova to neutron but not the other way around. > > > it was orgininally introduced so that nova could pass info to the > > > neutron plug in specificly the sriov pci address. it was not > > > intended for two way comunicaiton to present infom form neutron to nova. > > > > > > we kindo of broke that with the trusted vf feature but since that > > > was intended to be admin only as its a security risk in a mulit > > > tenant cloud its a slightl different case. 
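[An editorial aside on the rule discussed in the last message: conceptually, a `user_id:%(user_id)s` check string compares the caller's credential against an attribute substituted from the policy target (the instance being acted on), so it can only succeed when the API actually populates `user_id` in that target. The sketch below is a simplified stand-in for that evaluation, not the real oslo.policy library.]

```python
def check(creds, target):
    """Simplified stand-in for "rule:admin_api or user_id:%(user_id)s"."""
    if creds.get("is_admin"):
        return True
    # %(user_id)s is substituted from the *target* (the instance being
    # acted on), then compared against the caller's credential.
    return creds.get("user_id") == target.get("user_id")


instance = {"user_id": "alice", "project_id": "demo"}
assert check({"user_id": "alice", "is_admin": False}, instance)
assert check({"user_id": "root", "is_admin": True}, instance)
# Another member of the same project is rejected:
assert not check({"user_id": "bob", "is_admin": False}, instance)
```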
> > > i think we should avoid using the binding profile for passing info > > > form neutron to nova and keep it for its orginal use of passing info from the virt dirver to the network backend. > > > > > > > > > > Another possible solution,adding a new attribute to port > > > > object for cyborg specific use instead of using binding_profile, is discussed in shanghai Summit[1]. > > > > This needs check with neutron team, which neutron team would suggest? > > > > > > from a nova persepctive i would prefer if this was a new extention. > > > the binding profile is admin only by default so its not realy a good way to request features be enabled. > > > you can use neutron rbac policies to alther that i belive but in > > > genral i dont think we shoudl advocate for non admins to be able to > > > modify the binding profile as they can break nova. e.g. by modifying the pci addres. > > > if we want to supprot cyborg/smartnic integration we should add a > > > new device-profile extention that intoduces the ablity for a non > > > admin user to specify a cyborg device profile name as a new attibute on the port. > > > > > > the neutron server could then either retirve the request groups form > > > cyborg and pass them as part of the port resouce request using the > > > mechanium added for minium bandwidth or it can leave that to nova to manage. > > > > > > i would kind of prefer neutron to do this but both could work. > > > > > > > > 2.create a server with the port created > > > > > > > > Cyborg-nova-neutron integration workflow can be found on page 3 of the slide[2] presented in pre-PTG. > > > > > > > > And we also record the introduction! Please find the pre-PTG > > > > meeting vedio record in [3] and [4], they are the same, just for > > > > different region access. 
> > > >
> > > > [0]http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014987.html
> > > > [1]https://etherpad.opendev.org/p/Shanghai-Neutron-Cyborg-xproj
> > > > [2]pre-PTG slides: https://docs.qq.com/slide/DVm5Jakx5ZlJXY3lw
> > > > [3]pre-PTG video records on YouTube: https://www.youtube.com/watch?v=IN4haOK7sQg&feature=youtu.be
> > > > [4]pre-PTG video records on Youku: http://v.youku.com/v_show/id_XNDY5MDA4NjM2NA==.html?x&sharefrom=iphone&sharekey=51459cbd599407990dd09940061b374d4
> > > >
> > > > Regards,
> > > > Yumeng

From massimo.sgaravatto at gmail.com Wed Jun 3 06:18:41 2020
From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto)
Date: Wed, 3 Jun 2020 08:18:41 +0200
Subject: [nova] [ops] user_id based policy enforcement
Message-ID:

Hi

In my Rocky installation I am preventing users from deleting instances created by other users of the same project. This was implemented by setting the following in the nova policy file:

"os_compute_api:servers:delete": "rule:admin_api or user_id:%(user_id)s"

This works, although in the nova log file I see:

The user_id attribute isn't supported in the rule 'os_compute_api:servers:delete'. All the user_id based policy enforcement will be removed in the future.

Now I would also like to prevent users from seeing the console log of instances created by other users. I set in the nova policy file:

"os_compute_api:os-console-output": "rule:admin_api or user_id:%(user_id)s"

but this doesn't work. Any hints?

More generally: was user_id based policy enforcement eventually removed in later OpenStack releases? If so, what are the possible alternatives for implementing my use case?

Thanks, Massimo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From balazs.gibizer at est.tech Wed Jun 3 10:00:41 2020
From: balazs.gibizer at est.tech (Balázs Gibizer)
Date: Wed, 03 Jun 2020 12:00:41 +0200
Subject: [nova][neutron][cyborg][ptg] Nova changing from Jitsi to Zoom
Message-ID: <55HCBQ.9S3JXJQQJSKK2@est.tech>

Hi,

As multiple people reported problems with Jitsi from the PRC [1][2], the Nova team is switching back to Zoom. We will use the Juno Zoom room [3] during this week.

Cheers,
gibi

[1] http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015207.html
[2] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-06-03.log.html#t2020-06-03T06:26:05
[3] https://www.openstack.org/ptg/rooms/juno

From xin-ran.wang at intel.com Wed Jun 3 10:22:27 2020
From: xin-ran.wang at intel.com (Wang, Xin-ran)
Date: Wed, 3 Jun 2020 10:22:27 +0000
Subject: [cyborg][nova][neutron]Summaries of Smartnic support integration
In-Reply-To: <2b23e511ac021ca88033571dd362d2775c38fa37.camel@redhat.com>
References: <1936985058.312813.1590940419571@mail.yahoo.com> <70d739bb16f192214cd575410498af79c8a22b48.camel@redhat.com> <7B5303F69BB16B41BB853647B3E5BD70600A0F92@SHSMSX104.ccr.corp.intel.com> <2b23e511ac021ca88033571dd362d2775c38fa37.camel@redhat.com>
Message-ID:

Hi all,

Following this discussion, we have written up a document that introduces an initial idea of how Nova, Cyborg, and Neutron interact with each other to support network-related devices. Three projects are involved (Nova, Neutron, Cyborg), so please check it and leave your comments wherever you want.
https://docs.google.com/document/d/11HkK-cLpDxa5Lku0_O0Nb8Uqh34Jqzx2N7j2aDu05T0/edit?usp=sharing

Thanks,
Xin-Ran

-----Original Message-----
From: Sean Mooney
Sent: Wednesday, June 3, 2020 1:31 PM
To: Feng, Shaohe ; yumeng bao ; Lajos Katona
Cc: openstack maillist
Subject: Re: [cyborg][nova][neutron]Summaries of Smartnic support integration

On Wed, 2020-06-03 at 00:27 +0000, Feng, Shaohe wrote:
> Yes, we should make sure that device profiles can coexist in both the flavor and multiple ports and that the resource requests are grouped correctly in each case.
> We also need to ensure that whatever approach we take lets the accelerator requests (ARQs) coexist from both the flavor and multiple ports.
> For example, we may want to delete a port, so we should be able to distinguish an ARQ from the flavor from one from a port.
>
> So which approach? Update the ARQ to the port "binding:profile", or bind the ARQ to the port UUID instead of the instance UUID?
> Or another approach?

i kind of like the idea of using the neutron port uuid for the arq as it makes it very explicit which resources are mapped to the port. that said, while i would like neutron to retrieve the port requests, i think it makes sense for nova to create and bind the ARQs, so if we can have multiple ARQs with the same consumer uuid (e.g. the vm) that would work too. in the long term, however, i think the neutron port UUID will be simpler: if we support attach and detach of smartnic interfaces in the future, not having to find the specific arq corresponding to a port in order to delete it, and instead just using the port UUID, will be nice.

> Regards
> Shaohe
>
> -----Original Message-----
> From: Sean Mooney
> Sent: 2 June 2020 21:47
> To: yumeng bao ; Lajos Katona
> Cc: openstack maillist ; Feng, Shaohe
> Subject: Re: [cyborg][nova][neutron]Summaries of Smartnic support integration
>
> On Sun, 2020-05-31 at 15:53 +0000, yumeng bao wrote:
> > Hi Sean and Lajos,
> >
> > Thank you so much for your quick response, good suggestions and feedback!
> > > > @Sean Mooney > > > if we want to supprot cyborg/smartnic integration we should add a > > > new device-profile extention that intoduces the ablity for a non > > > admin user to specify a cyborg device profile name as a new > > > attibute on the port. > > > > +1,Agree. Cyborg likes this suggestion! This will be more clear that this field is for device profile usage. > > The reason why we were firstly thinking of using binding:profile is > > that this is a way with the smallest number of changes possible in > > both nova and neutron.But thinking of the non-admin issue, the > > violation of the one way comunicaiton of binding:profile, and the > > possible security risk of breaking nova(which we surely don't want > > to do that), we surely prefer giving up binding:profile and finding a better place to put the new device-profile extention. > > > > > the neutron server could then either retirve the request groups > > > form cyborg and pass them as part of the port resouce request > > > using the mechanium added for minium bandwidth or it can leave > > > that to nova to manage. > > > > > > i would kind of prefer neutron to do this but both could work. > > > > > > Yes, neutron server can also do that,but given the fact that we > > already landed the code of retriving request groups form cyborg in > > nova, can we reuse this process in nova and add new process in > > create_resource_requests to create accelerator resource request from port info? > > the advantage of neutron doing it is it can merge the cyborg resouce > requests with any other resouce requests for the port, if nova does it it need to have slightly different logic the then existing code we have today. > the existing code would make the cyborg resouce requests be a seperate > placemnt group. we need them to be merged with the port request group. 
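The merging described here — folding a device profile's resource requests into the port's existing resource request so that placement sees a single combined request group — could be sketched as follows. This is illustrative only: the dict shapes are assumptions for the example, not the actual neutron port-resource-request schema.

```python
def merge_resource_requests(port_request, device_request):
    """Merge a device profile's resource request into a port's existing
    resource request so placement sees one combined request group.

    Illustrative only: the dict shapes here are assumptions, not the
    real neutron data structures.
    """
    merged = {
        # copy so the input port_request is not mutated
        "resources": dict(port_request.get("resources", {})),
        # traits required by either request
        "required": sorted(
            set(port_request.get("required", []))
            | set(device_request.get("required", []))
        ),
    }
    # sum resource amounts per resource class
    for rc, amount in device_request.get("resources", {}).items():
        merged["resources"][rc] = merged["resources"].get(rc, 0) + amount
    return merged


# e.g. a port with a minimum-bandwidth request plus a device profile request
port = {"resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000},
        "required": ["CUSTOM_PHYSNET_PHYSNET0"]}
device = {"resources": {"CUSTOM_FPGA": 1}, "required": ["CUSTOM_SMARTNIC"]}
merged = merge_resource_requests(port, device)
```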
> the current nova code also only support one device profile per > instance so we need to ensure whatever approch we take we need to ensure that device profiles can co exits in both the flavor and multiple ports and the resource requests are grouped correctly in each case. > > > > I would be very appreciated if this change can land in nova, as I see the advantages are: > > 1) This keeps accelerator request groups things handled in on place, > > which makes integration clear and simple, Nova controls the main > > manage things,the neutron handles network backends integration,and cyborg involves accelerator management. > > 2) Another good thing: this will dispels Lajos's concerns on port-resource-request! > > im not really sure what that concern is. port-resouce-request was > created for initally for qos minimum bandwith support but ideally it is a mechanism for comunicating any placement resouce requirements to nova. > my proposal was that neutron would retrive the device profile resouce > requests(and cache it) then append those requests too the other port-resouce-requests so that they will be included in the ports request group. > > 3) As presented in the proposal(page 3 and 5 of the slide)[0], > > please don't worry! This will be a tiny change in nova. > > Cyborg will be very appreciated if this change can land in nova, for > > it saves much effort in cyborg-neutron integration. > > > > > > @Lajos Katona: > > > Port-resource-request > > > (see:https://docs.openstack.org/api-ref/network/v2/index.html#port > > > -r > > > esource-request) is a read-only (and admin-only) field of ports, > > > which is filled based on the agent heartbeats. So now there is now > > > polling of agents or similar. Adding extra "overload" to this > > > mechanism, like polling cyborg or similar looks something out of > > > the original design for me, not to speak about the performance > > > issues to add > > there is no need for any polling of cyborg. 
> the device-profile in the cyborg api is immutable: you cannot specify the uuid when creating it, and the name is the unique constraint.
> so even if someone were to delete and recreate the device profile with the same name, the uuid would not be the same.
>
> the first time a device profile is added to a port, the neutron server can look up the device profile once and cache it.
> so ideally the neutron server could cache the response of "cyborg profile show", i.e. the resource group requests for the profile, keyed by name and uuid. the uuid is only useful to catch the ABA problem of people creating, deleting and recreating the profile with the same name.
>
> i should note that if you do delete and recreate the device profile it is highly likely to break nova, so that should not be done. this is because nova relies on the fact that cyborg's api says this cannot change, so we are not storing the version the vm was booted with and are relying on cyborg not to change it. neutron can make the same assumption that a device profile definition will not change.
>
> > > - API requests towards cyborg (or anything else) to every port GET operation
> > > - store cyborg related information in neutron db which was fetched from cyborg (periodically I suppose) to make neutron able to fill port-resource-request.
> >
> > As mentioned above, if the request group can land in nova, we don't need to be concerned about API requests towards cyborg or cyborg info being merged into port-resource-request.
>
> this would still have to be done in nova instead. really this is not something nova should have to special-case for; i'm not saying it can't, but ideally neutron would leverage the existing port-resource-request feature.
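The look-up-once-and-cache idea described here, with the uuid used only to catch the delete-and-recreate (ABA) case, could be sketched like this. It is a hypothetical helper, not actual neutron or cyborg code, and the profile dict shape is an assumption for illustration.

```python
class DeviceProfileCache:
    """Hypothetical sketch: device profiles are immutable in the cyborg
    API, so cache them by name; the uuid is only compared to detect the
    delete-and-recreate (ABA) case."""

    def __init__(self, fetch_profile):
        # fetch_profile(name) -> {"uuid": ..., "resource_requests": {...}}
        # (assumed shape, for illustration only)
        self._fetch = fetch_profile
        self._cache = {}

    def get(self, name):
        """Fetch the profile once, then serve it from the cache."""
        if name not in self._cache:
            self._cache[name] = self._fetch(name)
        return self._cache[name]

    def detect_recreated(self, name):
        """Re-fetch and compare uuids; True if the profile was recreated
        under the same name (in which case the stale entry is replaced)."""
        current = self._fetch(name)
        recreated = current["uuid"] != self._cache.get(name, current)["uuid"]
        if recreated:
            self._cache[name] = current
        return recreated
```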
> > Another question just for curiosity.In my understanding(Please > > correct me if I'm worng.), I feel that neutron doesn't need to poll > > cyborg periodically if neutron fill port-resource- request, just fetch it once port request happens. > > correct no need to poll, it just need to featch it when the profile is added to the port and it can be cached safely. > > because neutron can expect that the cyborg device_profile(provides > > resource request info for nova scheduler) don't change very often, > > it imuntable in the cyborg api so it can expect that for a given name > it should never change but it could detect that by looking at both the name and uuid. > > it is the flavor of accelerator, and only admin can create/delete them. > > yes only admin can create and delete them and it does not support update. > i think its invalid to delete a device profile if its currently in use by any neutron port or nova instance. > its certenly invalid or should to delete it if there is a arq using the device-profile. > > > > [0]pre-PTG slides update: > > https://docs.qq.com/slide/DVkxSUlRnVGxnUFR3 > > > > Regards, > > Yumeng > > > > > > > > > > On Friday, May 29, 2020, 3:21:08 PM GMT+8, Lajos Katona wrote: > > > > > > > > > > > > Hi, > > > > Port-resource-request (see: > > https://docs.openstack.org/api-ref/network/v2/index.html#port-resour > > ce -requestst ) is a read-only (and admin-only) field of ports, > > which is filled based on the agent heartbeats. So now there is now > > polling of agents or similar. Adding extra "overload" to this > > mechanism, like polling cyborg or similar looks something out of the > > original design for me, not to speak about the performance issues to add > > * API requests towards cyborg (or anything else) to every port GET operation > > * store cyborg related information in neutron db which was > > fetched from cyborg (periodically I suppose) to make neutron able to fill port-resource-request. 
> > Regards > > Lajos > > > > Sean Mooney ezt írta (időpont: 2020. máj. 28., Cs, 16:13): > > > On Thu, 2020-05-28 at 20:50 +0800, yumeng bao wrote: > > > >  > > > > Hi all, > > > > > > > > > > > > In cyborg pre-PTG meeting conducted last week[0],shaohe from > > > > Intel introduced SmartNIC support integrations,and we've reached > > > > some initial agreements: > > > > > > > > The workflow for a user to create a server with network acceleartor(accelerator is managed by Cyborg) is: > > > > > > > > 1. create a port with accelerator request specified into binding_profile field > > > > NOTE: Putting the accelerator request(device_profile) into > > > > binding_profile is one possible solution implemented in our POC. > > > > > > the binding profile field is not really intended for this. > > > > > > https://github.com/openstack/neutron-lib/blob/master/neutron_lib/a > > > pi > > > /definitions/portbindings.py#L31-L34 > > > its intended to pass info from nova to neutron but not the other way around. > > > it was orgininally introduced so that nova could pass info to the > > > neutron plug in specificly the sriov pci address. it was not > > > intended for two way comunicaiton to present infom form neutron to nova. > > > > > > we kindo of broke that with the trusted vf feature but since that > > > was intended to be admin only as its a security risk in a mulit > > > tenant cloud its a slightl different case. > > > i think we should avoid using the binding profile for passing info > > > form neutron to nova and keep it for its orginal use of passing info from the virt dirver to the network backend. > > > > > > > > > > Another possible solution,adding a new attribute to port > > > > object for cyborg specific use instead of using binding_profile, is discussed in shanghai Summit[1]. > > > > This needs check with neutron team, which neutron team would suggest? > > > > > > from a nova persepctive i would prefer if this was a new extention. 
> > > the binding profile is admin only by default so its not realy a good way to request features be enabled. > > > you can use neutron rbac policies to alther that i belive but in > > > genral i dont think we shoudl advocate for non admins to be able > > > to modify the binding profile as they can break nova. e.g. by modifying the pci addres. > > > if we want to supprot cyborg/smartnic integration we should add a > > > new device-profile extention that intoduces the ablity for a non > > > admin user to specify a cyborg device profile name as a new attibute on the port. > > > > > > the neutron server could then either retirve the request groups > > > form cyborg and pass them as part of the port resouce request > > > using the mechanium added for minium bandwidth or it can leave that to nova to manage. > > > > > > i would kind of prefer neutron to do this but both could work. > > > > > > > > 2.create a server with the port created > > > > > > > > Cyborg-nova-neutron integration workflow can be found on page 3 of the slide[2] presented in pre-PTG. > > > > > > > > And we also record the introduction! Please find the pre-PTG > > > > meeting vedio record in [3] and [4], they are the same, just for > > > > different region access. > > > > > > > > > > > > [0]http://lists.openstack.org/pipermail/openstack-discuss/2020-M > > > > ay > > > > /014987.html > > > > [1]https://etherpad.opendev.org/p/Shanghai-Neutron-Cyborg-xproj > > > > [2]pre-PTG slides:https://docs.qq.com/slide/DVm5Jakx5ZlJXY3lw > > > > [3]pre-PTG vedio records in > > > > Youtube:https://www.youtube.com/watch?v=IN4haOK7sQg&feature=youtu. 
> > > > be [4]pre-PTG vedio records in Youku: > > > > http://v.youku.com/v_show/id_XNDY5MDA4NjM2NA==.html?x&sharefrom= > > > > ip > > > > hone&sharekey=51459cbd599407990dd09940061b374d4 > > > > > > > > Regards, > > > > Yumeng > > > > > > > > > > > > > > > > > > > From berendt at betacloud-solutions.de Wed Jun 3 13:08:44 2020 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Wed, 3 Jun 2020 15:08:44 +0200 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core In-Reply-To: References: Message-ID: +1 > Am 29.05.2020 um 15:18 schrieb Radosław Piliszek : > > Hi Folks! > > This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, CC'ed) as Kolla Ansible core. > > Doug coauthored the Nova cells support and helps greatly with monitoring and logging facilities available in Kolla. > > Please give your feedback in this thread. > > If there are no objections, I will add Doug after a week from now (that is roughly when PTG is over). > > -yoctozepto -- Christian Berendt Geschäftsführer Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From knikolla at bu.edu Wed Jun 3 13:30:41 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Wed, 3 Jun 2020 13:30:41 +0000 Subject: [keystone] Keystone PTG Schedule In-Reply-To: <10FA4BC8-3139-425E-9F86-5F02F756DF61@bu.edu> References: <10FA4BC8-3139-425E-9F86-5F02F756DF61@bu.edu> Message-ID: <80D6617D-D4BD-4A3C-9839-CD81E9330B14@bu.edu> With regards to the Meetup, let's do Friday 1700 UTC, after our last session! Best, Kristi > On May 31, 2020, at 2:13 PM, Nikolla, Kristi wrote: > > Hi all, > > The Keystone team will be meeting Thursday and Friday 1300-1700 UTC. The schedule[0] can be found below and provides information on how to register and attend. 
> > In addition, I'm hoping to organize a team meetup Wednesday, Thursday or Friday at 1700 UTC. Please fill in the doodle at [1].
> >
> > More general PTG information can be found at [2].
> >
> > [0]. https://etherpad.opendev.org/p/victoria-ptg-keystone
> > [1]. https://doodle.com/poll/z7diwe6qyq4mbghe
> > [2]. http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015054.html

From gmann at ghanshyammann.com Wed Jun 3 13:53:52 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 03 Jun 2020 08:53:52 -0500
Subject: [nova] [ops] user_id based policy enforcement
In-Reply-To:
References:
Message-ID: <1727a75adab.eeb22290154432.8664045094288843124@ghanshyammann.com>

---- On Wed, 03 Jun 2020 01:18:41 -0500 Massimo Sgaravatto wrote ----
> Hi
> In my Rocky installation I am preventing users from deleting instances created by other users of the same project. This was implemented by setting in the nova policy file:
>
> "os_compute_api:servers:delete": "rule:admin_api or user_id:%(user_id)s"
>
> This works, although in the nova log file I see:
> The user_id attribute isn't supported in the rule 'os_compute_api:servers:delete'. All the user_id based policy enforcement will be removed in the future.
>
> Now I would also like to prevent users from seeing the console log of instances created by other users. I set in the nova policy file:
> "os_compute_api:os-console-output": "rule:admin_api or user_id:%(user_id)s"

Nova does not restrict policy by user_id except for the keypairs API and a few destructive actions (which I think we kept for backward compatibility and intend to remove later; that is why you see the warning). I remember we discussed this in 2016, but I could not find the ML thread; the consensus at the time was that we do not intend to support user_id based restrictions in the API.

On the same note, from Ussuri onwards you can enforce some user-level restriction based on roles, but not by user_id.
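For illustration, a check string like "rule:admin_api or user_id:%(user_id)s" passes when the caller has the required role or when the caller's user_id matches the user_id stored on the target object. A drastically simplified evaluator sketch follows — this is not the actual oslo.policy implementation (real check strings support nested rules, "and", negation, etc.), and "rule:admin_api" is approximated here as "role:admin":

```python
def check(rule, creds, target):
    """Evaluate a simplified 'or'-of-checks policy rule.

    creds:  who makes the request, e.g. {"user_id": ..., "roles": [...]}
    target: the object acted upon, e.g. {"user_id": <instance owner>}

    Only 'role:X' and 'user_id:%(user_id)s' style checks are handled.
    """
    for part in rule.split(" or "):
        kind, _, match = part.partition(":")
        match = match % target  # substitute e.g. %(user_id)s from the target
        if kind == "role" and match in creds.get("roles", []):
            return True
        if kind == "user_id" and match == creds.get("user_id"):
            return True
    return False


owner = {"user_id": "alice", "roles": ["member"]}
other = {"user_id": "bob", "roles": ["member"]}
admin = {"user_id": "carol", "roles": ["admin"]}
instance = {"user_id": "alice"}  # instance created by alice

rule = "role:admin or user_id:%(user_id)s"
```

With this sketch, the owner and an admin pass while another member of the same project is refused — the behaviour the rule above is after.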
In the Ussuri cycle, we implemented the new Keystone default roles in nova policy. You can assign read and write roles to users and achieve user isolation within the same project. Please refer to this doc for more details on those new policies - https://docs.openstack.org/nova/latest/configuration/policy-concepts.html

-gmann

> but this doesn't work. Any hints?
> More generally: was user_id based policy enforcement eventually removed in later OpenStack releases? If so, what are the possible alternatives for implementing my use case?
> Thanks, Massimo

From david.j.ivey at gmail.com Wed Jun 3 14:52:32 2020
From: david.j.ivey at gmail.com (David Ivey)
Date: Wed, 3 Jun 2020 10:52:32 -0400
Subject: [all] OpenStack versions that can't practically be run with Python 3 ?
In-Reply-To: <172761ac1eb.de306d99111474.6144809529980746475@ghanshyammann.com>
References: <172761ac1eb.de306d99111474.6144809529980746475@ghanshyammann.com>
Message-ID:

My apologies, I did not mean to make it sound like Stein did not support python 3. My issues were not necessarily bugs; they were mostly dependency issues I was working through, largely because I do not use typical deployment methods like TripleO, Kolla-Ansible, etc., as we build and maintain our own deployment tool. I just didn't have the time to work through them and ended up reverting due to time constraints.

David

On Tue, Jun 2, 2020 at 1:36 PM Ghanshyam Mann wrote:
> ---- On Tue, 02 Jun 2020 09:02:18 -0500 David Ivey <david.j.ivey at gmail.com> wrote ----
> > For me, Stein still had a lot of issues with python3 when I tried to use it, but I had tried the upgrade shortly after Stein had released, so those issues may have been resolved by now. I ended up reverting back to Rocky and python2.7. My first real stable build with python3 was with the Train release on Ubuntu 18.04, so I skipped the Stein release.
> > We did migrate the upstream integration testing to Ubuntu 18.04 in Stein > [1], so > I feel stein should be fine on python3 until we are missing the scenario > failing for you in our testing. > > > Someone can correct me if I am wrong, but last I checked, CentOS 7 did > not have the python3 packages in RDO. So if using CentOS 7; RDO does not > have Ussuri and the latest release there is Train with python2.7. If using > CentOS 8 and the Ussuri release; RDO released the python3 packages last > week. > > I have not tried Ussuri on CentOS 8 yet. > > David > > On Tue, Jun 2, 2020 at 8:25 AM Sean McGinnis > wrote: > > On 6/2/20 6:34 AM, Neil Jerram wrote: > > > Does anyone know the most recent OpenStack version that > > > _can't_ easily be run with Python 3? I think the full answer to this > > > may have to consider distro packaging, as well as the underlying code > > > support. > > > > > > For example, I was just looking at switching an existing Queens setup, > > > on Ubuntu Bionic, and it can't practically be done because all of the > > > scripts - e.g. /usr/bin/nova-compute - have a hashbang line that says > > > "python2". > > > > > > So IIUC Queens is a no for Python 3, at least in the Ubuntu packaging. > > > > > > Do you know if this is equally true for later versions than Queens? > > > Or alternatively, if something systematic was done to address this > > > problem in later releases? E.g. is there a global USE_PYTHON3 switch > > > somewhere, or was the packaging for later releases changed to hardcode > > > "python3" instead of "python2"? If so, when did that happen? 
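Regarding the hashbang question above: whether a distro package's scripts still pin python2 can be checked mechanically. A small illustrative helper (not from any OpenStack tooling; point it at e.g. /usr/bin on a real system):

```python
import os


def find_python2_scripts(directory):
    """Return paths of files under *directory* whose hashbang line still
    references python2.

    Illustrative only: on a real system you would point this at the
    directory holding the packaged entry points, e.g. /usr/bin.
    """
    hits = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            first_line = f.readline()
        # only count actual hashbang lines that mention python2
        if first_line.startswith(b"#!") and b"python2" in first_line:
            hits.append(path)
    return hits
```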
> > USE_PYTHON3 in devstack was switched to True by default in Ussuri cycle, > but we moved > (unit test as well as integration tests) on python3 by default: > > - Unit tests (same as Sean already mentioned)- > https://governance.openstack.org/tc/goals/selected/stein/python3-first.html > - [1] Integration testing: > http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004647.html > > If something is failing with Stein on python 3 then I will suggest > reporting the bug and we can check if that can be fixed. > > -gmann > > > > > > Stein was the release where we had a cycle goal to get everyone using > > Python 3: > > > > > https://governance.openstack.org/tc/goals/selected/stein/python3-first.html > > > > Part of the completion criteria for that goal was that all projects > > should, at a minimum, be running py3.6 unit tests. So a couple of > > caveats there - unit tests don't always identify issues that you can run > > in to actually running full functionality, and not every project was > > able to complete the cycle goal completely. Most did though. > > > > So I think Stein likely should work for you, but of course Train or > > Ussuri will have had more time to identify any missed issues and the > like. > > > > I hope this helps. > > > > Sean > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Jun 3 16:03:24 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 3 Jun 2020 12:03:24 -0400 Subject: [OSSN-0086] Dell EMC ScaleIO/VxFlex OS Backend Credentials Exposure Message-ID: <87ccb7d8-b662-4df5-ecba-81fffcb7d202@gmail.com> Dell EMC ScaleIO/VxFlex OS Backend Credentials Exposure --- ### Summary ### This vulnerability is present when using Cinder with a Dell EMC ScaleIO or VxFlex OS storage backend. Note: The Dell EMC "ScaleIO" driver was rebranded as "VxFlex OS" in the Train release. 
### Affected Services / Software ### Cinder / Ocata, Pike, Queens, Rocky, Stein, Train, Ussuri This vulnerability applies only when using a Dell EMC ScaleIO/VxFlex OS Backend with Cinder. Other drivers are not impacted. ### Discussion ### When using Cinder with the Dell EMC ScaleIO or VxFlex OS backend storage driver, credentials for the entire backend are exposed in the ``connection_info`` element in all Block Storage v3 Attachments API calls containing that element. This enables an end user to create a volume, make an API call to show the attachment detail information, and retrieve a username and password that may be used to connect to another user's volume. Additionally, these credentials are valid for the ScaleIO or VxFlex OS Management API, should an attacker discover the Management API endpoint. This issue was reported by David Hill and Eric Harney of Red Hat. ### Recommended Actions ### Remediation of this issue consists of the following: 1. Patching the ScaleIO or VxFlex OS Cinder driver so that it no longer provides the password to Cinder when a Block Storage v3 Attachments API response is constructed. 2. Patching the ScaleIO connector in the os-brick library so that it retrieves the password from a configuration file readable only by root. (Note: the connector was not rebranded; both ScaleIO and VxFlex OS backends use the 'scaleio' os-brick connector.) 3. Patching the ScaleIO os-brick privileged file that allows the scaleio connector to escalate privileges for specific operations; this is necessary to allow the connector process to access the configuration file that is readable only by root. 4. Deploying a configuration file containing the password (and replication password, if applicable) to all compute nodes, cinder nodes, and anywhere you would perform a volume attachment in your deployment. To refresh database information, all volumes should be detached and reattached. 
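Schematically, remediation step 1 amounts to dropping the credential from the ``connection_info`` before it is returned in an Attachments API response. The sketch below is illustrative only — the field name is hypothetical, and the real fix lives in the cinder driver and os-brick connector patches referenced in this notice:

```python
def scrub_connection_info(connection_info):
    """Return a copy of connection_info without backend credentials.

    Schematic only: the real remediation patches the ScaleIO/VxFlex OS
    cinder driver and os-brick connector; 'serverPassword' is a
    hypothetical field name used for illustration.
    """
    scrubbed = dict(connection_info)
    scrubbed.pop("serverPassword", None)  # never expose this via the API
    return scrubbed


# Example of what an attachment response would carry before/after scrubbing.
conn = {
    "serverIP": "192.0.2.10",    # management endpoint
    "serverUsername": "admin",   # backend-wide credential (user name)
    "serverPassword": "s3cret",  # must not reach the API response
    "volumeID": "abc123",
}
safe = scrub_connection_info(conn)
```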
Because this remediation consists of deploying credentials in a root-readable-only file, it is not suitable for the use case of attaching a volume to a bare metal host. Thus, the Dell EMC ScaleIO/VxFlex OS storage backend for Cinder is *not recommended* for use with bare metal hosts. Note: The Ocata, Pike, Queens, and Rocky branches of OpenStack are in the Extended Maintenance phase. Point releases are no longer made from these branches and security patches are produced only on a reasonable effort basis. Patches for Queens and Rocky are provided as a courtesy. Patches for Ocata and Pike are not available. #### Patches #### Both cinder and os-brick must be patched. Documentation is provided as part of the cinder patch concerning the new configuration file that must be deployed to all compute nodes, cinder nodes, and anywhere you would perform a volume attachment in your deployment. Queens * cinder: https://review.opendev.org/733110 * os-brick: https://review.opendev.org/733104 Rocky * cinder: https://review.opendev.org/733109 * os-brick: https://review.opendev.org/733103 Stein * cinder: https://review.opendev.org/733108 * os-brick: https://review.opendev.org/733102 Train * cinder: https://review.opendev.org/733107 * os-brick: https://review.opendev.org/733100 Ussuri * cinder: https://review.opendev.org/733106 * os-brick: https://review.opendev.org/733099 Alternatively, point releases for Stein, Train, and Ussuri will be made as soon as possible. 
These will be: Stein: cinder 14.0.5, requires os-brick 2.8.5 Train: cinder 15.1.1, requires os-brick 2.10.3 Ussuri: cinder 16.0.1, requires os-brick 3.0.2 ### Contacts / References ### Author: Brian Rosmaita, Red Hat This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0086 Original LaunchPad Bug : https://bugs.launchpad.net/cinder/+bug/1823200 Mailing List : [Security] tag on openstack-discuss at lists.openstack.org OpenStack Security Project : https://launchpad.net/~openstack-ossg CVE: CVE-2020-10755 From Scott.Solkhon at gresearch.co.uk Wed Jun 3 16:03:53 2020 From: Scott.Solkhon at gresearch.co.uk (Scott Solkhon) Date: Wed, 3 Jun 2020 16:03:53 +0000 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core In-Reply-To: References: Message-ID: <5af3ed8d71fc40cd857cd44a297ff2e1@mailserver.local> +1 -----Original Message----- From: Christian Berendt Sent: 03 June 2020 14:09 To: Radosław Piliszek Cc: openstack-discuss ; Doug Szumski Subject: Re: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core +1 > Am 29.05.2020 um 15:18 schrieb Radosław Piliszek : > > Hi Folks! > > This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, CC'ed) as Kolla Ansible core. > > Doug coauthored the Nova cells support and helps greatly with monitoring and logging facilities available in Kolla. > > Please give your feedback in this thread. > > If there are no objections, I will add Doug after a week from now (that is roughly when PTG is over). > > -yoctozepto -- Christian Berendt Geschäftsführer Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 -------------- G-RESEARCH believes the information provided herein is reliable. 
While every care has been taken to ensure accuracy, the information is furnished to the recipients with no warranty as to the completeness and accuracy of its contents and on condition that any errors or omissions shall not be made the basis of any claim, demand or cause of action. The information in this email is intended only for the named recipient. If you are not the intended recipient please notify us immediately and do not copy, distribute or take action based on this e-mail. All messages sent to and from this e-mail address will be logged by G-RESEARCH and are subject to archival storage, monitoring, review and disclosure. For information about how G-RESEARCH uses your personal data, please refer to our Privacy Policy at https://www.gresearch.co.uk/privacy-policy/. G-RESEARCH is the trading name of Trenchant Limited, 5th Floor, Whittington House, 19-30 Alfred Place, London WC1E 7EA. Trenchant Limited is a company registered in England with company number 08127121.
--------------
From shaohe.feng at intel.com Wed Jun 3 00:27:17 2020
From: shaohe.feng at intel.com (Feng, Shaohe)
Date: Wed, 3 Jun 2020 00:27:17 +0000
Subject: [cyborg][nova][neutron]Summaries of Smartnic support integration
In-Reply-To: <70d739bb16f192214cd575410498af79c8a22b48.camel@redhat.com>
References: <1936985058.312813.1590940419571@mail.yahoo.com> <70d739bb16f192214cd575410498af79c8a22b48.camel@redhat.com>
Message-ID: <7B5303F69BB16B41BB853647B3E5BD70600A0F92@SHSMSX104.ccr.corp.intel.com>
Yes, we should make sure that device profiles can coexist in both the flavor and multiple ports, and that the resource requests are grouped correctly in each case. We also need to ensure that whatever approach we take lets accelerator requests (ARQs) coexist from both the flavor and multiple ports. For example, we may want to delete a port, so we should be able to distinguish ARQs that came from the flavor from those that came from ports. So which approach: update the ARQ into the port's "binding:profile", or bind the ARQ to the port UUID instead of the instance UUID?
Or other approach? Regards Shaohe -----Original Message----- From: Sean Mooney Sent: 2020年6月2日 21:47 To: yumeng bao ; Lajos Katona Cc: openstack maillist ; Feng, Shaohe Subject: Re: [cyborg][nova][neutron]Summaries of Smartnic support integration On Sun, 2020-05-31 at 15:53 +0000, yumeng bao wrote: > Hi Sean and Lajos, > > Thank you so much for your quick response, good suggestions and feedbacks! > > @Sean Mooney > > if we want to supprot cyborg/smartnic integration we should add a > > new device-profile extention that intoduces the ablity for a non > > admin user to specify a cyborg device profile name as a new attibute > > on the port. > > +1,Agree. Cyborg likes this suggestion! This will be more clear that this field is for device profile usage. > The reason why we were firstly thinking of using binding:profile is > that this is a way with the smallest number of changes possible in > both nova and neutron.But thinking of the non-admin issue, the > violation of the one way comunicaiton of binding:profile, and the possible security risk of breaking nova(which we surely don't want to do that), we surely prefer giving up binding:profile and finding a better place to put the new device-profile extention. > > > the neutron server could then either retirve the request groups form > > cyborg and pass them as part of the port resouce request using the > > mechanium added for minium bandwidth or it can leave that to nova to > > manage. > > > > i would kind of prefer neutron to do this but both could work. > > > Yes, neutron server can also do that,but given the fact that we > already landed the code of retriving request groups form cyborg in > nova, can we reuse this process in nova and add new process in create_resource_requests to create accelerator resource request from port info? 
the advantage of neutron doing it is that it can merge the cyborg resource requests with any other resource requests for the port; if nova does it, it needs slightly different logic than the existing code we have today. the existing code would make the cyborg resource requests a separate placement group, and we need them to be merged with the port request group. the current nova code also only supports one device profile per instance, so whatever approach we take we need to ensure that device profiles can coexist in both the flavor and multiple ports and that the resource requests are grouped correctly in each case.
> 
> I would very much appreciate it if this change can land in nova, as I see the advantages are:
> 1) This keeps accelerator request group handling in one place, which makes the integration clear and simple: Nova controls the main management, neutron handles network backend integration, and cyborg handles accelerator management.
> 2) Another good thing: this will dispel Lajos's concerns on port-resource-request!
i'm not really sure what that concern is. port-resource-request was created initially for qos minimum bandwidth support, but ideally it is a mechanism for communicating any placement resource requirements to nova. my proposal was that neutron would retrieve the device profile resource requests (and cache them), then append those requests to the other port-resource-requests so that they will be included in the port's request group.
> 3) As presented in the proposal (pages 3 and 5 of the slides)[0], please don't worry! This will be a tiny change in nova.
> Cyborg would be very appreciative if this change can land in nova, for it saves much effort in the cyborg-neutron integration.
> 
> @Lajos Katona:
> > Port-resource-request (see: https://docs.openstack.org/api-ref/network/v2/index.html#port-resource-request) is a read-only (and admin-only) field of ports, which is filled based on the agent heartbeats.
> > So now there is no polling of agents or similar. Adding extra "overload" to this mechanism, like polling cyborg or similar, looks like something outside the original design to me, not to speak of the performance issues to add
there is no need for any polling of cyborg. the device-profile in the cyborg api is immutable: you cannot specify the uuid when creating it, and the name is the unique constraint. so even if someone were to delete and recreate the device profile with the same name, the uuid would not be the same. the first time a device profile is added to a port, the neutron server can look up the device profile once and cache it. so ideally the neutron server could cache the response of a "cyborg profile show", e.g. listing the resource group requests for the profile, using the name and uuid. the uuid is only useful to catch the ABA problem of people creating, deleting and recreating the profile with the same name. i should note that if you do delete and recreate the device profile it is highly likely to break nova, so that should not be done. this is because nova is relying on the fact that cyborg's api says this cannot change, so we are not storing the version the vm was booted with and are relying on cyborg not to change it. neutron can make the same assumption that a device profile definition will not change.
> > 
> > - API requests towards cyborg (or anything else) to every port GET operation
> > - store cyborg related information in neutron db which was fetched from cyborg (periodically I suppose) to make neutron able to fill port-resource-request.
> 
> As mentioned above, if the request group handling can land in nova, we don't need to be concerned about API requests towards cyborg or cyborg info merged into port-resource-request.
this would still have to be done in nova instead. really this is not something nova should have to special-case; i'm not saying it can't, but ideally neutron would leverage the existing port-resource-request feature.
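The caching scheme described above — fetch a device profile's resource request once when it is first attached to a port, and use the immutable UUID to catch the delete-and-recreate (ABA) case — can be sketched roughly as follows. This is an illustration only, not neutron or cyborg code; the class, the `fetch_profile` callback, and the dict shape are invented for the example.

```python
# Hypothetical sketch: cache cyborg device-profile resource requests by
# profile name, with an optional ABA check based on the immutable UUID.

class DeviceProfileCache:
    def __init__(self, fetch_profile):
        # fetch_profile(name) -> {"uuid": ..., "resource_request": ...}
        # stands in for a real "cyborg profile show"-style API call.
        self._fetch = fetch_profile
        self._cache = {}  # name -> {"uuid": ..., "resource_request": ...}

    def resource_request(self, name):
        # Only the first use of a profile name triggers an API call;
        # profiles are immutable, so no periodic polling is needed.
        if name not in self._cache:
            self._cache[name] = self._fetch(name)
        return self._cache[name]["resource_request"]

    def validate(self, name):
        # ABA check: a profile deleted and recreated under the same name
        # gets a new UUID, which invalidates the cached entry.
        fresh = self._fetch(name)
        if self._cache.get(name, fresh)["uuid"] != fresh["uuid"]:
            self._cache[name] = fresh
            return False
        self._cache.setdefault(name, fresh)
        return True
```

A real implementation would replace the injected callback with an actual call to the cyborg API, and `validate` would only be needed to detect the (invalid) delete-and-recreate case.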
> Another question just for curiosity. In my understanding (please correct me if I'm wrong), I feel that neutron doesn't need to poll cyborg periodically if neutron fills port-resource-request; it can just fetch it once a port request happens.
correct, no need to poll; it just needs to fetch it when the profile is added to the port, and it can be cached safely.
> because neutron can expect that the cyborg device_profile (which provides resource request info for the nova scheduler) doesn't change very often,
it is immutable in the cyborg api, so it can expect that for a given name it should never change, but it could detect that by looking at both the name and uuid.
> it is the flavor of accelerator, and only admin can create/delete them.
yes, only an admin can create and delete them, and update is not supported. i think it is invalid to delete a device profile if it is currently in use by any neutron port or nova instance. it is certainly invalid, or should be, to delete it if there is an arq using the device-profile.
> 
> [0]pre-PTG slides update: https://docs.qq.com/slide/DVkxSUlRnVGxnUFR3
> 
> Regards,
> Yumeng
> 
> On Friday, May 29, 2020, 3:21:08 PM GMT+8, Lajos Katona wrote:
> 
> Hi,
> Port-resource-request (see: https://docs.openstack.org/api-ref/network/v2/index.html#port-resource-request) is a read-only (and admin-only) field of ports, which is filled based on the agent heartbeats. So now there is no polling of agents or similar. Adding extra "overload" to this mechanism, like polling cyborg or similar, looks like something outside the original design for me, not to speak about the performance issues to add
> * API requests towards cyborg (or anything else) to every port GET operation
> * store cyborg related information in neutron db which was fetched from cyborg (periodically I suppose) to make neutron able to fill port-resource-request.
> Regards
> Lajos
> 
> Sean Mooney ezt írta (időpont: 2020. máj.
28., Cs, 16:13): > > On Thu, 2020-05-28 at 20:50 +0800, yumeng bao wrote: > > >  > > > Hi all, > > > > > > > > > In cyborg pre-PTG meeting conducted last week[0],shaohe from Intel > > > introduced SmartNIC support integrations,and we've reached some > > > initial agreements: > > > > > > The workflow for a user to create a server with network acceleartor(accelerator is managed by Cyborg) is: > > > > > > 1. create a port with accelerator request specified into binding_profile field > > > NOTE: Putting the accelerator request(device_profile) into > > > binding_profile is one possible solution implemented in our POC. > > > > the binding profile field is not really intended for this. > > > > https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api > > /definitions/portbindings.py#L31-L34 > > its intended to pass info from nova to neutron but not the other way around. > > it was orgininally introduced so that nova could pass info to the > > neutron plug in specificly the sriov pci address. it was not > > intended for two way comunicaiton to present infom form neutron to nova. > > > > we kindo of broke that with the trusted vf feature but since that > > was intended to be admin only as its a security risk in a mulit > > tenant cloud its a slightl different case. > > i think we should avoid using the binding profile for passing info > > form neutron to nova and keep it for its orginal use of passing info from the virt dirver to the network backend. > > > > > > > Another possible solution,adding a new attribute to port > > > object for cyborg specific use instead of using binding_profile, is discussed in shanghai Summit[1]. > > > This needs check with neutron team, which neutron team would suggest? > > > > from a nova persepctive i would prefer if this was a new extention. > > the binding profile is admin only by default so its not realy a good way to request features be enabled. 
> > you can use neutron rbac policies to alther that i belive but in > > genral i dont think we shoudl advocate for non admins to be able to > > modify the binding profile as they can break nova. e.g. by modifying the pci addres. > > if we want to supprot cyborg/smartnic integration we should add a > > new device-profile extention that intoduces the ablity for a non > > admin user to specify a cyborg device profile name as a new attibute on the port. > > > > the neutron server could then either retirve the request groups form > > cyborg and pass them as part of the port resouce request using the > > mechanium added for minium bandwidth or it can leave that to nova to manage. > > > > i would kind of prefer neutron to do this but both could work. > > > > > > 2.create a server with the port created > > > > > > Cyborg-nova-neutron integration workflow can be found on page 3 of the slide[2] presented in pre-PTG. > > > > > > And we also record the introduction! Please find the pre-PTG > > > meeting vedio record in [3] and [4], they are the same, just for > > > different region access. > > > > > > > > > [0]http://lists.openstack.org/pipermail/openstack-discuss/2020-May > > > /014987.html > > > [1]https://etherpad.opendev.org/p/Shanghai-Neutron-Cyborg-xproj > > > [2]pre-PTG slides:https://docs.qq.com/slide/DVm5Jakx5ZlJXY3lw > > > [3]pre-PTG vedio records in > > > Youtube:https://www.youtube.com/watch?v=IN4haOK7sQg&feature=youtu. 
> > > be [4]pre-PTG vedio records in Youku: > > > http://v.youku.com/v_show/id_XNDY5MDA4NjM2NA==.html?x&sharefrom=ip > > > hone&sharekey=51459cbd599407990dd09940061b374d4 > > > > > > Regards, > > > Yumeng > > > > > > > > > > > From massimo.sgaravatto at gmail.com Wed Jun 3 16:17:41 2020 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Wed, 3 Jun 2020 18:17:41 +0200 Subject: [nova] [ops] user_id based policy enforcement In-Reply-To: <1727a75adab.eeb22290154432.8664045094288843124@ghanshyammann.com> References: <1727a75adab.eeb22290154432.8664045094288843124@ghanshyammann.com> Message-ID: Thank you ! The "destructive actions" I guess are the ones listed here: https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/user-id-based-policy-enforcement.html In this page there is also the link to the ML thread you were talking about So the user_id based policy enforcement for those destructive actions is supposed to work till Train ? Did I get it right ? Thanks, Massimo On Wed, Jun 3, 2020 at 3:53 PM Ghanshyam Mann wrote: > ---- On Wed, 03 Jun 2020 01:18:41 -0500 Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> wrote ---- > > Hi > > In my Rocky installation I am preventing users from deleting instances > created by other users of the same project.This was implemented setting in > the nova policy file: > > > > "os_compute_api:servers:delete": "rule:admin_api or user_id:%(user_id)s" > > > > This works, even if in the nova log file I see: > > The user_id attribute isn't supported in the rule > 'os_compute_api:servers:delete'. All the user_id based policy enforcement > will be removed in the future. > > > > Now I would also like preventing user to see the console log file of > instances created by other users. 
I set in the nova policy file:
> > "os_compute_api:os-console-output" : "rule:admin_api or user_id:%(user_id)s"
> 
> Nova does not restrict the policy by user_id except for the keypairs API and a few of the destructive actions (which I think we kept for backwards compatibility and intend to remove later; that is why you can see the warning). I remember we discussed this in 2016, but I could not find the ML thread for it; the consensus at the time was that we do not intend to support user_id-based restriction permissions in the API.
> 
> On the same note, from Ussuri onwards you can enforce some user-level restrictions based on the role, but not by user_id. In the Ussuri cycle, we implemented the new keystone default roles in nova policy. You can assign read and write roles to users and achieve user isolation within the same project.
> Please refer to this doc for more details on those new policies:
> https://docs.openstack.org/nova/latest/configuration/policy-concepts.html
> 
> -gmann
> 
> > but this doesn't work
> > Any hints ?
> > More in general: were the user_id based policies eventually removed in the latest OpenStack releases? Which are then the possible alternatives to implement my use case?
> > Thanks, Massimo
-------------- next part -------------- An HTML attachment was scrubbed... URL: 
From mark at stackhpc.com Wed Jun 3 16:40:14 2020
From: mark at stackhpc.com (Mark Goddard)
Date: Wed, 3 Jun 2020 17:40:14 +0100
Subject: [kolla] voting on Victoria priorities
Message-ID: 
Hi,
Thanks to everyone who attended the Kolla PTG sessions, we covered a lot of ground. The Etherpad is available at [1]. At the end of the sessions we agreed on some candidate features for prioritisation during the Victoria cycle. I have moved these across to a second Etherpad [2]. Now is the time to vote! Please add your name against a maximum of 12 items across the 3 deliverables (kolla, kolla-ansible, kayobe) in the priorities [2] Etherpad.
[1] https://etherpad.opendev.org/p/kolla-victoria-ptg
[2] https://etherpad.opendev.org/p/kolla-victoria-priorities
Thanks, Mark
From gmann at ghanshyammann.com Wed Jun 3 17:18:26 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 03 Jun 2020 12:18:26 -0500
Subject: [nova] [ops] user_id based policy enforcement
In-Reply-To: 
References: <1727a75adab.eeb22290154432.8664045094288843124@ghanshyammann.com>
Message-ID: <1727b30fa7e.10a365053165665.8186518928353619673@ghanshyammann.com>
---- On Wed, 03 Jun 2020 11:17:41 -0500 Massimo Sgaravatto wrote ----
> Thank you !
> The "destructive actions" I guess are the ones listed here:
> https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/user-id-based-policy-enforcement.html
> 
> In this page there is also the link to the ML thread you were talking about
> So the user_id based policy enforcement for those destructive actions is supposed to work till Train ? Did I get it right ?
Yes, that is the exact list for which we kept the user_id restriction support for backward compatibility. It will keep working till Ussuri for sure. Please note, we might clean those up in the Victoria cycle in favor of the new defaults.
-gmann
> Thanks, Massimo
> 
> On Wed, Jun 3, 2020 at 3:53 PM Ghanshyam Mann wrote:
> ---- On Wed, 03 Jun 2020 01:18:41 -0500 Massimo Sgaravatto wrote ----
> > Hi
> > In my Rocky installation I am preventing users from deleting instances created by other users of the same project. This was implemented by setting in the nova policy file:
> > 
> > "os_compute_api:servers:delete": "rule:admin_api or user_id:%(user_id)s"
> > 
> > This works, even if in the nova log file I see:
> > The user_id attribute isn't supported in the rule 'os_compute_api:servers:delete'. All the user_id based policy enforcement will be removed in the future.
> > 
> > Now I would also like to prevent users from seeing the console log file of instances created by other users.
I set in the nova policy file: > > "os_compute_api:os-console-output" : "rule:admin_api or user_id:%(user_id)s" > > Nova does not restrict the policy by user_id except keypairs API or a few of the destructive actions( which I think we supported for backwards compatiblity and > intent to remove it later that is why you can see the warning). I remember we discussed this in 2016 but I could not find the ML thread for that but > the consensus that time was we do not intend to support user_id based restriction permission in the API. > > On the same note, ussuri onwards you can enforce some user-level restriction based on the role, but not by user_id. In the Ussuri cycle, we have implemented > the keystone new defaults roles in nova policy. You can assign read and write roles for users and achieve the user's isolation within same project. > Please refer this doc to know more details on those new policies > - https://docs.openstack.org/nova/latest/configuration/policy-concepts.html > > -gmann > > > > > but this doesn't work > > Any hints ? > > More in general: were the user_id based policy eventually removed in latest OpenStack releases ?Which are then the possible alternatives to implement my use case ? > > Thanks, Massimo > From pramchan at yahoo.com Wed Jun 3 17:22:31 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Wed, 3 Jun 2020 17:22:31 +0000 (UTC) Subject: Fw: [all][InteropWG] Please verify if Core Project needs any updates for Interop testing of user APIs In-Reply-To: <1622152971.635835.1591152149533@mail.yahoo.com> References: <1622152971.635835.1591152149533.ref@mail.yahoo.com> <1622152971.635835.1591152149533@mail.yahoo.com> Message-ID: <199801741.1150791.1591204951688@mail.yahoo.com> Hi OpenStack Core team members, Please bring this to your PTG meetings if needed.  Aassign a volunteer to complete the verification and/or submit patch (if updates are needed). 
We are trying to run our ambassadors or volunteers  to your meetings, but if we miss please reply with "Confirmed OK" to submit these draft to be merged for "OpenStack Powered Compute" approval. Drfat to patch if you need updates is shown below.https://opendev.org/openstack/interop/src/branch/master/2020.06.json Line 100-178 (has all core API capability listed for you to verify)https://opendev.org/openstack/interop/src/branch/master/2020.06.json#L100 All API Ussuri Release notes available on - https://docs.openstack.org/api-ref/ Following Core Team members can Confirm OK? Yes if you do not have any updatex Keystone:ConfirmOK ?  or Submit patch for 2020.06.json ? Glance:ConfirmOK ?  or Submit patch for 2020.06.json ? Nova:ConfirmOK ?  or Submit patch for 2020.06.json ? Neutron:ConfirmOK ?  or Submit patch for 2020.06.json ? Cinder:ConfirmOK ?  or Submit patch for 2020.06.json ? We are trying to run our ambassadors to your meetings, but if we miss please reply with "Confirmed OK - Yes" to submit these draft for approvals to merge asap. ThanksInterop WG chair: Prakash& Vice chair: Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Wed Jun 3 17:24:47 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Wed, 3 Jun 2020 17:24:47 +0000 (UTC) Subject: [all][InteropWG] Please verify if Swift Project needs any updates for Interop testing of user APIs In-Reply-To: <1617120026.1853380.1591152244867@mail.yahoo.com> References: <1617120026.1853380.1591152244867.ref@mail.yahoo.com> <1617120026.1853380.1591152244867@mail.yahoo.com> Message-ID: <302651857.2189087.1591205088007@mail.yahoo.com> Hi Swift team core members, We are trying to run our ambassadors or volunteers  to your meetings, but if we miss please reply with "Confirmed OK" to submit these draft to be merged for "OpenStack Powered Storage" approval. 
Please bring this to your PTG meetings if needed to assign a volunteer to complete the verification and patch if updates are needed.
Swift: We test 20 capabilities with 60 tests. Do you need any changes to the 2020.06.json swift API draft before seeking approvals?
https://docs.openstack.org/api-ref/object-store/
https://opendev.org/openstack/interop/src/branch/master/2020.06.json#L179
Check lines 179-214 and confirm that the capability tests 1-20 will suffice, or submit a patch to interop:
- identity-v3-tokens-create [1/1]
- identity-v3-tokens-delete [1/1]
- objectstore-account-list [12/12]
- objectstore-account-quotas [2/2]
- objectstore-container-acl [2/2]
- objectstore-container-create [2/2]
- objectstore-container-delete [1/1]
- objectstore-container-list [11/11]
- objectstore-container-metadata [5/5]
- objectstore-container-quotas [3/3]
- objectstore-dlo-support [2/2]
- objectstore-info-request [1/1]
- objectstore-object-copy [4/4]
- objectstore-object-create [2/2]
- objectstore-object-delete [1/1]
- objectstore-object-get [3/3]
- objectstore-object-versioned [1/1]
- objectstore-slo-support [4/4]
- objectstore-temp-url-get [1/1]
- objectstore-temp-url-put [1/1]
Thanks
Interop WG chair: Prakash
Vice chair: Mark
-------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From pramchan at yahoo.com Wed Jun 3 17:26:03 2020
From: pramchan at yahoo.com (prakash RAMCHANDRAN)
Date: Wed, 3 Jun 2020 17:26:03 +0000 (UTC)
Subject: [all][InteropWG] Please verify if Designate+Heat Project needs any updates for Interop testing of user APIs
In-Reply-To: <121235370.1850681.1591152390432@mail.yahoo.com>
References: <121235370.1850681.1591152390432.ref@mail.yahoo.com> <121235370.1850681.1591152390432@mail.yahoo.com>
Message-ID: <2022482043.1152201.1591205163046@mail.yahoo.com>
Hi all,
This is a second call to OpenStack project core team members for identifying any user APIs needed to submit patches for the branding program: https://www.openstack.org/brand/interop/
Can the Designate and Heat PTLs confirm that the add-ons listed are all you need for the add-on branding programs for Orchestration and DNS respectively?
Hi Designate & Heat core team members (add-on teams: Heat + Designate),
We are trying to send our ambassadors or volunteers to your meetings, but if we miss you, please reply with "Confirmed OK" so we can submit these drafts to be merged for approval. Please bring this to your PTG meetings if needed to assign a volunteer to complete the verification and submit a patch (if updates are needed).
Add-ons:
DNS: designate - Release notes - https://docs.openstack.org/releasenotes/designate/ussuri.html
add-on program codes - https://opendev.org/openstack/interop/src/branch/master/add-ons/dns.2020.06.json
Confirmed OK? Patch?
Orchestration: heat - Release notes - https://docs.openstack.org/api-ref/orchestration/v1/index.html
add-on program codes - https://opendev.org/openstack/interop/src/branch/master/add-ons/orchestration.2020.06.json
Confirmed OK? Patch?
Thanks
Interop Chair: Prakash
Vice Chair: Mark
-------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From massimo.sgaravatto at gmail.com Wed Jun 3 17:39:58 2020
From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto)
Date: Wed, 3 Jun 2020 19:39:58 +0200
Subject: [nova] [ops] user_id based policy enforcement
In-Reply-To: <1727b30fa7e.10a365053165665.8186518928353619673@ghanshyammann.com>
References: <1727a75adab.eeb22290154432.8664045094288843124@ghanshyammann.com> <1727b30fa7e.10a365053165665.8186518928353619673@ghanshyammann.com>
Message-ID: 
Thanks again! Out of curiosity, why wasn't the operation to access the console of an instance considered a "destructive action"? It allows sending a ctrl-alt-del ...
Cheers, Massimo
On Wed, Jun 3, 2020 at 7:18 PM Ghanshyam Mann wrote:
> ---- On Wed, 03 Jun 2020 11:17:41 -0500 Massimo Sgaravatto wrote ----
> > Thank you !
> > The "destructive actions" I guess are the ones listed here:
> > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/user-id-based-policy-enforcement.html
> > 
> > In this page there is also the link to the ML thread you were talking about
> > So the user_id based policy enforcement for those destructive actions is supposed to work till Train ? Did I get it right ?
> 
> Yes, that is the exact list for which we kept the user_id restriction support for backward compatibility. It will keep working till Ussuri for sure. Please note, we might clean those up in the Victoria cycle in favor of the new defaults.
> > -gmann > > > Thanks, Massimo > > > > On Wed, Jun 3, 2020 at 3:53 PM Ghanshyam Mann > wrote: > > ---- On Wed, 03 Jun 2020 01:18:41 -0500 Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> wrote ---- > > > Hi > > > In my Rocky installation I am preventing users from deleting > instances created by other users of the same project.This was implemented > setting in the nova policy file: > > > > > > "os_compute_api:servers:delete": "rule:admin_api or > user_id:%(user_id)s" > > > > > > This works, even if in the nova log file I see: > > > The user_id attribute isn't supported in the rule > 'os_compute_api:servers:delete'. All the user_id based policy enforcement > will be removed in the future. > > > > > > Now I would also like preventing user to see the console log file of > instances created by other users. I set in the nova policy file: > > > "os_compute_api:os-console-output" : "rule:admin_api or > user_id:%(user_id)s" > > > > Nova does not restrict the policy by user_id except keypairs API or a > few of the destructive actions( which I think we supported for backwards > compatiblity and > > intent to remove it later that is why you can see the warning). I > remember we discussed this in 2016 but I could not find the ML thread for > that but > > the consensus that time was we do not intend to support user_id based > restriction permission in the API. > > > > On the same note, ussuri onwards you can enforce some user-level > restriction based on the role, but not by user_id. In the Ussuri cycle, we > have implemented > > the keystone new defaults roles in nova policy. You can assign read and > write roles for users and achieve the user's isolation within same project. > > Please refer this doc to know more details on those new policies > > - > https://docs.openstack.org/nova/latest/configuration/policy-concepts.html > > > > -gmann > > > > > > > > but this doesn't work > > > Any hints ? 
> > > More in general: were the user_id based policy eventually removed in > latest OpenStack releases ?Which are then the possible alternatives to > implement my use case ? > > > Thanks, Massimo > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Jun 3 20:05:40 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 3 Jun 2020 16:05:40 -0400 Subject: [cinder] retrospective survey Message-ID: <2a5884a8-f20e-1082-4314-ea5590729351@gmail.com> We're scheduled to hold our Cinder team Ussuri cycle retrospective tomorrow (Thursday, 4 June, see the etherpad for details and projected time): https://etherpad.opendev.org/p/victoria-ptg-cinder I posted a brief survey to help get you thinking. It's optional, but like I said, it may stimulate the thought process. Or if you won't be able to attend the retrospective, you can fill out the survey and we'll still be able to take your feedback into account. Here it is: https://rosmaita.wufoo.com/forms/cinder-ussuri-retrospective/ The survey is "sort of" anonymous (it collects your IP number). cheers, brian From emccormick at cirrusseven.com Wed Jun 3 21:29:42 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 3 Jun 2020 17:29:42 -0400 Subject: [ops][ptg] Virtual Ops Meetup at the PTG In-Reply-To: <20200603045259.kifpfzpz3wzhdry5@yuggoth.org> References: <20200603045259.kifpfzpz3wzhdry5@yuggoth.org> Message-ID: On Wed, Jun 3, 2020 at 12:54 AM Jeremy Stanley wrote: > On 2020-06-03 00:42:06 -0400 (-0400), Erik McCormick wrote: > [...] > > on the foundation Jitsi server > [...] > > To be clear, it's not the OSF managing this service, it's run by the > OpenDev community. Still, we're happy to see more folks interested > in making use of it! > -- > Thanks for the clarification! 
Just a followup for Jeremy and anyone else interested in using / polishing the meetpad system, we managed a meeting of 20 people max (17 average) with no reported issues. Most of us kept our video off and that seemed to keep things running nice and smooth. I think most had the etherpad up on the screen a lot anyway, so didn't need video so much. All in all I think it was a very positive experience. -Erik > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Jun 3 22:49:56 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 3 Jun 2020 22:49:56 +0000 Subject: [ops][ptg] Virtual Ops Meetup at the PTG In-Reply-To: References: <20200603045259.kifpfzpz3wzhdry5@yuggoth.org> Message-ID: <20200603224955.u2syohgqz776im7r@yuggoth.org> On 2020-06-03 17:29:42 -0400 (-0400), Erik McCormick wrote: [...] > we managed a meeting of 20 people max (17 average) with no > reported issues. Most of us kept our video off and that seemed to > keep things running nice and smooth. I think most had the etherpad > up on the screen a lot anyway, so didn't need video so much. All > in all I think it was a very positive experience. [...] Thanks for the feedback! We're definitely still hoping to better tune things, including talking to some other free/libre open source software communities who are running similar services to see if we can share and pool expertise in this area. We've made a couple of minor changes today during a break between sessions, but anyone with interest or experience in Jitsi-Meet is welcome to pitch in and help us polish it (our deployment automation and configuration is all in public Git repositories on opendev.org and driven by Zuul jobs). 
Unfortunately a lot of the video performance and CPU consumption has to do with how browsers have implemented their support for the WebRTC APIs and standards, so it will depend on browser maintainers improving efficiency to solve some of the user experience challenges. It seems video has been working well for small teams and people blessed with suitable hardware acceleration, but larger groups are best off just switching to "low bandwidth" mode and going audio-only for now (that doesn't block the shared Etherpad document). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amotoki at gmail.com Thu Jun 4 09:40:45 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 4 Jun 2020 18:40:45 +0900 Subject: [doc] installation guide maintenance Message-ID: Hi, During the doc migration, the installation guide was moved to individual project repos. I see problems in installation guide maintenance after the migration. - The installation guide is perhaps not maintained well in many projects. AFAIK the guides are not verified well, at least in horizon and neutron. - Even if we try to verify it, it is a tough task because we need to prepare a base distribution and set up other projects together (of course it depends on the project). This leads to a development bandwidth and priority issue. - We sometimes receive bug reports on the installation guide, but it is not easy for the upstream team to confirm them and verify fixes. I guess the installation guides are not being maintained well for these reasons. Any thoughts on this situation? (This is my first question.) If a project team has no bandwidth to maintain it, what is a recommended way? I see several options: - Drop the installation guide (per OS or as a whole) -- if we drop it, what should the criteria be?
- Keep the installation guide with warnings like "the upstream team does not maintain it and just hosts it". - Keep it as-is (unmaintained) Finally, I am not sure we need to maintain step-by-step installation guides, and I wonder whether we should drop them at some point. Most users deploy OpenStack using deployment projects (or their own deployment tools). Step-by-step guides might be useful from an educational perspective, but unmaintained guides are not useful. Thanks in advance, -- Akihiro Motoki (amotoki) From balazs.gibizer at est.tech Thu Jun 4 11:50:55 2020 From: balazs.gibizer at est.tech (Balázs Gibizer) Date: Thu, 04 Jun 2020 13:50:55 +0200 Subject: [nova][ptg] Feature Liaison In-Reply-To: References: <1724c61d79c.e3730188581.5966946061881788712@ghanshyammann.com> Message-ID: On Mon, May 25, 2020 at 17:40, Balázs Gibizer wrote: > > > On Mon, May 25, 2020 at 10:09, Ghanshyam Mann > wrote: >> ---- On Mon, 25 May 2020 04:31:41 -0500 Sylvain Bauza >> wrote ---- >> > >> > >> > On Mon, May 18, 2020 at 5:56 PM Balázs Gibizer >> wrote: >> > Hi, >> > >> > [This is a topic from the PTG etherpad [0]. As the PTG time is >> > intentionally kept short, let's try to discuss it or even >> conclude it >> > before the PTG] >> > >> > Last cycle we introduced the Feature Liaison process [1]. I think >> this >> > is the time to reflect on it. >> > Did it help? >> > Do we need to tweak it? >> > >> > Personally for me it did not help much, but I think this is a >> fairly low >> > cost process so I'm OK to keep it as is. >> > >> > >> > Cool with me. Maybe we could just say it's optional, so we could >> better see who would like to get a mentor. >> >> +1 on optional, and keep it for new contributors, who can arrive at any >> time in the future. So at least if anyone asks we can tell them this is >> how you can get some dedicated core for your code review/help.
> > I see that those who responded so far are pretty aligned about the > future of the Feature Liaison, so I proposed an update for the > Victoria spec template [1] to make this optional. Feel free to > continue the discussion here or directly in the review. We touched on this during the PTG session yesterday. I think we are in agreement to make this optional. I made one more update to the spec template about finding a liaison. I think the template update patch [1] is ready to be merged. Cheers, gibi [1] https://review.opendev.org/#/c/730638 > > Cheers, > gibi > > [1] https://review.opendev.org/#/c/730638 > >> >> -gmann >> >> > -Sylvain >> > Cheers, >> > gibi >> > >> > >> > [0] https://etherpad.opendev.org/p/nova-victoria-ptg >> > [1] https://review.opendev.org/#/c/685857/ >> > >> > >> > >> > > > > From balazs.gibizer at est.tech Thu Jun 4 11:58:50 2020 From: balazs.gibizer at est.tech (Balázs Gibizer) Date: Thu, 04 Jun 2020 13:58:50 +0200 Subject: [nova][ptg] Runway process in Victoria In-Reply-To: <4AAJAQ.8C39QP5J2M3Z@est.tech> References: <4AAJAQ.8C39QP5J2M3Z@est.tech> Message-ID: <2AHEBQ.GY190QTHAYHH@est.tech> On Mon, May 18, 2020 at 17:42, Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > > In the last 4 cycles we used a process called runway to focus and > timebox the team's feature review effort. However, compared to the > previous cycles, in Ussuri we did not really keep the process running. > Just compare the length of the Log section of each etherpad > [1][2][3][4] to see the difference. So I have two questions: > > 1) Do we want to keep the process in Victoria? > > 2) If yes, how can we keep the process running? > 2.1) How can we keep the runway etherpad up-to-date? > 2.2) How to make sure that the team is focusing on the reviews that > are in the runway slots?
> > > Personally I don't want to advertise this process to contributors if > the core team is not in agreement and committed to keeping the process running, > as that would lead to unnecessary disappointment. We made the following agreement during yesterday's PTG sessions. * gibi will drive the runway slot effort during V by: * creating and keeping the etherpad up-to-date * announcing changes in slot occupancy on IRC * trying to find cores _before_ adding a patch series to an empty slot * mentioning features in the slots at the weekly meeting * We will try not to have two meaty features in the slots at the same time * We will handle the two-week timeout as a soft rule by looking at the series before kicking it out of the slot. * We agree to keep 3 slots Cheers, gibi > > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] https://etherpad.opendev.org/p/nova-runways-rocky > [2] https://etherpad.opendev.org/p/nova-runways-stein > [3] https://etherpad.opendev.org/p/nova-runways-train > [4] https://etherpad.opendev.org/p/nova-runways-ussuri > > From gmann at ghanshyammann.com Thu Jun 4 12:27:41 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 04 Jun 2020 07:27:41 -0500 Subject: [nova] [ops] user_id based policy enforcement In-Reply-To: References: <1727a75adab.eeb22290154432.8664045094288843124@ghanshyammann.com> <1727b30fa7e.10a365053165665.8186518928353619673@ghanshyammann.com> Message-ID: <1727f4d2563.d7df3a79201157.2416917486114554702@ghanshyammann.com> ---- On Wed, 03 Jun 2020 12:39:58 -0500 Massimo Sgaravatto wrote ---- > Thanks again ! > Out of curiosity, why wasn't the operation to access the console of an instance considered a "destructive action"? It allows sending a ctrl-alt-del ... Yeah, and there are other APIs through which you can also perform destructive actions.
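[Editorial note: for readers trying to reason about what a rule like "rule:admin_api or user_id:%(user_id)s" actually checks, here is a rough sketch of the matching idea in Python. This is illustrative only, not the real oslo.policy implementation: it handles just a flat "A or B" expression, but it shows how the caller's credentials are substituted into the %(user_id)s template and compared against the target instance.]

```python
# Illustrative sketch (NOT the actual oslo.policy code) of how a rule like
#   "os_compute_api:servers:delete": "rule:admin_api or user_id:%(user_id)s"
# is evaluated: the caller's credentials are matched against attributes of
# the target (the instance being acted on).

def check_atom(atom, creds, target):
    """Evaluate a single check like 'rule:admin_api' or 'user_id:%(user_id)s'."""
    if atom == "rule:admin_api":
        return creds.get("is_admin", False)
    key, _, template = atom.partition(":")
    # e.g. "user_id:%(user_id)s": substitute the target's attributes into the
    # template and compare with the caller's credential of the same name.
    return creds.get(key) == template % target

def check_rule(rule, creds, target):
    """Evaluate a flat 'A or B or ...' expression; enough for this example."""
    return any(check_atom(atom.strip(), creds, target)
               for atom in rule.split(" or "))

rule = "rule:admin_api or user_id:%(user_id)s"
target = {"user_id": "alice"}  # instance created by alice

print(check_rule(rule, {"user_id": "alice", "is_admin": False}, target))  # True
print(check_rule(rule, {"user_id": "bob", "is_admin": False}, target))    # False
print(check_rule(rule, {"user_id": "bob", "is_admin": True}, target))     # True
```

With a policy like the one Massimo set, an admin or the instance's creator passes the check, while any other user in the same project is rejected.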
The idea of supporting the listed actions for the user restriction was to keep backward compatibility and give operators some time to migrate, not to move the permission model from the project level to the user level. That was around 4 years back, and it should have been removed by now, but we did not do so. I think that, with the new defaults, it makes sense to remove those now. I will check at the PTG whether everyone is OK with removing them. -gmann > Cheers, Massimo > On Wed, Jun 3, 2020 at 7:18 PM Ghanshyam Mann wrote: > ---- On Wed, 03 Jun 2020 11:17:41 -0500 Massimo Sgaravatto wrote ---- > > Thank you ! > > The "destructive actions" I guess are the ones listed here: > > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/user-id-based-policy-enforcement.html > > > > On this page there is also the link to the ML thread you were talking about > > So the user_id based policy enforcement for those destructive actions is supposed to work till Train? Did I get it right? > > Yes, that is the exact list for which we kept the user_id restriction support for backward compatibility. They will keep running till Ussuri for sure. > Please note, we might clean those up in the Victoria cycle in favor of the new defaults. > > -gmann > > > Thanks, Massimo > > > > On Wed, Jun 3, 2020 at 3:53 PM Ghanshyam Mann wrote: > > ---- On Wed, 03 Jun 2020 01:18:41 -0500 Massimo Sgaravatto wrote ---- > > > Hi > > > In my Rocky installation I am preventing users from deleting instances created by other users of the same project. This was implemented by setting, in the nova policy file: > > > > > > "os_compute_api:servers:delete": "rule:admin_api or user_id:%(user_id)s" > > > > > > This works, even if in the nova log file I see: > > > The user_id attribute isn't supported in the rule 'os_compute_api:servers:delete'. All the user_id based policy enforcement will be removed in the future. > > > > > > Now I would also like to prevent users from seeing the console log of instances created by other users.
I set in the nova policy file: > > > "os_compute_api:os-console-output" : "rule:admin_api or user_id:%(user_id)s" > > > > Nova does not restrict the policy by user_id except for the keypairs API and a few of the destructive actions (which I think we supported for backwards compatibility and intend to remove later; that is why you can see the warning). I remember we discussed this in 2016 but I could not find the ML thread for it; > > the consensus at that time was that we do not intend to support user_id based restriction permissions in the API. > > > > On the same note, from Ussuri onwards you can enforce some user-level restriction based on the role, but not by user_id. In the Ussuri cycle, we have implemented > > the keystone new default roles in nova policy. You can assign read and write roles to users and achieve user isolation within the same project. > > Please refer to this doc for more details on those new policies > > - https://docs.openstack.org/nova/latest/configuration/policy-concepts.html > > > > -gmann > > > > > > > > but this doesn't work > > > Any hints? > > > More in general: was the user_id based policy eventually removed in the latest OpenStack releases? Which are then the possible alternatives to implement my use case? > > > Thanks, Massimo > > > From massimo.sgaravatto at gmail.com Thu Jun 4 13:51:31 2020 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Thu, 4 Jun 2020 15:51:31 +0200 Subject: [ops] [cinder] [nova] How can I understand if "Delete volume on instance delete" was selected ? Message-ID: Hi I need to delete an instance created from a volume, but I don't want to delete that volume. How can I know whether the option "Delete volume on instance delete" was selected when the instance was created? I can't find any information in the openstack server/volume show outputs Thanks, Massimo -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fungi at yuggoth.org Thu Jun 4 14:08:05 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 4 Jun 2020 14:08:05 +0000 Subject: [doc] installation guide maintenance In-Reply-To: References: Message-ID: <20200604140803.n6v64kstgzgevddm@yuggoth.org> On 2020-06-04 18:40:45 +0900 (+0900), Akihiro Motoki wrote: > During the doc migration, the installation guide was moved to > individual project repos. > I see problems in installation guide maintenance after the migration. > > - The installation guide is perhaps not maintained well in many projects. > AFAIK the guides are not verified well, at least in horizon and neutron. > - Even if we try to verify it, it is a tough task because we need to > prepare a base distribution > and set up other projects together (of course it depends on the project). > This leads to a development bandwidth and priority issue. > - We sometimes receive bug reports on the installation guide, but it > is not easy for the > upstream team to confirm them and verify fixes. > > I guess the installation guides are not being maintained well for > these reasons. > Any thoughts on this situation? (This is my first question.) [...] This could be an ambitious proposal, but the way the Zuul community has approached the problem is that it has a CI job which mirrors its "quick start" deployment guide, with a review policy and embedded comments indicating that any time one is changed the other must also be changed to match. In essence, run automatic tests of the exact same steps you're documenting, or as many of them as you possibly can at least, and keep the two in sync. Since its inception, OpenStack has been distinguished by how its approach to automated testing is superior to the obsolete practice of a human sitting in front of a computer manually trying the same sets of documented steps over and over...
which is exactly how the installation guides were still being tested (or more to the point, why they've not been tested with any real consistency for years). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corey.bryant at canonical.com Thu Jun 4 14:24:58 2020 From: corey.bryant at canonical.com (Corey Bryant) Date: Thu, 4 Jun 2020 10:24:58 -0400 Subject: [Openstack] OpenStack Ussuri for Ubuntu 20.04 LTS and 18.04 LTS Message-ID: The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Ussuri on Ubuntu 20.04 LTS and on Ubuntu 18.04 LTS via the Ubuntu Cloud Archive. Details of the Ussuri release can be found at: https://www.openstack.org/software/ussuri To get access to the Ubuntu Ussuri packages: == Ubuntu 20.04 LTS == OpenStack Ussuri is available by default for installation on Ubuntu 20.04. == Ubuntu 18.04 LTS == The Ubuntu Cloud Archive pocket for OpenStack Ussuri can be enabled on Ubuntu 18.04 by running the following commands: sudo add-apt-repository cloud-archive:ussuri The Ubuntu Cloud Archive for Ussuri includes updates for: aodh, barbican, ceilometer, ceph octopus (15.2.1), cinder, designate, designate-dashboard, dpdk (19.11.1), glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, libvirt (6.0.0), magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-fwaas-dashboard, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, trove-dashboard, openvswitch (2.13.0), ovn (20.03.0), ovn-octavia-provider, panko, placement, qemu (4.2), sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, watcher-dashboard, and zaqar. 
For a full list of packages and versions, please refer to: http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/ussuri_versions.html == Branch package builds == If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPAs: sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka sudo add-apt-repository ppa:openstack-ubuntu-testing/queens sudo add-apt-repository ppa:openstack-ubuntu-testing/rocky sudo add-apt-repository ppa:openstack-ubuntu-testing/stein sudo add-apt-repository ppa:openstack-ubuntu-testing/train sudo add-apt-repository ppa:openstack-ubuntu-testing/ussuri == Reporting bugs == If you have any issues, please report bugs using the 'ubuntu-bug' tool to ensure that bugs get logged in the right place in Launchpad: sudo ubuntu-bug nova-conductor Thank you to everyone who contributed to OpenStack Ussuri. Enjoy and see you in Victoria! Corey (on behalf of the Ubuntu OpenStack Engineering team) -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Thu Jun 4 14:37:56 2020 From: neil at tigera.io (Neil Jerram) Date: Thu, 4 Jun 2020 15:37:56 +0100 Subject: [Openstack] OpenStack Ussuri for Ubuntu 20.04 LTS and 18.04 LTS In-Reply-To: References: Message-ID: On Thu, Jun 4, 2020 at 3:26 PM Corey Bryant wrote: > The Ubuntu OpenStack team at Canonical is pleased to announce the general > availability of OpenStack Ussuri on Ubuntu 20.04 LTS and on Ubuntu 18.04 > LTS via the Ubuntu Cloud Archive. [...] Thanks Corey, that's very timely, I was just beginning to try to install Ussuri on 18.04. I hit this: root at neil-fv-0-ubuntu-ussuri-control-node-region-one:~# add-apt-repository cloud-archive:ussuri + add-apt-repository cloud-archive:ussuri 'ussuri': not a valid cloud-archive name. Must be one of ['folsom', 'folsom-proposed', 'grizzly', 'grizzly-proposed', 'havana', 'havana-proposed', 'icehouse', 'icehouse-proposed', 'juno', 'juno-proposed', 'kilo', 'kilo-proposed', 'liberty', 'liberty-proposed', 'mitaka', 'mitaka-proposed', 'newton', 'newton-proposed', 'ocata', 'ocata-proposed', 'pike', 'pike-proposed', 'queens', 'queens-proposed', 'rocky', 'rocky-proposed', 'tools', 'tools-proposed'] But I presume that's because I need to update the 18.04 install first. (This was with the ubuntu-1804-bionic-v20181114 cloud image, which I guess is way old now.)
Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Thu Jun 4 14:43:08 2020 From: corey.bryant at canonical.com (Corey Bryant) Date: Thu, 4 Jun 2020 10:43:08 -0400 Subject: [Openstack] OpenStack Ussuri for Ubuntu 20.04 LTS and 18.04 LTS In-Reply-To: References: Message-ID: On Thu, Jun 4, 2020 at 10:38 AM Neil Jerram wrote: > On Thu, Jun 4, 2020 at 3:26 PM Corey Bryant > wrote: > >> The Ubuntu OpenStack team at Canonical is pleased to announce the general >> availability of OpenStack Ussuri on Ubuntu 20.04 LTS and on Ubuntu 18.04 >> LTS via the Ubuntu Cloud Archive. [...] > > Thanks Corey, that's very timely, I was just beginning to try to install > Ussuri on 18.04. > > I hit this: > > root at neil-fv-0-ubuntu-ussuri-control-node-region-one:~# > add-apt-repository cloud-archive:ussuri > + add-apt-repository cloud-archive:ussuri > 'ussuri': not a valid cloud-archive name. [...] > > But I presume that's because I need to update the 18.04 install first.
>>> >>> == Ubuntu 18.04 LTS == >>> >>> The Ubuntu Cloud Archive pocket for OpenStack Ussuri can be enabled on >>> Ubuntu 18.04 by running the following commands: >>> >>> sudo add-apt-repository cloud-archive:ussuri >>> >>> The Ubuntu Cloud Archive for Ussuri includes updates for: >>> >>> aodh, barbican, ceilometer, ceph octopus (15.2.1), cinder, designate, >>> designate-dashboard, dpdk (19.11.1), glance, gnocchi, heat, heat-dashboard, >>> horizon, ironic, keystone, libvirt (6.0.0), magnum, manila, manila-ui, >>> mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, >>> networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, >>> networking-odl, networking-sfc, neutron, neutron-dynamic-routing, >>> neutron-fwaas, neutron-fwaas-dashboard, neutron-vpnaas, nova, octavia, >>> octavia-dashboard, openstack-trove, trove-dashboard, openvswitch (2.13.0), >>> ovn (20.03.0), ovn-octavia-provider, panko, placement, qemu (4.2), sahara, >>> sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, >>> watcher-dashboard, and zaqar. 
>>> >>> For a full list of packages and versions, please refer to: >>> >>> >>> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/ussuri_versions.html >>> >>> == Branch package builds == >>> >>> >>> If you would like to try out the latest updates to branches, we deliver >>> continuously integrated packages on each upstream commit via the following >>> PPA’s: >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/queens >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/rocky >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/stein >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/train >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/ussuri >>> >>> == Reporting bugs == >>> >>> >>> If you have any issues please report bugs using the 'ubuntu-bug' tool to >>> ensure that bugs get logged in the right place in Launchpad: >>> >>> sudo ubuntu-bug nova-conductor >>> >>> Thank you to everyone who contributed to OpenStack Ussuri. Enjoy and see >>> you in Victoria! >>> >>> Corey >>> >>> (on behalf of the Ubuntu OpenStack Engineering team) >>> >> >> Thanks Corey, that's very timely, I was just beginning to try to install >> Ussuri on 18.04. >> >> I hit this: >> >> root at neil-fv-0-ubuntu-ussuri-control-node-region-one:~# >> add-apt-repository cloud-archive:ussuri >> + add-apt-repository cloud-archive:ussuri >> 'ussuri': not a valid cloud-archive name. 
>> Must be one of ['folsom', 'folsom-proposed', 'grizzly', >> 'grizzly-proposed', 'havana', 'havana-proposed', 'icehouse', >> 'icehouse-proposed', 'juno', 'juno-proposed', 'kilo', 'kilo-proposed', >> 'liberty', 'liberty-proposed', 'mitaka', 'mitaka-proposed', 'newton', >> 'newton-proposed', 'ocata', 'ocata-proposed', 'pike', 'pike-proposed', >> 'queens', 'queens-proposed', 'rocky', 'rocky-proposed', 'tools', >> 'tools-proposed'] >> >> But I presume that's because I need to update the 18.04 install first. >> (This was with the ubuntu-1804-bionic-v20181114 cloud image, which I guess >> is way old now.) >> >> > Thanks for giving it a go! Yes very likely, let me know if there's still > an issue after an 'apt update'. > > Sorry, 'apt update && apt install software-properties-common' should get you updated. Corey > > Best wishes, >> Neil >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Thu Jun 4 14:50:25 2020 From: neil at tigera.io (Neil Jerram) Date: Thu, 4 Jun 2020 15:50:25 +0100 Subject: [Openstack] OpenStack Ussuri for Ubuntu 20.04 LTS and 18.04 LTS In-Reply-To: References: Message-ID: On Thu, Jun 4, 2020 at 3:43 PM Corey Bryant wrote: > > > On Thu, Jun 4, 2020 at 10:38 AM Neil Jerram wrote: > >> On Thu, Jun 4, 2020 at 3:26 PM Corey Bryant >> wrote: >> >>> The Ubuntu OpenStack team at Canonical is pleased to announce the >>> general availability of OpenStack Ussuri on Ubuntu 20.04 LTS and on Ubuntu >>> 18.04 LTS via the Ubuntu Cloud Archive. Details of the Ussuri release can >>> be found at: https://www.openstack.org/software/ussuri >>> >>> To get access to the Ubuntu Ussuri packages: >>> >>> == Ubuntu 20.04 LTS == >>> >>> OpenStack Ussuri is available by default for installation on Ubuntu >>> 20.04. 
>>> >>> == Ubuntu 18.04 LTS == >>> >>> The Ubuntu Cloud Archive pocket for OpenStack Ussuri can be enabled on >>> Ubuntu 18.04 by running the following commands: >>> >>> sudo add-apt-repository cloud-archive:ussuri >>> >>> The Ubuntu Cloud Archive for Ussuri includes updates for: >>> >>> aodh, barbican, ceilometer, ceph octopus (15.2.1), cinder, designate, >>> designate-dashboard, dpdk (19.11.1), glance, gnocchi, heat, heat-dashboard, >>> horizon, ironic, keystone, libvirt (6.0.0), magnum, manila, manila-ui, >>> mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, >>> networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, >>> networking-odl, networking-sfc, neutron, neutron-dynamic-routing, >>> neutron-fwaas, neutron-fwaas-dashboard, neutron-vpnaas, nova, octavia, >>> octavia-dashboard, openstack-trove, trove-dashboard, openvswitch (2.13.0), >>> ovn (20.03.0), ovn-octavia-provider, panko, placement, qemu (4.2), sahara, >>> sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, >>> watcher-dashboard, and zaqar. 
>>> >>> For a full list of packages and versions, please refer to: >>> >>> >>> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/ussuri_versions.html >>> >>> == Branch package builds == >>> >>> >>> If you would like to try out the latest updates to branches, we deliver >>> continuously integrated packages on each upstream commit via the following >>> PPA’s: >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/queens >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/rocky >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/stein >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/train >>> >>> sudo add-apt-repository ppa:openstack-ubuntu-testing/ussuri >>> >>> == Reporting bugs == >>> >>> >>> If you have any issues please report bugs using the 'ubuntu-bug' tool to >>> ensure that bugs get logged in the right place in Launchpad: >>> >>> sudo ubuntu-bug nova-conductor >>> >>> Thank you to everyone who contributed to OpenStack Ussuri. Enjoy and see >>> you in Victoria! >>> >>> Corey >>> >>> (on behalf of the Ubuntu OpenStack Engineering team) >>> >> >> Thanks Corey, that's very timely, I was just beginning to try to install >> Ussuri on 18.04. >> >> I hit this: >> >> root at neil-fv-0-ubuntu-ussuri-control-node-region-one:~# >> add-apt-repository cloud-archive:ussuri >> + add-apt-repository cloud-archive:ussuri >> 'ussuri': not a valid cloud-archive name. 
>> Must be one of ['folsom', 'folsom-proposed', 'grizzly', >> 'grizzly-proposed', 'havana', 'havana-proposed', 'icehouse', >> 'icehouse-proposed', 'juno', 'juno-proposed', 'kilo', 'kilo-proposed', >> 'liberty', 'liberty-proposed', 'mitaka', 'mitaka-proposed', 'newton', >> 'newton-proposed', 'ocata', 'ocata-proposed', 'pike', 'pike-proposed', >> 'queens', 'queens-proposed', 'rocky', 'rocky-proposed', 'tools', >> 'tools-proposed'] >> >> But I presume that's because I need to update the 18.04 install first. >> (This was with the ubuntu-1804-bionic-v20181114 cloud image, which I guess >> is way old now.) >> >> > Thanks for giving it a go! Yes very likely, let me know if there's still > an issue after an 'apt update'. > To confirm: Yes, it's fine after an apt update and upgrade. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Thu Jun 4 14:57:11 2020 From: corey.bryant at canonical.com (Corey Bryant) Date: Thu, 4 Jun 2020 10:57:11 -0400 Subject: [Openstack] OpenStack Ussuri for Ubuntu 20.04 LTS and 18.04 LTS In-Reply-To: References: Message-ID: On Thu, Jun 4, 2020 at 10:46 AM Marcin Juszkiewicz < marcin.juszkiewicz at linaro.org> wrote: > W dniu 04.06.2020 o 16:24, Corey Bryant pisze: > > The Ubuntu OpenStack team at Canonical is pleased to announce the general > > availability of OpenStack Ussuri on Ubuntu 20.04 LTS and on Ubuntu 18.04 > > LTS via the Ubuntu Cloud Archive. > > Any words on when Victoria packages for 20.04 land in UCA? So in Kolla we > could switch from 18.04 to 20.04 as base for Ubuntu images. > > I would say within 2 weeks. I'm going to start on opening the victoria UCA tomorrow. Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... 
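A side note on the "'ussuri': not a valid cloud-archive name" error Neil hit above: add-apt-repository checks the pocket name against a static list that ships with software-properties-common, so an older image simply does not know about newer pockets yet, which is why the apt update/upgrade fixed it. A toy re-creation of that check (the pocket list and message are illustrative, trimmed from the error output; this is not the actual software-properties code):

```python
# Illustrative re-creation of the pocket-name check add-apt-repository
# performs. The real list ships with software-properties-common and only
# grows when that package is upgraded, hence the fix reported above.
KNOWN_POCKETS = ["mitaka", "newton", "ocata", "pike", "queens", "rocky"]  # truncated

def validate_pocket(name: str) -> str:
    base = name.removeprefix("cloud-archive:")
    pocket = base.removesuffix("-proposed")
    if pocket not in KNOWN_POCKETS:
        raise ValueError(
            f"'{base}': not a valid cloud-archive name. "
            f"Must be one of {KNOWN_POCKETS} "
            "(hint: upgrade software-properties-common first)"
        )
    return pocket

print(validate_pocket("cloud-archive:rocky"))  # rocky
```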
URL: From amy at demarco.com Thu Jun 4 15:17:02 2020 From: amy at demarco.com (Amy Marrich) Date: Thu, 4 Jun 2020 10:17:02 -0500 Subject: [doc] installation guide maintenance In-Reply-To: <20200604140803.n6v64kstgzgevddm@yuggoth.org> References: <20200604140803.n6v64kstgzgevddm@yuggoth.org> Message-ID: I just ran through an install last week and found 3 issues. One was a bad link, which I put a patch up for; another had a Horizon bug filed for it; and the last was Placement, where I've reached out in #openstack-placement but still need to follow up. I'm willing to patch the docs, but I'm not sure where, as there are 2 options. I only did Keystone, Nova, Glance, Neutron, Placement, and Cinder, so I'm not sure of the state of any others. Thanks, Amy (spotz) On Thu, Jun 4, 2020 at 9:11 AM Jeremy Stanley wrote: > On 2020-06-04 18:40:45 +0900 (+0900), Akihiro Motoki wrote: > > During the doc migration, the installation guide was moved to > > individual project repos. > > I see problems in installation guide maintenance after the migration. > > > > - The installation guide is not maintained well perhaps in many projects. > > AFAIK they are not verified well at least in horizon and neutron. > > - Even if we try to verify it, it is a tough thing because we need to > > prepare base distribution > > and setup other projects together (of course it depends on projects). > > This leads to a development bandwidth and priority issue. > > - We sometimes receive bug reports on the installation guide, but it > > is not easy for the > > upstream team confirm them and verify fixes. > > > > I guess the installation guides are not being maintained well from > > these reasons. > > Any thoughts on this situation? (This is my first question.) > [...]
> > This could be an ambitious proposal, but the way the Zuul community > has approached the problem is that it has a CI job which mirrors its > "quick start" deployment guide, with a review policy and embedded > comments indicating that any time one is changed the other must also > be changed to match. In essence, run automatic tests of the exact > same steps you're documenting, or as many of them as you possibly > can at least, and keep the two in sync. > > Since its inception, OpenStack has been distinguished by how its > approach to automated testing is superior to the obsolete practice > of a human sitting in front of a computer manually trying the same > sets of documented steps over and over... which is exactly how the > installation guides were still being tested (or more to the point, > why they've not been tested with any real consistency for years). > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Jun 4 15:40:11 2020 From: melwittt at gmail.com (melanie witt) Date: Thu, 4 Jun 2020 08:40:11 -0700 Subject: [ops] [cinder] [nova] How can I understand if "Delete volume on instance delete" was selected ? In-Reply-To: References: Message-ID: On 6/4/20 06:51, Massimo Sgaravatto wrote: > Hi > > I need to delete an instance  created from volume, but I don't want to > delete such volume. > How can I know if the option "Delete volume on instance delete" was > selected when creating such instance ? > I can't find any information in openstack server/volume show outputs What you want to see is the 'delete_on_termination' field which was added in API microversion 2.3 [1] and with OSC, you need to pass the needed microversion to the CLI, for example: openstack --os-compute-api-version 2.3 server show And as of API microversion 2.85 [2], it's possible to change the value of 'delete_on_termination' if desired. 
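To script the check melanie describes, the flag can be read straight from the CLI's JSON output. A sketch (the payload below is a made-up sample; with microversion 2.3 or later the entries under 'os-extended-volumes:volumes_attached' carry the flag):

```python
import json

# Made-up sample of `openstack --os-compute-api-version 2.3 server show <id> -f json`.
sample = json.loads("""
{
  "id": "example-server",
  "os-extended-volumes:volumes_attached": [
    {"id": "vol-1", "delete_on_termination": true},
    {"id": "vol-2", "delete_on_termination": false}
  ]
}
""")

def volumes_kept_on_delete(server):
    """Return IDs of attached volumes that survive instance deletion."""
    attached = server.get("os-extended-volumes:volumes_attached", [])
    return [v["id"] for v in attached if not v.get("delete_on_termination")]

print(volumes_kept_on_delete(sample))  # ['vol-2']
```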
If you're on a version older than 2.85 and 'delete_on_termination' is True, then maybe you could migrate the volume in cinder before deleting the instance as a way to preserve the volume? Hope this helps, -melanie [1] https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-kilo [2] https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id78 From openstack at nemebean.com Thu Jun 4 15:57:00 2020 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 4 Jun 2020 10:57:00 -0500 Subject: [oslo][keystone][nova] Spec for moving policy format default to YAML Message-ID: <539084be-316a-7bbe-3ed6-4e40defbe20c@nemebean.com> One of the outcomes of the Oslo PTG session on Monday was that we need to make YAML the official default for oslo.policy instead of just the unofficial default, as it has been since policy-in-code happened. The reason this hasn't happened before now is that it is complex and fraught with security concerns, but the RBAC work going on now has made it clear that we need to do it anyway. To that end, I've written a spec[0] that I believe captures the plan we outlined in the PTG session. If this is relevant to your interests, please take a look and leave feedback. Thanks. -Ben 0: https://review.opendev.org/733650 From massimo.sgaravatto at gmail.com Thu Jun 4 16:03:57 2020 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Thu, 4 Jun 2020 18:03:57 +0200 Subject: [ops] [cinder] [nova] How can I understand if "Delete volume on instance delete" was selected ? In-Reply-To: References: Message-ID: Yes, it helps! Thanks a lot! On Thu, Jun 4, 2020 at 5:40 PM melanie witt wrote: > On 6/4/20 06:51, Massimo Sgaravatto wrote: > > Hi > > > > I need to delete an instance created from volume, but I don't want to > > delete such volume. > > How can I know if the option "Delete volume on instance delete" was > > selected when creating such instance ?
> > I can't find any information in openstack server/volume show outputs > > What you want to see is the 'delete_on_termination' field which was > added in API microversion 2.3 [1] and with OSC, you need to pass the > needed microversion to the CLI, for example: > > openstack --os-compute-api-version 2.3 server show > > And as of API microversion 2.85 [2], it's possible to change the value > of 'delete_on_termination' if desired. > > If you're on an older version than has 2.85 and 'delete_on_termination' > is True, then maybe you could migrate the volume in cinder before > deleting the instance as a way to preserve the volume? > > Hope this helps, > -melanie > > [1] > > https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-kilo > [2] > > https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id78 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Jun 4 17:17:02 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 04 Jun 2020 12:17:02 -0500 Subject: [oslo][keystone][nova] Spec for moving policy format default to YAML In-Reply-To: <539084be-316a-7bbe-3ed6-4e40defbe20c@nemebean.com> References: <539084be-316a-7bbe-3ed6-4e40defbe20c@nemebean.com> Message-ID: <17280560c89.11944c7c9218620.4878094085646249160@ghanshyammann.com> ---- On Thu, 04 Jun 2020 10:57:00 -0500 Ben Nemec wrote ---- > One of the outcomes of the Oslo PTG session on Monday was that we need > to make YAML the official default for olso.policy instead of just the > unofficial default as it has been since policy-in-code happened. The > reason this hasn't happened before now is that it is complex and fraught > with security concerns, but the RBAC work going on now has made it clear > that we need do it anyway. > > To that end, I've written a spec[0] that I believe captures the plan we > outlined in the PTG session. 
If this is relevant to your interests, > please take a look and leave feedback. Thanks, Ben for composing the spec, I added one comment about warning on having default rules in the file. Also, we will be tracking this in policy-popup team also as these are the things to finish before other projects ship the new policy. -gmann > > Thanks. > > -Ben > > 0: https://review.opendev.org/733650 > > From whayutin at redhat.com Thu Jun 4 19:07:02 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 4 Jun 2020 13:07:02 -0600 Subject: [tripleo] Thanks for a great PTG :) Message-ID: Thanks all! Great topics, participation, discussion etc! Notes and a summary are in the works. And now your moment of zen. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: openstack-ptg-victoria-tripleo-pic2.png Type: image/png Size: 816187 bytes Desc: not available URL: From gouthampravi at gmail.com Thu Jun 4 21:49:49 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 4 Jun 2020 14:49:49 -0700 Subject: [manila][ptg] Minutes, recordings, final day items and Happy Hour! Message-ID: Hello Zorillas and interested Stackers, It's Friday already, but it feels like we've been there every day this week! Thanks to note takers, we have the meeting etherpad up-to-date wrt the meetings we've had thus far. 
[1] Tomorrow, 5th June, Friday, we meet in the same room () between 1300 UTC and 1700 UTC - the schedule details are in the etherpad [1], however the main items are as below: - Optimize the quota processing logic for manage share (haixin) (1300-1330 UTC) - Share and share size quotas/limits per share server (carloss/dviroel) (1340-1410 UTC) - Share server migration (dviroel) (1420 - 1450 UTC) - Manila Container Storage Interface project update/roadmap/demo (tbarron) (1500-1530 UTC) - Manila OSC/CLI "common commands" discussion (maaritamm/vkmc) (1530-1550 UTC) - Virtual Happy Hour (all) (1600-1700 UTC) You'll find the recordings of our past discussions uploaded to YouTube [2]. Check the descriptions to skip ahead to specific topics. Please let me know if you have any difficulty accessing these videos. Hope to see you tomorrow! Thanks, Goutham Pacha Ravi [1] https://etherpad.opendev.org/p/victoria-ptg-manila [2] https://www.youtube.com/playlist?list=PLnpzT0InFrqBKkyIAQdA9RFJnx-geS3lp -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Jun 4 22:48:42 2020 From: zigo at debian.org (Thomas Goirand) Date: Fri, 5 Jun 2020 00:48:42 +0200 Subject: [doc] installation guide maintenance In-Reply-To: References: Message-ID: On 6/4/20 11:40 AM, Akihiro Motoki wrote: > - Drop the installation guide (per OS or as a whole) Please don't do that. > Most users deploy OpenStack using deployment projects (or their own > deployment tools). Then how would the person writing the deployment tool do the work? > Step-by-step guides might be useful from educational perspective They are. > but unmaintained guides are not useful. What I would very much prefer would be a more general guide on how to setup a MySQL db for a project, and how to setup an endpoint. 
Then the individual project documentation would just point at that doc, and inform the reader what is the default for: - service name - port number - endpoint URL Then we need a more general description of what services are for, and where to deploy them. For example, in Neutron, it'd be enough to explain that: - neutron-rpc-server should be deployed on 3 controllers - neutron-api should go on 3 controllers, with HAproxy in front - dhcp-agent can be deployed anywhere, but then the l3 & l2 agents must be on the same server as well - metadata-agent, ovs-agent and l3-agent must be on each compute This type of explanation is way more useful than a step-by-step: apt-get install neutron-l3-agent where we're giving zero explanation of what the l3 agent does, where it should be installed, and why. From the Debian perspective, I would have liked to have a specific place where we could have explained how the Debconf system works, and how it has been designed in the package (ie: in a way so that a full cluster deployment could be done only with preseed, but also in a way so that it doesn't bother anyone using a deployment tool like Puppet or Ansible). Currently, there's no place to write about this. Though it was documented, and well documented, at the time. I believe the install-guide for Debian was nearly perfect around 2016, with these specific chapters, and lots of debconf screenshots to explain it all. That is just *GONE*. This is frustrating, and a disservice for our users. More generally, I found the move of the install guide to each individual project was a huge regression, and since then, it has a lot less value. All the work I did at the time to document things for Debian has been slowly removed. I have absolutely no idea why contributors have been doing that, and I don't think that's fair to have done it. I also found that the conditionals we had at the time (ie: if osname=Ubuntu or Debian, that kind of things...) were very helpful.
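The "generic guide plus per-project defaults" idea sketched above could be made concrete with a tiny per-project metadata file that a single generic installation guide consumes. A hypothetical sketch (format and values are illustrative, not an existing OpenStack convention):

```yaml
# Hypothetical per-project install metadata: a generic guide explains how
# to create the database and the Keystone service/endpoint once, and each
# project only declares its defaults.
project: neutron
service_name: neutron
service_type: network
default_port: 9696
endpoint_url: http://controller:9696/
database: neutron
notes: neutron-api behind HAProxy on 3 controllers; agents per compute node
```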
These days, we're duplicating, and that's really silly. Back at the time, the decision was made because some core contributors to the install guide just left the project. It was said at the time that moving the maintenance to each individual project would scale nicer. I am now convinced that the exact opposite has happened: docs are maintained a lot less now, with lower quality, and less uniformity. So I am convinced that we did a very bad move at the time, one that we shouldn't have made. If we went back to what we did in 2016, I would contribute again. But considering what the install guide has become, this really isn't motivating. Your thoughts? Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Thu Jun 4 23:35:12 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 4 Jun 2020 23:35:12 +0000 Subject: [doc] installation guide maintenance In-Reply-To: References: Message-ID: <20200604233512.c7m5rgtthicmijui@yuggoth.org> On 2020-06-05 00:48:42 +0200 (+0200), Thomas Goirand wrote: [...] > Back at the time, the decision was made because some core > contributors to the install guide just left the project. It wasn't just "some core contributors to the install guide" but rather the entirety of the documentation team, or very nearly so. The one or two people who remained barely had time to help maintain tooling around generating documentation and no time whatsoever to review content. > It was said at the time that moving the maintenance to each > individual project would scale nicer. It scaled at least as well as the alternative, which was no longer updating the documentation at all. There were calls for help over many months, in lots of places, and yet no new volunteers stepped forward to take up the task. > I am now convinced that the exact opposite has > happened: docs are maintained a lot less now, with lower quality, > and less uniformity.
So I am convinced that we did a very bad move > at the time, one that we shouldn't have made. [...] Less documentation maintenance was inevitable no matter what choice was made. The employer of basically all the technical writers in the OpenStack community decided there were better places for it to focus that time and money. The only people left who could write and review documentation were the developers in each project. The task fell on them not because they wanted it, but because there was no one else. That they often don't find a lot of time to devote to documentation is unsurprising, and in that regard representative of most open source software communities I've ever known. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From arne.wiebalck at cern.ch Fri Jun 5 09:04:46 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Fri, 5 Jun 2020 11:04:46 +0200 Subject: [baremetal-sig][ironic] Baremetal whitepaper: the final chapter Message-ID: Dear all, The bare metal white paper [0] is almost finished, thanks to everyone who helped during the past weeks! We plan to have a (hopefully final) session to address the remaining open issues and to round things off during one of the slots proposed in the doodle available at https://doodle.com/poll/afwgy9zs8fi55wqe Everyone is still welcome to give input or feedback. Like before, I will send out the call details once we have settled on the time slot. Cheers, Arne [0] https://docs.google.com/document/d/1BmB2JL_oG3lWXId_NXT9KWcBJjqgtnbmixIcNsfGooA/edit -- Arne Wiebalck CERN IT From zhaoxiaolin at loongson.cn Fri Jun 5 09:51:06 2020 From: zhaoxiaolin at loongson.cn (=?UTF-8?B?6LW15pmT55Cz?=) Date: Fri, 5 Jun 2020 17:51:06 +0800 (GMT+08:00) Subject: [nova][libvirt] Support for MIPS architecture?
Message-ID: <48158965.124e2.17283e426e5.Coremail.zhaoxiaolin@loongson.cn> Hi, I'm trying to run OpenStack on a host with MIPS architecture, but got some errors, and I have fixed them. Many people around me use hosts with MIPS architecture. We hope official support for MIPS can be added, and we can maintain it. Thanks, xiaolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Jun 5 11:35:31 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 05 Jun 2020 12:35:31 +0100 Subject: [nova][libvirt] Support for MIPS architecture? In-Reply-To: <48158965.124e2.17283e426e5.Coremail.zhaoxiaolin@loongson.cn> References: <48158965.124e2.17283e426e5.Coremail.zhaoxiaolin@loongson.cn> Message-ID: <092808511d77eac2cf993708eba8a7acf770f233.camel@redhat.com> On Fri, 2020-06-05 at 17:51 +0800, 赵晓琳 wrote: > Hi, I'm trying to run OpenStack on a host with MIPS architecture, but got some errors, and I have fixed > them. Many people around me use hosts with MIPS architecture. We hope official support for MIPS can be added, and > we can maintain it. Thanks, xiaolin to state that mips is fully supported would require an automated ci running on mips hardware. if you can provide ci resources either to the first party ci or via a third party ci that projects can consume we may be able to test that the basic functionality works. in the long run support for other architectures really requires a concerted effort from a vendor or community that runs openstack on that architecture. there recently has been an effort to add more aarch64 testing but traditionally anything that was not x86 fell to third party hardware vendors to test via third party ci. i.e. the ibm provided powerVM and power KVM CIs to test nova on power pc.
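One concrete detail that comes up early in any such port: code that branches on CPU architecture usually starts from the kernel's machine string (what Python returns from platform.machine()) and normalizes the many MIPS variants into one family. A toy sketch; the mapping below is illustrative and not any project's actual table:

```python
import platform

# Illustrative mapping only: collapse kernel machine strings into coarse
# families. Real tools keep their own (larger) tables.
_FAMILIES = {
    "x86_64": "x86_64", "amd64": "x86_64",
    "aarch64": "aarch64", "arm64": "aarch64",
    "mips": "mips", "mipsel": "mips", "mips64": "mips", "mips64el": "mips",
    "ppc64le": "ppc64le",
}

def arch_family(machine=None):
    """Normalize an arch string; defaults to the current host's machine."""
    m = machine or platform.machine()
    return _FAMILIES.get(m, m)  # unknown values pass through unchanged

print(arch_family("mips64el"))  # mips
```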
From balazs.gibizer at est.tech Fri Jun 5 12:44:30 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 05 Jun 2020 14:44:30 +0200 Subject: [nova][ptg] bumping compute RPC to 6.0 In-Reply-To: References: Message-ID: <62EGBQ.7B8FTYKD5IK53@est.tech> On Mon, May 25, 2020 at 12:05, Artom Lifshitz wrote: > On Mon, May 25, 2020 at 11:58 AM Balázs Gibizer > wrote: >> >> Hi, >> >> [This is a topic from the PTG etherpad [0]. As the PTG time is >> intentionally kept short, let's try to discuss it or even conclude >> it >> before the PTG] >> >> Do we want to bump the compute RPC to 6.0 in Victoria? > > Like it or not, fast-forward upgrades where one or more releases are > skipped is something we have to live with and take into account. What > would be the FFWD upgrade implications of bumping RPC to 6.0 in > Victoria, specifically for someone upgrading from Train to W or later? My understanding is that during FFU you stop both the control plane and the compute services. If you need rolling upgrade of computes then you cannot do FFU but you have to roll forward one release at a time. Cheers, gibi > >> >> We did not have the full view what we can gain[2] with such bump so >> Stephen and I searched through nova and collected the things we >> could >> clean up [1] eventually if we bump the compute RPC to 6.0 in >> Victoria. >> >> I can work on the RPC 6.0 patch during V and I think I can also >> help in >> with the possible cleanups later. 
>> >> Cheers, >> gibi >> >> [0] https://etherpad.opendev.org/p/nova-victoria-ptg >> [1] https://etherpad.opendev.org/p/compute-rpc-6.0 >> [2] >> >> http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-04-16-16.00.log.html#l-103 >> >> >> > From balazs.gibizer at est.tech Fri Jun 5 12:45:40 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 05 Jun 2020 14:45:40 +0200 Subject: [nova][ptg] bumping compute RPC to 6.0 In-Reply-To: References: Message-ID: <44EGBQ.NC63BW3NY7HD2@est.tech> On Mon, May 25, 2020 at 17:54, Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > Do we want to bump the compute RPC to 6.0 in Victoria? > > We did not have the full view what we can gain[2] with such bump so > Stephen and I searched through nova and collected the things we could > clean up [1] eventually if we bump the compute RPC to 6.0 in Victoria. > > I can work on the RPC 6.0 patch during V and I think I can also help > in with the possible cleanups later. On the PTG we agreed that gibi will try to propose a RPC 6.0 bump patch before M3 and we will make a decision at M3 if we merge it or not. Cheers, gibi > > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] https://etherpad.opendev.org/p/compute-rpc-6.0 > [2] > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-04-16-16.00.log.html#l-103 > > From balazs.gibizer at est.tech Fri Jun 5 12:47:02 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 05 Jun 2020 14:47:02 +0200 Subject: [nova][ptg] Can we close old bugs? In-Reply-To: References: Message-ID: On Mon, May 18, 2020 at 16:11, Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. 
As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > We have more than 800 open bugs in nova [1] and the oldest is 8 years > old. > Can we close old bugs? > If yes, what would be the closing criteria? Age and status? > > Personally I would close every bug that is not updated in the last 3 > years and not in INPROGRESS state. We agreed on the PTG that it is not worth the churn to close old bugs. Cheers, gibi > > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] > https://bugs.launchpad.net/nova/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=INCOMPLETE_WITH_RESPONSE > From jeremyfreudberg at gmail.com Fri Jun 5 13:42:45 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Fri, 5 Jun 2020 09:42:45 -0400 Subject: [nova][libvirt] Support for MIPS architecture? In-Reply-To: <092808511d77eac2cf993708eba8a7acf770f233.camel@redhat.com> References: <48158965.124e2.17283e426e5.Coremail.zhaoxiaolin@loongson.cn> <092808511d77eac2cf993708eba8a7acf770f233.camel@redhat.com> Message-ID: OpenStack also has a Multi-Arch Special Interest Group: https://docs.openstack.org/multi-arch-sig/latest/index.html Although (as Sean says) most non-x86 efforts are currently around aarch64, the Multi-Arch Special Interest Group is still interested in tracking (and, if possible, facilitating) MIPS-related efforts. On Fri, Jun 5, 2020 at 7:39 AM Sean Mooney wrote: > > On Fri, 2020-06-05 at 17:51 +0800, 赵晓琳 wrote: > > Hi I'm trying to run openstack on a host with MIPS architecture, but got some errors, and I have fixed > > them. Many people around me use hosts with MIPS architecture. We hope the official can add support for MIPS and we > > can maintain it.
Thanks, xiaolin > to state that mips is fully supported would require an automated ci running on mips hardware. > if you can provide ci resources either to the first party ci or via a third party ci that projects can consume > we may be able to test that the basic functionality works. in the long run support for other architectures really > required a concerted effort from a vendor or community that runs openstack on that architecture. > > there recently has been a effort to add more aarch64 testing but traditionally anything that was not x86 fell to > third party hardware vendors to test via third party ci. i.e. the ibm provided powerVM and power KVM CIs to test nova on > power pc. > > From radoslaw.piliszek at gmail.com Fri Jun 5 14:20:48 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=c5=82aw_Piliszek?=) Date: Fri, 5 Jun 2020 16:20:48 +0200 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core In-Reply-To: References: Message-ID: <741d85b8-9827-9493-29b9-c22f7b30c33e@gmail.com> Hi Folks! I've seen only (very) positive feedback, hence I've just added Doug as a new member of kolla-ansible-core team. Welcome, Doug! :-) -yoctozepto On 2020-05-29 15:18, Radosław Piliszek wrote: > Hi Folks! > > This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, > CC'ed) as Kolla Ansible core. > > Doug coauthored the Nova cells support and helps greatly with monitoring > and logging facilities available in Kolla. > > Please give your feedback in this thread. > > If there are no objections, I will add Doug after a week from now (that > is roughly when PTG is over). 
> > -yoctozepto From balazs.gibizer at est.tech Fri Jun 5 15:25:56 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 05 Jun 2020 17:25:56 +0200 Subject: [nova][neutron][ptg] Future of the routed network support In-Reply-To: References: Message-ID: <8JLGBQ.VE52TOBLO09L2@est.tech> On Wed, May 20, 2020 at 16:03, Balázs Gibizer wrote: > Hi, > > [This is a topic from the PTG etherpad [0]. As the PTG time is > intentionally kept short, let's try to discuss it or even conclude it > before the PTG] > > There is only basic scheduling support in nova for the neutron routed > networks feature (server create with port.ip_allocation=deferred > seems to work). There was multiple attempts in the past to complete > the support (e.g.server create with port.ip_allocation=immediate or > server move operations). The latest attempt started by Matt couple of > cycles ago, and in the last cycle I tried to push that forward[1]. > When I added this topic to the PTG etherpad I thought I will have > time in the Victoria cycle to continue [1] but internal priorities > has changed. So finishing this feature needs some developers. If > there are volunteers for Victoria then please let me know and then we > can keep this as a topic for the PTG but otherwise I will remove it > from the schedule. Sylvain stepped up to continue the nova effort and he also started a spec [1] about it. On the PTG we agreed in Victoria we do the next step on the nova side that does not require any neutron changes right now. If both nova and neutron have extra brainpower then we will continue discussing the more generic cross-project effort. 
Cheers, gibi [1] https://review.opendev.org/#/c/733703 > > Cheers, > gibi > > [0] https://etherpad.opendev.org/p/nova-victoria-ptg > [1] https://review.opendev.org/#/q/topic:routed-networks-scheduling > > From balazs.gibizer at est.tech Fri Jun 5 15:37:39 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 05 Jun 2020 17:37:39 +0200 Subject: [nova][neutron][ptg] How to increase the minimum bandwidth guarantee of a running instance In-Reply-To: <09K3BQ.U60PNNRYHFS31@est.tech> References: <20200519195510.chfuo5byodcrooaj@skaplons-mac> <09K3BQ.U60PNNRYHFS31@est.tech> Message-ID: On Fri, May 29, 2020 at 16:29, Balázs Gibizer wrote: > > > On Wed, May 20, 2020 at 13:50, Balázs Gibizer > wrote: >> >> >> On Tue, May 19, 2020 at 23:48, Sean Mooney >> wrote: >>> On Tue, 2020-05-19 at 21:55 +0200, Slawek Kaplonski wrote: >>> > > [snip] > > > Also I would like to tease out the Neutron team's opinion about the > option of implementing Option B on the neutron side. E.g.: > * The user requests a min bw rule replacement > * Neutron reads the current allocation of the port.device_id (i.e. > instance_uuid) from placement > * Neutron calculates the difference between the bw resource request > of the old min bw rule and the new min bw rule > * Neutron adds this difference to the bw allocation of the RP > indicated by the value of port.binding_profile['allocation'] (which > is an RP uuid) and then PUTs the new instance allocation back to > placement. If the PUT /allocations call succeeds, the rule > replacement is accepted; if the PUT /allocations call fails, the > rule replacement is rejected and the error is reported to the end user. On the PTG we agreed that * There will be an RFE on neutron to allow in-place min bandwidth allocation change based on the above drafted sequence * Triggering a resize due to interface attach or port.resource_request change seems insane. In the future we might look at that from a different perspective. I.e.
What if a resize could take a new parameter that indicates that the resize is not due to flavor change but due to bandwidth change. Cheers, gibi [snip] > > Cheers, > gibi > > > From balazs.gibizer at est.tech Fri Jun 5 15:46:47 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 05 Jun 2020 17:46:47 +0200 Subject: [nova][ptg] virtual PTG In-Reply-To: References: <6VQA9Q.TH47VQEIJBW43@est.tech> Message-ID: Hi, Nova is done with the PTG. I would like to thank you all for participating! I feel that we had a productive virtual PTG. If you have any feedback about the nova sessions the feel free to let me know here or in IRC. Cheers, gibi On Wed, May 27, 2020 at 19:05, Balázs Gibizer wrote: > Hi, > > I've did some reorganization on the PTG etherpad [1] and assigned > time slots for different discussions. Please check the etherpad and > let me know any issues with timing or organization of the topics. > > Cheers, > gibi > > [1] https://etherpad.opendev.org/p/nova-victoria-ptg > > On Tue, May 19, 2020 at 16:29, Balázs Gibizer > wrote: >> Hi, >> >> The PTG is two weeks from now. So I would like to encourage you to >> look at the PTG etherpad [0]. If you have topics for the PTG then >> please start discussing them now on the ML or in a spec. (See the >> threads [1][2][3][4][5] I have already started) Such preparation is >> needed as we will only have limited time to conclude the topics >> during the PTG. 
>> >> Cheers, >> gibi >> >> [0] https://etherpad.opendev.org/p/nova-victoria-ptg >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014916.html >> [2] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014917.html >> [3] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014919.html >> [4] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014921.html >> [5] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014938.html >> >> On Wed, Apr 29, 2020 at 11:38, Balázs Gibizer >>  wrote: >>> Hi, >>> >>> Based on the doodle I've booked the following slots for Nova, in >>> the Rocky room[1]. >>> >>> * June 3 Wednesday 13:00 UTC - 15:00 UTC >>> * June 4 Thursday 13:00 UTC - 15:00 UTC >>> * June 5 Friday 13:00 UTC - 15:00 UTC >>> >>> I have synced with Slaweq and we agreed to have the Neutron - Nova >>> cross project discussion on Friday form 13:00 UTC. >>> >>> If it turns out that we need more time then we can arrange extra >>> slots per topic on the week after. >>> >>> Cheers, >>> gibi >>> >>> [1] https://ethercalc.openstack.org/126u8ek25noy >>> >>> >>> On Fri, Apr 24, 2020 at 16:28, Balázs Gibizer >>>  wrote: >>>> >>>> >>>> On Wed, Apr 15, 2020 at 10:26, Balázs Gibizer >>>>  wrote: >>>>> Hi, >>>>> >>>>> I need to book slots for nova discussions on the official >>>>> schedule[2] of the virtual PTG[1]. I need two things to >>>>> do that: >>>>> >>>>> 1) What topics we have that needs real-time discussion. Please >>>>> add those to the etherpad [3] >>>>> 2) Who wants to join such real-time discussion and what time >>>>> slots works for you. Please fill the doodle[4] >>>>> >>>>> Based on the current etherpad content we need 2-3 slots for nova >>>>> discussion and a half slot for cross project (current >>>>> neutron) discussion. >>>> >>>> We refined our schedule during the Nova meeting [5]. 
Based on that >>>> my current plan is to book one 2 hours slot for 3 >>>> consecutive days (Wed-Fri) during the PTG week. I talked to >>>> Slaweq about a neutron-nova cross project session. We agreed >>>> to book a one hour slot for that. >>>> >>>> If you haven't filled the doodle[4] then please do so until early >>>> next week. >>>> >>>> Thanks >>>> gibi >>>> >>>>> >>>>> Please try to provide your topics and options before the >>>>> Thursday's nova meeting. >>>>> >>>>> Cheers, >>>>> gibi >>>>> >>>>> >>>>> [1] >>>>> http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014126.html >>>>> [2] https://ethercalc.openstack.org/126u8ek25noy >>>>> [3] https://etherpad.opendev.org/p/nova-victoria-ptg >>>>> [4] https://doodle.com/poll/ermn3vxy9v53aayy >>>>> >>>> [5] >>>> http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-04-23-16.00.log.html#l-119 >>>> >>>> >>>> >>> >>> >>> >> >> >> > > > From jp.methot at planethoster.info Fri Jun 5 16:33:01 2020 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Fri, 5 Jun 2020 12:33:01 -0400 Subject: [nova] running nova-scheduler in a container Message-ID: Hi, I’ve been building my own docker images as a means to both learn docker and to see if we can make our own images and run them in production. I’ve figured out how to make most services run fairly well. However, an issue remains with nova-scheduler and I can’t seem to figure out what’s going on. Essentially, when I try to create a VM it loops in a scheduling state and when I try to delete a VM, it loops forever in a deleting state. I’ve narrowed down the culprit to nova-scheduler. As far as I know, nothing appears in the debug logs of my containerized nova-scheduler whenever I do any kind of action, which forces me to believe that nova-scheduler is not receiving any command. From what I’ve always understood, nova-scheduler works through RPC and Rabbitmq.
The fact that this nova-scheduler connects to rabbitmq without issue makes me believe that something else is missing from my container configuration. Does Nova-scheduler listen on a network port? Does it listen on a socket? Is there any way that nova-scheduler could ignore requests sent to it? Jean-Philippe Méthot Senior Openstack system administrator Administrateur système Openstack sénior PlanetHoster inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Fri Jun 5 17:20:15 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Fri, 05 Jun 2020 20:20:15 +0300 Subject: [openstack-ansible] PTG results Message-ID: <213831591377527@mail.yandex.ru> An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Jun 5 17:26:34 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 05 Jun 2020 18:26:34 +0100 Subject: [nova] running nova-scheduler in a container In-Reply-To: References: Message-ID: <289a222591b74e49244ebd5b3fae93c7be6c0f71.camel@redhat.com> On Fri, 2020-06-05 at 12:33 -0400, Jean-Philippe Méthot wrote: > Hi, > > I’ve been building my own docker images as a mean to both learn docker and to see if we can make our own images and > run them in production. I’ve figured out how to make most services run fairly well. However, an issue remains with > nova-scheduler and I can’t seem to figure out what’s going on. > > Essentially, when I try to create a VM it loops in a scheduling state and when I try to delete a VM, it loops forever > in a deleting state. The scheduler is not involved in deleting a VM, so this more or less rules out the scheduler as the root cause. I would guess the issue lies somewhere between the API and the conductor. > I’ve narrowed down the culprit to nova-scheduler. Can you explain why you think it's the nova-scheduler?
> As far as I know, nothing appears in the debug logs of my containerized nova-scheduler whenever I do any kind of > action, which forces me to believe that nova-scheduler is not receiving any command. Did you confirm that the conductor was receiving the build request and calling the scheduler? > > From what I’ve always understood, nova-scheduler works through RPC and Rabbitmq. The fact that this nova-scheduler > connects to rabbitmq without issue makes me believe that something else is missing from my container configuration. > > Does Nova-scheduler listen on network port? No, the scheduler only communicates with the conductor via the RPC bus. > Does it listen on a socket? No. > Is there any way that nova-scheduler could ignore requests sent to it? Only if it was not listening on the correct exchange. I would first check that the API sent an RPC to the conductor and validate that the conductor started the build request. If you see output in the conductor log related to your API queries, then you can check the logs to see if it called the scheduler. > > > Jean-Philippe Méthot > Senior Openstack system administrator > Administrateur système Openstack sénior > PlanetHoster inc. > > > > > > From noonedeadpunk at ya.ru Fri Jun 5 17:47:55 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Fri, 05 Jun 2020 20:47:55 +0300 Subject: [openstack-ansible] PTG results Message-ID: <208941591377694@mail.yandex.ru> Hi there, Sorry for the previous email - it was sent accidentally while still at the draft stage... I did some formatting afterwards :) Here are some of the decisions we came up with during the PTG week: * CentOS 8 topic. ** Install systemd-extras to get systemd-networkd ** Launch without LXC build at first because of the complexity of the solution.
To implement LXC we would need the following: *** Replace what `machinectl` does with Ansible tasks, as it's not working properly without btrfs *** See how much it costs to implement lxd - that would not only resolve the CentOS 8 issue, but is also much appreciated in the community. However, installing it with snapd means it will auto-update; it's possible to delay the updates by up to 60 days, but this still needs fixing/cleaning ** Make nspawn unmaintained, because it mostly relies on machinectl. *** Remove nspawn from docs. *** Remove the code in the cycle afterwards. ** Backport CentOS 8 to Ussuri, drop CentOS 7 for Victoria afterwards. *** On Ussuri, CentOS 7 is going to live without distro support *** write a reno that explains that the distro-installed OSA upgrade path might be tricky/broken for CentOS because of absent CentOS 7 packages for U. * Logs topic ** finish the transition to journald *** check heat for logging *** check for ceph logs - see [1], but this may be a result of the way cephadm containerises all the ceph daemons ** Rewrite the log collection script in Python with the systemd Python bindings and get deprecated messages from services' journals into a separate file ** Check where we don't use the uwsgi role and see if we can use it there now (like designate) ** Check through the logs which roles are covered with tempest and which are not. We have a bunch of roles that run tempest but test only e.g. keystone (not the role itself). ** We add libvirtd_exporter [2] to ansible-role-requirements and offer its deployment on users' own prometheus. Offer prometheus deployment as step 2. Document usage * Promote the ELK stack to a 1st class thing ** Create a separate repo and remove it from openstack-ansible-ops ** provide out-of-the-box deployment with OSA.
Model would be similar to ceph-ansible, where the deployment can be integrated or standalone * Work on speeding up OSA runtime: ** Fight with skipped tasks (i.e. by moving them to separate files that would be included) - most valid for the systemd service, systemd_networkd and python_venv_build roles ** Try to split up variables by group_vars ** Try to use include instead of import again * speed up CI ** try to speed up the zuul required-projects clone process - work with the infra team ** Set *_db_setup_host across all roles to utility and adjust [7] * Build mariadb deps for focal (like the 10.4.12 release). We can use repo.vexxhost.net for hosting them until the mariadb 10.4.14 release. * In case of issues with distro jobs/support, we won't hesitate to remove them or set them to a non-voting state * Remove SUSE support early in Victoria. We already have [4] for this - needs rebasing * Add neutron ovn to the integrated tests (with the perspective of making it the default for new deployments). * Drop resource creation tasks out of os_tempest - OSA and TripleO manage resource creation themselves and pass the required vars to os_tempest for config generation * Add support for zookeeper deployment for service coordination (like telemetry, designate, etc.) * add tooling to bootstrap-ansible to apply provided gerrit patches for roles - start with this [3] * Try to add aarch64 jobs in a separate pipeline once we have some python wheels built up * Migrate group names to remove underscores: "The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names by default, this will change, but still be user configurable on deprecation. This feature will be removed in version 2.10" * add tooling to bootstrap-ansible to apply provided gerrit patches for roles - start with this [5], use it like this [6] * publish common roles (galera, haproxy, memcached, uwsgi, python_venv_build, etc...) to galaxy, renaming them to the ansible-role-* pattern. As a stage 2, consider publishing the os roles.
* add some check for the repo server to verify it's OK (lua linter checks) instead of failing afterwards because of missing dev libraries for hosts [1] https://github.com/ceph/ceph/blob/be117b555fc1bba1048b87a624d542fd629d1ad1/doc/cephadm/operations.rst [2] https://github.com/jrosser/rd-ansible-libvirtd-exporter [3] http://paste.openstack.org/show/794258/ [4] https://review.opendev.org/#/c/725541/ [5] http://paste.openstack.org/show/794258/ [6] http://paste.openstack.org/show/794259/ [7] https://review.opendev.org/#/c/671454 From jp.methot at planethoster.info Fri Jun 5 18:07:05 2020 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Fri, 5 Jun 2020 14:07:05 -0400 Subject: [nova] running nova-scheduler in a container In-Reply-To: <289a222591b74e49244ebd5b3fae93c7be6c0f71.camel@redhat.com> References: <289a222591b74e49244ebd5b3fae93c7be6c0f71.camel@redhat.com> Message-ID: Hi Sean, You’re right, the issue isn’t the scheduler after all. I wasn’t fully aware of how the scheduler and conductor work with each other.
From the logs, I can see that it does pick a compute node as selected host: 2020-06-05 17:50:54.742 520 DEBUG nova.conductor.manager [req-323fe2d5-4ab5-4d3f-a923-deb7ce9da9f9 0ec0114b913646338f16f1ce6457da3a e9dd8de2dad64f3c99078f26330aefb9 - default default] [instance: e1d20902-660e-4359-bb56-2482734b656f] Selected host: compute2.staging.planethoster.net; Selected node: compute2.staging.planethoster.net; Alternates: [(u'compute3.staging.planethoster.net', u'compute3.staging.planethoster.net'), (u'compute1.staging.planethoster.net', u'compute1.staging.planethoster.net')] schedule_and_build_instances /usr/lib/python2.7/site-packages/nova/conductor/manager.py:1371 It then proceeds to block device mapping: 2020-06-05 17:50:54.750 520 DEBUG nova.conductor.manager [req-323fe2d5-4ab5-4d3f-a923-deb7ce9da9f9 0ec0114b913646338f16f1ce6457da3a e9dd8de2dad64f3c99078f26330aefb9 - default default] [instance: e1d20902-660e-4359-bb56-2482734b656f] block_device_mapping [BlockDeviceMapping(attachment_id=,boot_index=0,connection_info=None,created_at=,delete_on_termination=False,deleted=,deleted_at=,destination_type='volume',device_name=None,device_type=None,disk_bus=None,guest_format=None,id=,image_id='364b1fe6-6025-4ca5-8d7d-76fbd71074cb',instance=,instance_uuid=,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=,uuid=,volume_id=None,volume_size=20)] _create_block_device_mapping /usr/lib/python2.7/site-packages/nova/conductor/manager.py:1169 However, the status of the VM in the database stays at scheduling and the conductor doesn’t do anything else, as if it was waiting for something that never comes. So, would that mean that the scheduler and placement actually do their job, but the process gets stuck in cinder? I was under the impression this was a nova issue, because if I shut down my containers with the nova services and boot the same nova services with the same configuration locally, I have no issue whatsoever.
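Sean's point earlier in the thread (the scheduler listens on no network port and only consumes messages for its topic on the RPC bus) also explains how a service subscribed to the wrong topic or exchange appears to silently ignore requests. A toy in-process model of that dispatch pattern, purely illustrative and not the oslo.messaging API:

```python
# Toy model of topic-based RPC dispatch.  A service never "listens" on a
# port; it consumes from the queue bound to its topic, so a message cast
# to a topic nobody subscribed to is simply never seen.
import queue

class Bus:
    """Minimal stand-in for a topic exchange on the message bus."""
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic):
        # A service declares the topic it consumes from and gets a queue.
        return self.topics.setdefault(topic, queue.Queue())

    def cast(self, topic, msg):
        # Fire-and-forget send; unknown topics are dropped in this toy
        # model (a real broker would queue them, unseen by the service).
        if topic in self.topics:
            self.topics[topic].put(msg)

bus = Bus()
scheduler_q = bus.subscribe("scheduler")  # the topic the service consumes
bus.cast("scheduler", {"method": "select_destinations"})
bus.cast("Scheduler", {"method": "select_destinations"})  # wrong topic
print(scheduler_q.qsize())  # 1: only the correctly-addressed cast arrived
```

The same effect in a real deployment would show up exactly as described in the thread: the service connects to rabbitmq fine, yet its debug log never records an incoming request.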
Jean-Philippe Méthot Senior Openstack system administrator Administrateur système Openstack sénior PlanetHoster inc. 4414-4416 Louis B Mayer Laval, QC, H7P 0G1, Canada TEL : +1.514.802.1644 - Poste : 2644 FAX : +1.514.612.0678 CA/US : 1.855.774.4678 FR : 01 76 60 41 43 UK : 0808 189 0423 > Le 5 juin 2020 à 13:26, Sean Mooney a écrit : > > On Fri, 2020-06-05 at 12:33 -0400, Jean-Philippe Méthot wrote: >> Hi, >> >> I’ve been building my own docker images as a mean to both learn docker and to see if we can make our own images and >> run them in production. I’ve figured out how to make most services run fairly well. However, an issue remains with >> nova-scheduler and I can’t seem to figure out what’s going on. >> >> Essentially, when I try to create a VM it loops in a scheduling state and when I try to delete a VM, it loops forever >> in a deleting state. > > the scheduler is not invlvoed in deleteing a vm so this more or less rules out the schduler as teh route cause. > i woudl guess the issue likes somewhere beteen the api and conductor. > >> I’ve narrowed down the culprit to nova-scheduler. > can you explaine why you think its the nova-schduler? >> As far as I know, nothing appears in the debug logs of my containerized nova-scheduler whenever I do any kind of >> action, which forces me to believe that nova-scheduler is not receiving any command. > did you confirm thjat the conductor was reciving the build requierst and calling the schduler. >> >> From what I’ve always understood, nova-scheduler works through RPC and Rabbitmq. The fact that this nova-scheduler >> connects to rabbitmq without issue makes me believe that something else is missing from my container configuration. >> >> Does Nova-scheduler listen on network port? > not the scudler only compunicates withthe conductor via the rpc bus. >> Does it listen on a socket? > no >> Is there any way that nova-scheduler could ignore requests sent to it? > only if it was not listening to the corerct exchange. 
> > i would first change that the api show an rpc to the conductor and validate that the conductor started the buidl > request. > if you see output in the conductor log realted to your api queries then you can check the logs to see if ti called the > schduler. >> >> >> Jean-Philippe Méthot >> Senior Openstack system administrator >> Administrateur système Openstack sénior >> PlanetHoster inc. >> >> >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Fri Jun 5 18:39:18 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Fri, 05 Jun 2020 21:39:18 +0300 Subject: [openstack-ansible] PTG results In-Reply-To: <208941591377694@mail.yandex.ru> References: <208941591377694@mail.yandex.ru> Message-ID: <221811591382158@mail.yandex.ru> PS: Oh, and by the way, more good news from the PTG week. We agreed with the keystone team to add rolling upgrade testing with OSA to their CI. This had been done until 2017 or so, but we have become way more mature since then, and it's high time we renewed that integration :) 05.06.2020, 20:53, "Dmitriy Rabotyagov" : > Hi there, > > Sorry for the previous email - it has been sent accidentally - it was just at the draft stage... I did some formating afterwards:) > > Here are some of the decisions we come up with during the PTG week: > > * CentOS 8 topic. > ** Install systemd-extras to get systemd-networkd > ** Launch without LXC build at first because of the complexity of the solution. To implement LXC we would need the following: > *** Replace what `machinectl` does with Ansible tasks, as it's not working properly without btrfs > *** See how much it costs to implement lxd - that would not only resolve Centos 8 issue, but also is pretty appreciated among comunity. However, installing in with snapd will auto-update. it's possible to delay up to 60 days the updates but still needs to fix/clean > ** Make nspawn unmaintained, because it mostly relies on machinectl. > *** Remove nspawn from docs.
> *** Remove code in cycle afterwards. > ** Backport CentOS 8 to Ussuri, drop CentOS 7 for Victoria afterwards. > *** On Ussuri CentOS 7 is going to live without distro support > *** write reno, that explains that distro installed OSA upgrade path might be tricky/broken for CentOS because of absent Centos7 packages for U. > > * Logs topic > ** finish transition to journald > *** check heat for logging > *** check for ceph logs - see [1] but this may be a result of the way cephadm containerises all the ceph daemons > ** Rewrite log collection script on python with systemd python bindings and get deprecated messages from services' journal into separate file > ** Check where we don't use uwsgi role and see if we can use it there now (like designate) > ** Check through the logs what roles are we covered with tempest, and what not. We have buch of roles that run tempest, but test like only keystone (but not itself). > ** We add libvirtd_exporter [2] to ansible-role-requirements and offer it's deployments on users own prometheus. Offer prometheus deployment as step2. Document usage > > * Promote ELK stack to 1st class thing > ** Create a separate repo and remove from openstack-ansible-ops > ** provide out-of-the-box deployment with OSA. Model would be similar to ceph-ansible where deployment can be integrated or standalone > > * Work on speeding up OSA runtime: > ** Fight with skipped tasks (ie by moving them to separate files that would be included) - most valid for systemd service, systemd_networkd and python_venv_build roles > ** Try to split up variables by group_vars > ** Try to use include instead of imports again > > * speedup ci > ** try to speedup zuul required projects clone process - work with infra team > ** Set *_db_setup_host across all roles to utility and adjust [7] > > * Build mariadb deps for focal (like 10.4.12 release). We can use repo.vexxhost.net for hosting it until mariadb 10.4.14 release. 
> > * In case having issues with distro jobs/support we don't hestitate remove it or setting to non-voting state > > * Remove SUSE support early in Victoria. We already have [4] for this - needs rebasing > > * Add neutron ovn to integrated tests (with perspective to make it default for new deployments). > > * Drop resource creation tasks out of os_tempest - OSA and TripleO manage resource creation themselves and pass required vars to os_tempest for config generation > > * Add support for zookeeper deployment for services coordination (like telemetry, designate, etc) > > * add tooling to bootstrap-ansible to apply provided gerrit patches for roles - start with this [3] > > * Try to add aarch64 jobs with separate pipeline once we have some python wheels built up > > * Migrate group names to remove underscores >   "The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names by default, this will change, but still be user configurable on deprecation. This feature will be removed in version 2.10" > > * add tooling to bootstrap-ansible to apply provided gerrit patches for roles - start with this [5] use it like this [6] > > * publish common roles (galera, haproxy, memcached, uwsgi, python_venv_build, etc...) to galaxy, rename them to ansible-role-* pattern. As a stage 2 consider publishing os roles. 
> > * add some check for repo server, to verify it's ok (lua linters check) instead of failing afterwards because of missing dev libraries for hosts > > [1] https://github.com/ceph/ceph/blob/be117b555fc1bba1048b87a624d542fd629d1ad1/doc/cephadm/operations.rst > [2] https://github.com/jrosser/rd-ansible-libvirtd-exporter > [3] http://paste.openstack.org/show/794258/ > [4] https://review.opendev.org/#/c/725541/ > [5] http://paste.openstack.org/show/794258/ > [6] http://paste.openstack.org/show/794259/ > [7] https://review.opendev.org/#/c/671454 --  Kind Regards, Dmitriy Rabotyagov From najoy at cisco.com Fri Jun 5 19:07:49 2020 From: najoy at cisco.com (Naveen Joy (najoy)) Date: Fri, 5 Jun 2020 19:07:49 +0000 Subject: Networking-vpp 20.05 for VPP 20.05 is now available Message-ID: <56A0AFB2-98DE-4BB6-BA24-81F968710CCB@cisco.com> Hello All, We'd like to invite you to try out Networking-vpp 20.05. As many of you may already know, VPP is a fast user space forwarder based on the DPDK toolkit. VPP uses vector packet processing algorithms to minimize the CPU time spent on each packet to maximize throughput. Networking-vpp is a ML2 mechanism driver that controls VPP on your control and compute hosts to provide fast L2 forwarding under Neutron. This latest version of Networking-vpp is updated to work with VPP 20.05. In this release, we've made the below changes: - We've added a class that defines network type APIs and implements them. The network types supported are VLAN, Flat and GPE. Our intent is to add support for pluggable network types without the need to modify code. We'll be continuing our work to enhance this feature in the next release. - We've updated the code to be compatible with VPP 20.05 API changes. - We've added code to deal with poorly formed security group subnets. For instance, 1.2.3.4/0 is completely acceptable in a security group rule. Previously, VPP accepted this in its API calls but this is not currently the case. 
We've fixed this up at the high level to be acceptable to VPP. - We've fixed an issue with missing package data required for VPP API message validation. - We've dropped VPP 19.04 compatibility code. - We've fixed an issue with the security group mechdriver code in which a SG rule is added but not pushed out to VPP until the next rule is added or the security group is changed. - We've moved VPP specific constants to a dedicated vpp_constants.py file. - Due to a VPP issue, GPE support will be deferred to the 20.05.1 release. - We've been doing the usual round of bug fixes, clean-ups and updates - the code will work with VPP 20.05, the OpenStack Stein release & Python 3. The README [1] explains how you can try out VPP with Networking-vpp using devstack: the devstack plugin will deploy the mechanism driver with VPP 20.05 and should give you a working system with a minimum of hassle. We will be continuing our development for VPP's 20.05.1 release. We welcome anyone who would like to come help us. -- Jerome, Ian & Naveen [1] https://opendev.org/x/networking-vpp/src/branch/master/README.rst -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Jun 5 19:27:36 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Fri, 5 Jun 2020 19:27:36 +0000 Subject: [swift, interop] Swift API level changes to reflect in Interop In-Reply-To: <17270f29780.ce5474ae60362.4615355983998475279@ghanshyammann.com> References: <17270f29780.ce5474ae60362.4615355983998475279@ghanshyammann.com> Message-ID: <9e07b16ac7f14429b9a82a232e0bedfd@AUSX13MPS308.AMER.DELL.COM> Swift team, Need you response, please, on API changes for swift APIs in Ussuri cycle. 
Thanks, Arkady -----Original Message----- From: Ghanshyam Mann Sent: Monday, June 1, 2020 12:34 PM To: Kanevsky, Arkady Cc: openstack-discuss Subject: Re: [swift, interop] Swift API level changes to reflect in Interop [EXTERNAL EMAIL] ---- On Mon, 01 Jun 2020 11:27:16 -0500 wrote ---- > As we create new guidelines for Interop, > We need to see what changes needed for object storage guidelines. > > So a few specific questions for Swift team: > > 1. What new Tempest tests added for Ussuri release? > a. APIs for query and accessing older versions? Is it for S3 APIs or for swift API also? > b. Any new or modified Tempest test for Etags? > c. Any SIGUSR1 test coverage? > d. New tempest tests for swift-ring-builder? > > 2. What are tempest tests deprecated for Ussuri release? > a. Any tempest tests removed for auto_create_account_prefix? We do not deprecate the Tempest test anytime. Tests can be removed if it satisfies the Tempest test-removal policy - https://docs.openstack.org/tempest/latest/test_removal.html Also adding test in Tempest is also not necessary to happen when API is introduced, it can be later so it is hard to tell when that API was introduced from the Tempest test addition. So from the Tempest side, it will not be a clear pic on what all API/capabilities are added/deprecated in which cycle. From the Tempest point of view, there is no difference between deprecated vs non-deprecated APIs, we keep testing it until those are not removed. For example, you can still run Tempest for Cinder v2 APIs. I think swift team can tell from their API changes not from what changed in Tempest. -gmann > > > Any other API test coverage tests missed above? 
> Thanks, > Arkady > From doug at stackhpc.com Fri Jun 5 19:29:24 2020 From: doug at stackhpc.com (Doug Szumski) Date: Fri, 5 Jun 2020 20:29:24 +0100 Subject: [kolla-ansible] Proposing Doug Szumski as Kolla Ansible core In-Reply-To: <741d85b8-9827-9493-29b9-c22f7b30c33e@gmail.com> References: <741d85b8-9827-9493-29b9-c22f7b30c33e@gmail.com> Message-ID: <7d238d82-b013-066b-3211-34050b390965@stackhpc.com> On 05/06/2020 15:20, Radosław Piliszek wrote: > Hi Folks! > > I've seen only (very) positive feedback, hence I've just added Doug as > a new member of kolla-ansible-core team. > > Welcome, Doug! :-) Thank you everyone! It's been a real pleasure working alongside such talented people. Long live Kolla. > > -yoctozepto > > On 2020-05-29 15:18, Radosław Piliszek wrote: >> Hi Folks! >> >> This mail serves to propose Doug Szumski from StackHPC (dougsz @IRC, >> CC'ed) as Kolla Ansible core. >> >> Doug coauthored the Nova cells support and helps greatly with >> monitoring and logging facilities available in Kolla. >> >> Please give your feedback in this thread. >> >> If there are no objections, I will add Doug after a week from now >> (that is roughly when PTG is over). >> >> -yoctozepto From tburke at nvidia.com Fri Jun 5 23:54:10 2020 From: tburke at nvidia.com (Tim Burke) Date: Fri, 5 Jun 2020 16:54:10 -0700 Subject: [swift, interop] Swift API level changes to reflect in Interop In-Reply-To: <9e07b16ac7f14429b9a82a232e0bedfd@AUSX13MPS308.AMER.DELL.COM> References: <17270f29780.ce5474ae60362.4615355983998475279@ghanshyammann.com> <9e07b16ac7f14429b9a82a232e0bedfd@AUSX13MPS308.AMER.DELL.COM> Message-ID: On 6/5/20 12:27 PM, Arkady.Kanevsky at dell.com wrote: > Swift team, > Need you response, please, on API changes for swift APIs in Ussuri cycle. 
> Thanks, > Arkady > > -----Original Message----- > From: Ghanshyam Mann > Sent: Monday, June 1, 2020 12:34 PM > To: Kanevsky, Arkady > Cc: openstack-discuss > Subject: Re: [swift, interop] Swift API level changes to reflect in Interop > > > [EXTERNAL EMAIL] > > ---- On Mon, 01 Jun 2020 11:27:16 -0500 wrote ---- > As we create new guidelines for Interop, > We need to see what changes needed for object storage guidelines. > > > > So a few specific questions for Swift team: > > > > 1. What new Tempest tests added for Ussuri release? > > a. APIs for query and accessing older versions? Is it for S3 APIs or for swift API also? > > b. Any new or modified Tempest test for Etags? > > c. Any SIGUSR1 test coverage? > > d. New tempest tests for swift-ring-builder? > > > > 2. What are tempest tests deprecated for Ussuri release? > > a. Any tempest tests removed for auto_create_account_prefix? > > We do not deprecate the Tempest test anytime. Tests can be removed if it satisfies the Tempest test-removal policy - https://docs.openstack.org/tempest/latest/test_removal.html > > Also adding test in Tempest is also not necessary to happen when API is introduced, it can be later so it is hard to tell when that API was introduced from the Tempest test addition. > > So from the Tempest side, it will not be a clear pic on what all API/capabilities are added/deprecated in which cycle. From the Tempest point of view, there is no difference between deprecated vs non-deprecated APIs, we keep testing it until those are not removed. For example, you can still run Tempest for Cinder v2 APIs. > > I think swift team can tell from their API changes not from what changed in Tempest. > > -gmann > > > > > > > > Any other API test coverage tests missed above? > > Thanks, > > Arkady > > > Sorry for the delay. To my knowledge, no one on the Swift team has added or deprecated any Tempest tests. 
Some more specific details about recent changes: The new object versioning APIs affect both Swift and S3 access. For more information about the new Swift versioning API, see https://docs.openstack.org/swift/latest/middleware.html#object-versioning. For more information about the S3 versioning API, see https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html; we've reproduced the S3 behaviors as faithfully as we could and any differences should be reported as bugs. I don't know what expectations Tempest has around ETags; it may be that no changes are warranted even if the service defaults to RFC-compliant ETags. At any rate, the default ETag quoting behavior is unlikely to change in the near future (measured in years). I'm not sure that USR1 signal handling would really be in-scope for Tempest. I was under the impression that Tempest was performing blackbox testing (and so would not deal with service management issues), though I'd welcome any corrections in my understanding. Along the same lines, swift-ring-builder seems out of scope as well. The auto_create_account_prefix change should not change any client-facing behaviors; rather, it simply moved a config option to a more suitable location. Tim From tburke at nvidia.com Fri Jun 5 23:57:37 2020 From: tburke at nvidia.com (Tim Burke) Date: Fri, 5 Jun 2020 16:57:37 -0700 Subject: [all][InteropWG] Please verify if Swift Project needs any updates for Interop testing of user APIs In-Reply-To: <302651857.2189087.1591205088007@mail.yahoo.com> References: <1617120026.1853380.1591152244867.ref@mail.yahoo.com> <1617120026.1853380.1591152244867@mail.yahoo.com> <302651857.2189087.1591205088007@mail.yahoo.com> Message-ID: On 6/3/20 10:24 AM, prakash RAMCHANDRAN wrote: > > Hi Swift team core members, > > We are trying to run our ambassadors or volunteers  to your > meetings, but if we miss please reply with "Confirmed OK" to > submit these draft to be merged for "OpenStack Powered Storage" > approval. 
> > > > Thanks > Interop WG chair Prakash > & Vice chair Mark > > > > > Confirmed OK -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Sat Jun 6 01:58:19 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Sat, 6 Jun 2020 01:58:19 +0000 Subject: [swift, interop] Swift API level changes to reflect in Interop In-Reply-To: References: <17270f29780.ce5474ae60362.4615355983998475279@ghanshyammann.com> <9e07b16ac7f14429b9a82a232e0bedfd@AUSX13MPS308.AMER.DELL.COM> Message-ID: Thanks Tim. -----Original Message----- From: Tim Burke Sent: Friday, June 5, 2020 6:54 PM To: openstack-discuss at lists.openstack.org Subject: Re: [swift, interop] Swift API level changes to reflect in Interop [EXTERNAL EMAIL] On 6/5/20 12:27 PM, Arkady.Kanevsky at dell.com wrote: > Swift team, > Need your response, please, on API changes for swift APIs in Ussuri cycle. > Thanks, > Arkady > > -----Original Message----- > From: Ghanshyam Mann > Sent: Monday, June 1, 2020 12:34 PM > To: Kanevsky, Arkady > Cc: openstack-discuss > Subject: Re: [swift, interop] Swift API level changes to reflect in > Interop > > > [EXTERNAL EMAIL] > > ---- On Mon, 01 Jun 2020 11:27:16 -0500 wrote ---- > As we create new guidelines for Interop, > We need to see what changes needed for object storage guidelines. > > > > So a few specific questions for Swift team: > > > > 1. What new Tempest tests added for Ussuri release? > > a. APIs for query and accessing older versions? Is it for S3 APIs or for swift API also? > > b. Any new or modified Tempest test for Etags? > > c. Any SIGUSR1 test coverage? > > d. New tempest tests for swift-ring-builder? > > > > 2. What are tempest tests deprecated for Ussuri release? > > a. Any tempest tests removed for auto_create_account_prefix? > > We do not deprecate the Tempest test anytime.
Tests can be removed if > it satisfies the Tempest test-removal policy - > https://docs.openstack.org/tempest/latest/test_removal.html > > Also adding test in Tempest is also not necessary to happen when API is introduced, it can be later so it is hard to tell when that API was introduced from the Tempest test addition. > > So from the Tempest side, it will not be a clear pic on what all API/capabilities are added/deprecated in which cycle. From the Tempest point of view, there is no difference between deprecated vs non-deprecated APIs, we keep testing it until those are not removed. For example, you can still run Tempest for Cinder v2 APIs. > > I think swift team can tell from their API changes not from what changed in Tempest. > > -gmann > > > > > > > > Any other API test coverage tests missed above? > > Thanks, > > Arkady > > > Sorry for the delay. To my knowledge, no one on the Swift team has added or deprecated any Tempest tests. Some more specific details about recent changes: The new object versioning APIs affect both Swift and S3 access. For more information about the new Swift versioning API, see https://docs.openstack.org/swift/latest/middleware.html#object-versioning. For more information about the S3 versioning API, see https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html; we've reproduced the S3 behaviors as faithfully as we could and any differences should be reported as bugs. I don't know what expectations Tempest has around ETags; it may be that no changes are warranted even if the service defaults to RFC-compliant ETags. At any rate, the default ETag quoting behavior is unlikely to change in the near future (measured in years). I'm not sure that USR1 signal handling would really be in-scope for Tempest. I was under the impression that Tempest was performing blackbox testing (and so would not deal with service management issues), though I'd welcome any corrections in my understanding. 
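To make Tim's description of the new versioning behaviour easier to follow, here is a rough sketch of the request sequence a client would issue. This is an illustration based on the middleware documentation linked above, not code from this thread; the storage URL, token, container, and object names are hypothetical, and the sequence is expressed as plain data so it can be inspected without a live Swift cluster:

```python
# Sketch of the client-side request sequence for Swift's new
# object-versioning API (X-Versions-Enabled header, ?versions listing).
# All names below are illustrative placeholders.

def versioning_requests(storage_url, token, container, obj):
    """Return the (method, url, headers) tuples a client would issue."""
    auth = {"X-Auth-Token": token}
    return [
        # 1. Enable versioning on the container.
        ("POST", f"{storage_url}/{container}",
         {**auth, "X-Versions-Enabled": "true"}),
        # 2. Each subsequent overwrite retains the prior version.
        ("PUT", f"{storage_url}/{container}/{obj}", auth),
        ("PUT", f"{storage_url}/{container}/{obj}", auth),
        # 3. List every stored version of the container's objects.
        ("GET", f"{storage_url}/{container}?versions&format=json", auth),
    ]

for method, url, _headers in versioning_requests(
        "https://swift.example.com/v1/AUTH_demo", "secret-token",
        "reports", "q2.csv"):
    print(method, url)
```

Issuing these requests against a real cluster of course requires a valid auth token; the point here is only the order of operations and the headers involved.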
Along the same lines, swift-ring-builder seems out of scope as well. The auto_create_account_prefix change should not change any client-facing behaviors; rather, it simply moved a config option to a more suitable location. Tim From zhaoxiaolin at loongson.cn Sat Jun 6 06:35:41 2020 From: zhaoxiaolin at loongson.cn (=?UTF-8?B?6LW15pmT55Cz?=) Date: Sat, 6 Jun 2020 14:35:41 +0800 (GMT+08:00) Subject: [nova][libvirt] Support for MIPS architecture? In-Reply-To: <092808511d77eac2cf993708eba8a7acf770f233.camel@redhat.com> References: <48158965.124e2.17283e426e5.Coremail.zhaoxiaolin@loongson.cn> <092808511d77eac2cf993708eba8a7acf770f233.camel@redhat.com> Message-ID: <34bc8dd5.1263e.172885798b8.Coremail.zhaoxiaolin@loongson.cn> We can provide CI resources to the first-party CI. But, sorry, exactly what CI resources do we need to provide, and how should we provide them? Do you mean KVM and hosts with the MIPS architecture? On Fri, Jun 5, 2020 at 7:39 AM Sean Mooney wrote: > > On Fri, 2020-06-05 at 17:51 +0800, 赵晓琳 wrote: > > Hi I'm trying to run openstack on a host with MIPS architecture, but got some errors, and I have fixed > > them. Many people around me use hosts with MIPS architecture. We hope the official can add support for MIPS and we > > can maintain it. Thanks, xiaolin > to state that mips is fully supported would require an automated ci running on mips hardware. > if you can provide ci resources either to the first party ci or via a third party ci that projects can consume > we may be able to test that the basic functionality works. in the long run support for other architectures really > required a concerted effort from a vendor or community that runs openstack on that architecture. > > there recently has been a effort to add more aarch64 testing but traditionally anything that was not x86 fell to > third party hardware vendors to test via third party ci. i.e. the ibm provided powerVM and power KVM CIs to test nova on > power pc.
From zhaoxiaolin at loongson.cn Sat Jun 6 06:46:05 2020 From: zhaoxiaolin at loongson.cn (=?UTF-8?B?6LW15pmT55Cz?=) Date: Sat, 6 Jun 2020 14:46:05 +0800 (GMT+08:00) Subject: [nova][libvirt] Support for MIPS architecture? In-Reply-To: References: <48158965.124e2.17283e426e5.Coremail.zhaoxiaolin@loongson.cn> <092808511d77eac2cf993708eba8a7acf770f233.camel@redhat.com> Message-ID: <5d87b4c3.12643.17288611d17.Coremail.zhaoxiaolin@loongson.cn> Thanks for the help. We will pay attention to the Multi-Arch Special Interest Group, and provide CI resources to the openstack community as soon as possible. > > OpenStack also has a Multi-Arch Special Interest Group: > https://docs.openstack.org/multi-arch-sig/latest/index.html > > Although (as Sean says) most non-x86 efforts are currently around > aarch64, the Multi-Arch Special Interest Group is still interested in > tracking (and, if possible, facilitating) MIPS-related efforts. > > > > On Fri, Jun 5, 2020 at 7:39 AM Sean Mooney wrote: > > > > On Fri, 2020-06-05 at 17:51 +0800, 赵晓琳 wrote: > > > Hi I'm trying to run openstack on a host with MIPS architecture, but got some errors, and I have fixed > > > them. Many people around me use hosts with MIPS architecture. We hope the official can add support for MIPS and we > > > can maintain it. Thanks, xiaolin > > to state that mips is fully supported would require an automated ci running on mips hardware. > > if you can provide ci resources either to the first party ci or via a third party ci that projects can consume > > we may be able to test that the basic functionality works. in the long run support for other architectures really > > required a concerted effort from a vendor or community that runs openstack on that architecture. > > > > there recently has been a effort to add more aarch64 testing but traditionally anything that was not x86 fell to > > third party hardware vendors to test via third party ci. i.e. 
the ibm provided powerVM and power KVM CIs to test nova on > > power pc. > > > > From fungi at yuggoth.org Sat Jun 6 13:13:32 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 6 Jun 2020 13:13:32 +0000 Subject: [nova][libvirt] Support for MIPS architecture? In-Reply-To: <34bc8dd5.1263e.172885798b8.Coremail.zhaoxiaolin@loongson.cn> References: <48158965.124e2.17283e426e5.Coremail.zhaoxiaolin@loongson.cn> <092808511d77eac2cf993708eba8a7acf770f233.camel@redhat.com> <34bc8dd5.1263e.172885798b8.Coremail.zhaoxiaolin@loongson.cn> Message-ID: <20200606131332.v4puyfzpd4tecztc@yuggoth.org> On 2020-06-06 14:35:41 +0800 (+0800), 赵晓琳 wrote: > We can provide ci resources to the first party. But, sorry, what > detailed ci resources do we need to provide? and how to provide? > Do you mean KVM and hosts with MIPS architecture? [...] Here's a bit of documentation on the process and what you can expect: https://docs.opendev.org/opendev/system-config/latest/contribute-cloud.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pramchan at yahoo.com Sun Jun 7 05:55:49 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Sun, 7 Jun 2020 05:55:49 +0000 (UTC) Subject: [all][InteropWG] Last call for interop capabilities from OpenStack participating projects References: <1079982233.463634.1591509349922.ref@yahoo.com> Message-ID: <1079982233.463634.1591509349922@yahoo.com> Hi all, We are happy to report that a score of people turned up for the Interop WG discussions on our first call on Monday, June 1st. On Friday, June 5th we had a review call to consolidate and follow up on guidelines for the OpenStack Powered Trademark Program.
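The guideline comparisons discussed in this thread boil down to diffing capability lists between two revisions. The following is only an illustrative sketch: the dictionaries are simplified, made-up stand-ins for the real guideline JSON files, not the actual interop schema or contents of 2019.11.json / 2020.06.json.

```python
import json

# Illustrative sketch of diffing two interop guideline revisions.
# The capability lists are invented placeholders, not real guideline data.

def diff_guidelines(old, new):
    """Report capabilities added/removed per status between revisions."""
    out = {}
    for status in sorted(set(old) | set(new)):
        before = set(old.get(status, []))
        after = set(new.get(status, []))
        out[status] = {"added": sorted(after - before),
                       "removed": sorted(before - after)}
    return out

prev = {"required": ["identity-v3", "compute-servers"],
        "advisory": ["volumes-v3-snapshot"]}
curr = {"required": ["identity-v3", "compute-servers", "volumes-v3-snapshot"],
        "advisory": []}

print(json.dumps(diff_guidelines(prev, curr), indent=2))
```

A real comparison would walk the schema's nested component/capability structure, but the added/removed bookkeeping is the same idea.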
https://etherpad.opendev.org/p/victoria-ptg-interop-wg
https://etherpad.opendev.org/p/interop

Please note the terminology - https://github.com/openstack/interop/blob/master/doc/source/schema/2.0.rst#capabilities
Capabilities - required, advisory, deprecated, removed

              Stein    Train (add)    Ussuri (delete)    Victoria
Keystone
Glance
Swift
Nova                                  NA (Ironic BM)
Neutron
Cinder                                v2
Add-ons (OpenStack Powered - Orchestration & DNS):
Heat
Designate

Based on the current review there are no net new APIs to be added in the Ussuri cycle with respect to Stein that are eligible for the Interop programs. The above project teams can go through the code, the RefStack server guidelines page, and the RefStack client.

Answers to questions by Artem Goncharov (irc: gtema) and follow-up actions:
- refstack-client seems to be orphaned now and offered changes are not reviewed (gtema)
  - There is an access failure which is being addressed by Mark; after that we will review this with gtema's help and file a ticket with the bug details
- A link between a guideline and the required Tempest version is desired, since it currently takes effort to nail down failures by jumping between Tempest versions (gtema)
  - This is a feature-add request; it needs a story or bug so it can be addressed in Tempest by gtema
- Network extensions API - does it make sense to include it?
  - Note that any L2/L3 CRUD on networks, subnets, and ports, and all their attributes, is covered. The plug-ins and admin user requirements for extensions are out of scope and hence not considered. However, you may verify with the Neutron team what their proposal to this team is.
- Resource tagging APIs - does it make sense to include them?
  - Resource tags, again, are optional attributes on a resource ID and are very specific to a resource under the trait library, or ad-hoc to vendor-specific plugins and drivers, and so will not qualify for interoperability. You can still propose any new resources used by any of the core projects, and justify why they are required for interop testing.

Attached are the current diffs for 2020.06.json vs 2019.11.json, and similar diffs for the Orchestration and DNS add-ons. Seeking volunteers to update the web page write-up and logos to be proposed for "OpenStack Powered Orchestration" & "OpenStack Powered DNS". The Heat and Designate teams can suggest what they would like to bring to the table for drafting and approval by the board.

Thanks
Prakash
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: iterop-draft-diff.txt URL: From zigo at debian.org Sun Jun 7 20:50:39 2020 From: zigo at debian.org (Thomas Goirand) Date: Sun, 7 Jun 2020 22:50:39 +0200 Subject: [doc] installation guide maintenance In-Reply-To: <20200604233512.c7m5rgtthicmijui@yuggoth.org> References: <20200604233512.c7m5rgtthicmijui@yuggoth.org> Message-ID: On 6/5/20 1:35 AM, Jeremy Stanley wrote: > On 2020-06-05 00:48:42 +0200 (+0200), Thomas Goirand wrote: > [...] >> Back at the time, the decision was made because some core >> contributors to the install guide just left the project. > > It wasn't just "some core contributors to the install guide" but > rather the entirety of the documentation team, or very nearly so. > The one or two people who remained barely had time to help maintain > tooling around generating documentation and no time whatsoever to > review content. > >> It was at the time said that moving the maintenance to each >> individual projects would scale nicer. > > It scaled at least as well as the alternative, which was no longer > updating the documentation at all.
> There were calls for help over > many months, in lots of places, and yet no new volunteers stepped > forward to take up the task. > >> I am now convince that this is the exact opposite that has >> happened: docs are maintained a lot less now, with lower quality, >> and less uniformity. So I am convince that we did a very bad move >> at the time, one that we shouldn't have made. > [...] > > Less documentation maintenance was inevitable no matter what choice > was made. The employer of basically all the technical writers in the > OpenStack community decided there were better places for it to focus > that time and money. The only people left who could write and review > documentation were the developers in each project. The task fell on > them not because they wanted it, but because there was no one else. > That they often don't find a lot of time to devote to documentation > is unsurprising, and in that regard representative of most open > source software communities I've ever known. Yes, some people left. Yes, calls for help have been ignored. But still, IMO, there were alternatives to destroying the install guide, and that wasn't the solution. Anyways, it's probably too late to go back, so let's not continue discussing this. :) BTW, where may I find that old install guide? Is there somewhere it can still be seen online? I've searched for it and couldn't find it anymore. Thomas From zigo at debian.org Sun Jun 7 20:54:17 2020 From: zigo at debian.org (Thomas Goirand) Date: Sun, 7 Jun 2020 22:54:17 +0200 Subject: [puppet] puppet-openstack meeting Message-ID: <8dd8084a-316a-d218-7d75-ae562a8c4ef2@debian.org> Hi guys! Wouldn't it be nice to organize some meet-up online, for example using the jitsi instance that the infra team built? I, at least, would greatly enjoy seeing you face-to-face (ie: Tobias, Takashi, Zheng, as I know already Emilien and Alex). Moreover, it'd be nice to discuss the plans for Victoria. Your thoughts?
Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Sun Jun 7 23:09:32 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 7 Jun 2020 23:09:32 +0000 Subject: [doc] installation guide maintenance In-Reply-To: References: <20200604233512.c7m5rgtthicmijui@yuggoth.org> Message-ID: <20200607230932.udto75h6i6hhlfmp@yuggoth.org> On 2020-06-07 22:50:39 +0200 (+0200), Thomas Goirand wrote: [...] > BTW, where may I find that old install guide? Is there somewhere it can > still be seen online? I've searched for it and couldn't find it anymore. Looks like the one for Newton was the last version to include Debian: https://docs.openstack.org/newton/install/ The combined guide seems to have continued into Ocata (at a similar URL) but only includes openSUSE, CentOS and Ubuntu. It appears Pike was when we went to the split guide assembled piecemeal from different project repositories. You can also find old source code in branches of the openstack/openstack-manuals repository on OpenDev: https://opendev.org/openstack/openstack-manuals/src/branch/stable/newton/doc/install-guide/source Hope that helps! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From yumeng_bao at yahoo.com Mon Jun 8 04:19:11 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Mon, 8 Jun 2020 12:19:11 +0800 Subject: =?utf-8?Q?Re=EF=BC=9A[cyborg][ptg]_Cyborg_PTG_Summary_for_the_Da?= =?utf-8?Q?y_=E2=80=94_Come_in_series?= References: <8FE3AA0B-376A-41EF-9551-984ABFC46937.ref@yahoo.com> Message-ID: <8FE3AA0B-376A-41EF-9551-984ABFC46937@yahoo.com> Hi all, All topics for the Cyborg PTG are done! Notes and comments are on the etherpad. Thank you all for the great topics, participation, discussion, etc.! Topics that have come to a conclusion are marked with [AGREE] or [ACTION], while topics that need further discussion are not.
Listed below are important topics that need further discussion, either in a future meeting, on the mailing list, or in a spec:

1. 3rd party CI support for new drivers
   * define which tempest tests need to be used in 3rd party CI
2. cyborg/neutron/nova SR-IOV integration
   * a spec in nova will be proposed to continue the discussion

If you have any feedback please reply here or in IRC. Regards, Yumeng From lucasagomes at gmail.com Mon Jun 8 08:56:51 2020 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 8 Jun 2020 09:56:51 +0100 Subject: [neutron] Neutron Bug Deputy Report Jun 1-8 Message-ID: Hi, This is the Neutron bug report of week 23 (1 Jun - 8 Jun).

Untriaged:
- "Inconsistent coding while upgrading"
  * https://bugs.launchpad.net/neutron/+bug/1881685
- "Centralized SNAT failover does not recover until "systemctl restart neutron-l3-agent" on transferred node"
  * https://bugs.launchpad.net/neutron/+bug/1881995
- "Quality of Service (QoS) in neutron"
  * https://bugs.launchpad.net/neutron/+bug/1882072

Critical:
- "[OVN]IPv6 hot plug tempest tests are failing with OVN backend"
  * https://bugs.launchpad.net/neutron/+bug/1881558
- "Neutron-vpnaas scenario tests are very unstable"
  * https://bugs.launchpad.net/neutron/+bug/1882220

High:
- "[OVN] Virtual port type set while port has no parents"
  * https://bugs.launchpad.net/neutron/+bug/1881759
  * Fix released
  * Assigned to Maciej Jozefczyk
  * Patch proposed: https://review.opendev.org/732690
- "neutron-ipset-cleanup fails with Traceback: Unhandled error: oslo_config.cfg.NoSuchOptError: no such option AGENT in group [DEFAULT]"
  * https://bugs.launchpad.net/neutron/+bug/1881771
  * Fix released
  * Assigned to Frode Nordahl
  * Patch proposed: https://review.opendev.org/732701
- "neutron-ovn-db-sync-util fails with KeyError: 'port_security_enabled' when port security not enabled"
  * https://bugs.launchpad.net/neutron/+bug/1882061
  * In-Progress
  * Assigned to Frode Nordahl
  * Patch proposed: https://review.opendev.org/733512
- "neutron-ovn-db-sync-util stops with Traceback due to L3 service plugin expecting ``ovn`` mech driver to be present while ``ovn-sync`` mech driver is loaded"
  * https://bugs.launchpad.net/neutron/+bug/1882202
  * In-Progress
  * Assigned to Frode Nordahl
  * Patch proposed: https://review.opendev.org/733775

Medium:
- "neutron-ovn-db-sync-util fails with Traceback when notify_nova config is present"
  * https://bugs.launchpad.net/neutron/+bug/1882020
  * Fix released
  * Assigned to Frode Nordahl
  * Patch proposed: https://review.opendev.org/733481

Invalid:
- "Neutron CI doesn't run tests that require advanced image"
  * https://bugs.launchpad.net/neutron/+bug/1882060

From thierry at openstack.org Mon Jun 8 09:19:17 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 8 Jun 2020 11:19:17 +0200 Subject: [largescale-sig] Next meeting: June 10, 8utc Message-ID: <9f177eac-aa50-c70c-3734-33d4534fd613@openstack.org> Hi everyone, The Large Scale SIG will have a meeting this week on Wednesday, June 10 at 8 UTC[1] in the #openstack-meeting-3 channel on IRC: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200610T08 Feel free to add topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting A reminder of the TODOs we had from last meeting, in case you have time to make progress on them: - masahito to post initial oslo.metrics code to openstack/oslo.metrics Talk to you all on Wednesday, -- Thierry Carrez From stephenfin at redhat.com Mon Jun 8 11:36:16 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Mon, 08 Jun 2020 12:36:16 +0100 Subject: [doc] installation guide maintenance In-Reply-To: References: Message-ID: On Thu, 2020-06-04 at 18:40 +0900, Akihiro Motoki wrote: > Hi, > > During the doc migration, the installation guide was moved to > individual project repos. > I see problems in installation guide maintenance after the migration. > > - The installation guide is not maintained well perhaps in many projects.
> AFAIK they are not verified well at least in horizon and neutron. > - Even if we try to verify it, it is a tough thing because we need to > prepare base distribution > and setup other projects together (of course it depends on projects). > This leads to a development bandwidth and priority issue. > - We sometimes receive bug reports on the installation guide, but it > is not easy for the > upstream team confirm them and verify fixes. > > I guess the installation guides are not being maintained well from > these reasons. > Any thoughts on this situation? (This is my first question.) As has been summarized above and elsewhere, this is almost certainly a bandwidth and priority issue. > If a project team has no bandwidth to maintain it, what is a recommended way? > I see several options: > - Drop the installation guide (per OS or as a whole) -- If drop what > should the criteria be? > - Keep the installation guide with warnings like "the upstream team > does not maintain it and just host it". > - Keep it as-is (unmaintained) Personally, I'd love to see per-cycle hackathons where people sit down and install an OpenStack deployment manually on each OS; however, I can't see that happening and most teams have no interest in setting aside time each cycle to validate this themselves. The latter point is unfortunate, since the process of doing such manual work often serves to highlight the uglier corners of project configuration, but it is also understandable since this work is often tedious and unrewarding. As you've mentioned below, most (all?) people using OpenStack for anything more than a learning tool are installing using a deployment tool. I personally suspect the number of people using OpenStack on CentOS/RHEL without TripleO/Director or something like Kolla is exceedingly low, bordering on non-existent.
Similarly, I would assume the Ubuntu users are using a combo of MAAS/Juju while the number of people installing OpenStack on SUSE with anything is likely approaching zero now, given their divestment. All in all, while I'd rather we didn't have to do so, I think deleting the installation guides is probably the right move. It sucks and will result in a worse experience for our users, but it's just a reflection of reality. Thanks for bringing this up, Stephen > Finally, I am not sure we need to maintain step-by-step guides on > installations and > I wonder we need to drop them at some time. > Most users deploy OpenStack using deployment projects (or their own > deployment tools). > Step-by-step guides might be useful from educational perspective > but unmaintained guides are not useful. > > Thanks in advance, > > -- Akihiro Motoki (amotoki) From balazs.gibizer at est.tech Mon Jun 8 11:42:15 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 08 Jun 2020 13:42:15 +0200 Subject: [nova] Runway for Victoria Message-ID: Hi, Runway etherpad [1] with 3 slots and an empty queue is open for Victoria. Cheers, gibi [1] https://etherpad.opendev.org/p/nova-runways-victoria From sean.mcginnis at gmx.com Mon Jun 8 12:29:47 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 8 Jun 2020 07:29:47 -0500 Subject: [cinder][kayobe][kolla][OSA][tripleo] Ussuri cycle-trailing release deadline Message-ID: <0afc05c2-bfec-a3b3-14d6-8c2541b3dd94@gmx.com> Hello teams with deliverables following the cycle-trailing release model! This is just a reminder about wrapping those Ussuri trailing deliverables up. A few cycles ago we extended the deadline for cycle-trailing to give more time, so the actual deadline isn't until August 13: https://releases.openstack.org/victoria/schedule.html#v-cycle-trail If things are ready sooner than that though, all the better for our downstream consumers.
Just for awareness, the following cycle-trailing deliverables will need their final releases at some point in the next few months:

cinderlib
kayobe
kolla-ansible
kolla-cli
kolla
openstack-ansible
os-refresh-config

Thanks! Sean From mnaser at vexxhost.com Mon Jun 8 14:12:53 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 8 Jun 2020 10:12:53 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in the openstack/governance repository.

# CHANGES PENDING
- Select migrate-to-focal goal for Victoria cycle: https://review.opendev.org/#/c/731213/
- Propose Kendall Nelson for vice-chair: https://review.opendev.org/#/c/733141/
- Clarify the support for linux distro: https://review.opendev.org/#/c/727238/
- Add njohnston liaison preference: https://review.opendev.org/#/c/733269/
- Update joining-tc.rst to be general tc-guide.rst: https://review.opendev.org/#/c/732983/
- Remove tricircle project team: https://review.opendev.org/#/c/731566/
- Add diablo_rojo liaison preferences: https://review.opendev.org/#/c/733284/
- Rename ansible-role-lunasa-hsm deliverable: https://review.opendev.org/#/c/731313/
- Merging TC and UC into a single body: https://review.opendev.org/#/c/734074/
- Retire swift-specs: https://review.opendev.org/#/c/733901/

# RETIRED PROJECTS
- Remove congress project team: https://review.opendev.org/#/c/728818/

# GENERAL CHANGES
- Switch to newer openstackdocstheme version: https://review.opendev.org/#/c/733313/

Thanks! Regards, -- Mohammed Naser VEXXHOST, Inc.
From gmann at ghanshyammann.com Mon Jun 8 14:40:23 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 08 Jun 2020 09:40:23 -0500 Subject: [swift, interop] Swift API level changes to reflect in Interop In-Reply-To: References: <17270f29780.ce5474ae60362.4615355983998475279@ghanshyammann.com> <9e07b16ac7f14429b9a82a232e0bedfd@AUSX13MPS308.AMER.DELL.COM> Message-ID: <172946011fc.ee8257fc327311.7454376927279445379@ghanshyammann.com> ---- On Fri, 05 Jun 2020 18:54:10 -0500 Tim Burke wrote ---- > On 6/5/20 12:27 PM, Arkady.Kanevsky at dell.com wrote: > > Swift team, > > Need you response, please, on API changes for swift APIs in Ussuri cycle. > > Thanks, > > Arkady > > > > -----Original Message----- > > From: Ghanshyam Mann > > Sent: Monday, June 1, 2020 12:34 PM > > To: Kanevsky, Arkady > > Cc: openstack-discuss > > Subject: Re: [swift, interop] Swift API level changes to reflect in Interop > > > > > > [EXTERNAL EMAIL] > > > > ---- On Mon, 01 Jun 2020 11:27:16 -0500 wrote ---- > As we create new guidelines for Interop, > We need to see what changes needed for object storage guidelines. > > > > > > So a few specific questions for Swift team: > > > > > > 1. What new Tempest tests added for Ussuri release? > > > a. APIs for query and accessing older versions? Is it for S3 APIs or for swift API also? > > > b. Any new or modified Tempest test for Etags? > > > c. Any SIGUSR1 test coverage? > > > d. New tempest tests for swift-ring-builder? > > > > > > 2. What are tempest tests deprecated for Ussuri release? > > > a. Any tempest tests removed for auto_create_account_prefix? > > > > We do not deprecate the Tempest test anytime. Tests can be removed if it satisfies the Tempest test-removal policy - https://docs.openstack.org/tempest/latest/test_removal.html > > > > Also adding test in Tempest is also not necessary to happen when API is introduced, it can be later so it is hard to tell when that API was introduced from the Tempest test addition. 
> > So from the Tempest side, it will not be a clear pic on what all API/capabilities are added/deprecated in which cycle. From the Tempest point of view, there is no difference between deprecated vs non-deprecated APIs, we keep testing it until those are not removed. For example, you can still run Tempest for Cinder v2 APIs. > > > > I think swift team can tell from their API changes not from what changed in Tempest. > > > > -gmann > > > > > > > > > > > > > Any other API test coverage tests missed above? > > > Thanks, > > > Arkady > > > > > > Sorry for the delay. To my knowledge, no one on the Swift team has added > or deprecated any Tempest tests. Some more specific details about recent > changes: Yeah, the swift tests have hardly changed since they were added. We kept them working for interop requirements.
If they are passing (which is yes as per gate results) then changes in swift are ok until those are not config driven. -gmann > > Along the same lines, swift-ring-builder seems out of scope as well. > > The auto_create_account_prefix change should not change any > client-facing behaviors; rather, it simply moved a config option to a > more suitable location. > > Tim > > > From helena at openstack.org Mon Jun 8 15:42:01 2020 From: helena at openstack.org (helena at openstack.org) Date: Mon, 8 Jun 2020 11:42:01 -0400 (EDT) Subject: OpenStack Glossary Message-ID: <1591630921.362613753@apps.rackspace.com> Greetings OpenStack Community! We are on a mission to create a glossary of OpenStack related terms and want your help! As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Similarly, having a glossary of terms has proven to be a good SEO tactic to gain more web traffic; by creating this glossary, we are hoping to have greater visibility to potential contributors, users, and supporting organizations. This is where you come in! We need your help to define the terms that we can use to educate future contributors. Below is an etherpad link. We ask that you add, edit, review, and collaborate on this etherpad to help us make the OpenStack community more accessible and understandable. If you think of more terms to add to the list, please do! As always, feel free to reach out with any questions. Cheers, Helena Spease OpenStack: [ https://etherpad.opendev.org/p/OpenStack_Glossary ]( https://etherpad.opendev.org/p/OpenStack_Glossary ) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arne.wiebalck at cern.ch Mon Jun 8 16:06:50 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 8 Jun 2020 18:06:50 +0200 Subject: [baremetal-sig][ironic] Baremetal whitepaper: the final chapter In-Reply-To: References: Message-ID: <959daedd-4278-520d-723d-47ef1e33c124@cern.ch> Dear all, It seems that Wed June 10th at 2pm UTC works best. I scheduled a meeting here: https://cern.zoom.us/j/93137948560 See you there! Cheers Arne On 05.06.20 11:04, Arne Wiebalck wrote: > Dear all, > > The bare metal white paper [0] is almost finished, thanks > to everyone who helped during the past weeks! > > We plan to have a (hopefully final) session to address > the remaining open issues and to round things off during > one of slots proposed in the doodle available at > > https://doodle.com/poll/afwgy9zs8fi55wqe > > Everyone is still welcome to give input or feedback. > > Like before, I will send out the call details once we have > settled on the time slot. > > Cheers, >  Arne > > [0] > https://docs.google.com/document/d/1BmB2JL_oG3lWXId_NXT9KWcBJjqgtnbmixIcNsfGooA/edit > > > -- > Arne Wiebalck > CERN IT > From fungi at yuggoth.org Mon Jun 8 16:53:28 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 8 Jun 2020 16:53:28 +0000 Subject: [docs] OpenStack Glossary In-Reply-To: <1591630921.362613753@apps.rackspace.com> References: <1591630921.362613753@apps.rackspace.com> Message-ID: <20200608165328.jh3r7a5axaxyggrs@yuggoth.org> On 2020-06-08 11:42:01 -0400 (-0400), helena at openstack.org wrote: > We are on a mission to create a glossary of OpenStack related > terms and want your help! [...] Is that effort intended to replace the current https://docs.openstack.org/glossary/ or are you coming up with new entries to add to the existing one, or does it serve a separate purpose entirely? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kennelson11 at gmail.com Mon Jun 8 17:08:26 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 8 Jun 2020 10:08:26 -0700 Subject: [all][PTG] Feedback Etherpad Message-ID: Hello Everyone! Thanks to everyone who made the first ever virtual PTG a success! Instead of doing a live feedback session, we are asking you to share your experiences with us in a feedback etherpad! We started circulating it even before the PTG, but wanted to make sure you all saw it and had a chance to voice your feedback. We really appreciate it! -the Kendalls (diablo_rojo & wendallkaters) [1] https://etherpad.opendev.org/p/June2020-PTG-Feedback -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Mon Jun 8 19:16:21 2020 From: allison at openstack.org (Allison Price) Date: Mon, 8 Jun 2020 14:16:21 -0500 Subject: OSF Community Meeting - June 25 & 26 Message-ID: <5F79380E-1A58-475C-8BBA-6861D8B17150@openstack.org> Hi everyone, On June 25 (1300 UTC) and June 26 (0200 UTC), we will be holding the quarterly OSF community meeting [1], which will cover project updates from all OSF-supported projects and events. The OpenStack community is encouraged to prepare a slide and present a 3-5 minute update on the project and community's progress. The update should cover progress since the last community meeting on April 2. If you would like to volunteer to present the OpenStack update for one meeting (or both!) please sign up here [1]. We are aiming to finalize the content by Friday, June 19. If you missed the Q1 community meeting, you can see how the upcoming meeting will be structured in this recording [2] and this slide deck [3]. If you have any questions, please let me know. Thanks!
Allison [1] https://etherpad.opendev.org/p/OSF_Community_Meeting_Q2 [2] https://zoom.us/rec/share/7vVXdIvopzxIYbPztF7SVpAKXYnbX6a82iMaqfZfmEl1b0Fqb6j3Zh47qPSV_ar2 [3] https://docs.google.com/presentation/d/1l05skj_BCfF8fgYWu4n0b1rQmbNhHp8sMeYcb-v-rdA/edit#slide=id.g82b6d187d5_0_525 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knikolla at bu.edu Mon Jun 8 19:55:48 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Mon, 8 Jun 2020 19:55:48 +0000 Subject: [keystone] No meeting June 9th Message-ID: <57567D30-BC75-43C4-8563-E5500335C465@bu.edu> Hi all, Since we met during the PTG on Thu-Fri, I don't think it's necessary to have the weekly meeting this Tuesday on June 9th. Have 1 hour back :) Best, Kristi From openstack at nemebean.com Mon Jun 8 20:27:23 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 8 Jun 2020 15:27:23 -0500 Subject: [oslo] Ping List for Victoria In-Reply-To: <58089512-5990-d0a8-35ab-876bace6fd91@nemebean.com> References: <58089512-5990-d0a8-35ab-876bace6fd91@nemebean.com> Message-ID: <9e5d33d0-a1e8-1345-37c9-f48e7f90dadd@nemebean.com> Oh hey, I already sent this. May 18th seems like an eternity ago. o.O Anyway, since we skipped a couple of meetings due to holidays and the PTG, we'll keep using the old ping list until next week. After next week I'll switch to the new one, so if you want to keep getting pings that's your deadline. -Ben On 5/18/20 12:09 PM, Ben Nemec wrote: > Hi, > > With the start of a new cycle, we refresh our courtesy ping list for the > start of the meeting. We do this to avoid spamming people who may no > longer be working on Oslo but haven't explicitly removed their name from > the ping list. > > To that end, I've added a new ping list above the agenda template[0]. If > you wish to continue receiving courtesy pings, please add your name > there. In a couple of weeks we will switch to using this new list. > > Thanks. 
> > -Ben > > 0: https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_Template From whayutin at redhat.com Mon Jun 8 21:48:27 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 8 Jun 2020 15:48:27 -0600 Subject: [tripleo] no irc meeting tomorrow Message-ID: Greetings, As discussed at the PTG, there will only be one TripleO irc meeting every three weeks. I'll update the docs and schedule to reflect that. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Tue Jun 9 01:30:18 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Tue, 9 Jun 2020 01:30:18 +0000 (UTC) Subject: [all][InteropWG] Tuesday June 9, UTC 17 - Meeting Reminder to review OpenStack Powered Logo Guidelines and your feedback References: <563189325.253183.1591666218100.ref@mail.yahoo.com> Message-ID: <563189325.253183.1591666218100@mail.yahoo.com>

The InteropWG is inviting you to a scheduled Zoom meeting on Tuesday June 9th, 10-10.45 AM PDT / 13-13.45 EDT / 17-17.45 UTC.
Join from a browser: https://Dell.zoom.us/wc/join/99829924353 , Password: 101045 , or join the Zoom meeting: https://Dell.zoom.us/j/99829924353?pwd=bU9NTkZQVkd3SmIrVGErZWFGVnlmUT09 , and in parallel on IRC: #openstack-interopwg or #openstack

For background, review:
1. https://www.slideshare.net/markvoelker/interopwg-intro-vertical-programs-jan-2017
2. https://opendev.org/openstack/interop/src/branch/master/2020.06.json
3. https://etherpad.opendev.org/p/victoria-ptg-interop-wg

Agenda:
1. Agenda bashing
2. Presentation for comments on the Ussuri Interop Draft Guidelines (need verbiage and a logo for adding OpenStack Powered DNS & Orchestration - https://www.openstack.org/brand/interop/). Review project APIs for the Interop Ussuri cycle: a. keystone, b. glance, c. nova, d. neutron, e. cinder, f. swift, g. designate (OpenStack Powered DNS logo?), h. heat (OpenStack Powered Orchestrator logo?)
3. Feedback by contributors and any other PTG representatives from the community
4. Conclusion by Mark Voelker for the Ussuri Draft - Vice Chair (Marketplace, logo, irc, conference-bridge, issues in refstack-client, tempest features, ...)
5. Approval, by verbal or irc vote in the Interop WG, for the Draft to go to the Board for the Thursday June 11 schedule - https://wiki.openstack.org/wiki/Governance/Foundation/11June2020BoardMeeting

Finally, any suggestions for the Victoria cycle to explore:
1. Ironic or bare metal (Open Infrastructure Powered logo?)
2. NFV (tacker? NFVi &/or VNF/CNF APIs?) (OpenStack NFV logo?)
3. Ideas needed for heterogeneous cloud compliance (suggest use cases) (OpenStack Compatible Hybrid Cloud logo?)

Thanks
Prakash : Chair InteropWG
Mark : Vice Chair InteropWG
-------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Tue Jun 9 07:19:46 2020 From: aj at suse.com (Andreas Jaeger) Date: Tue, 9 Jun 2020 09:19:46 +0200 Subject: [docs] OpenStack Glossary In-Reply-To: <20200608165328.jh3r7a5axaxyggrs@yuggoth.org> References: <1591630921.362613753@apps.rackspace.com> <20200608165328.jh3r7a5axaxyggrs@yuggoth.org> Message-ID: <56e0c20c-e996-c382-3f64-7bb59bc8a0e0@suse.com> On 08/06/2020 18.53, Jeremy Stanley wrote: > On 2020-06-08 11:42:01 -0400 (-0400), helena at openstack.org wrote: >> We are on a mission to create a glossary of OpenStack related >> terms and want your help! > [...] > > Is that effort intended to replace the current > https://docs.openstack.org/glossary/ or are you coming up with new > entries to add to the existing one, or does it serve a separate > purpose entirely? Let me add to this: Updates for docs.o.o/glossary are welcome, please send patches to openstack-manuals! Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr.
5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From skaplons at redhat.com Tue Jun 9 07:54:41 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 9 Jun 2020 09:54:41 +0200 Subject: [neutron] CI meeting on Wednesday 10.06.2020 cancelled Message-ID: <20200609075441.2pmsqde4ewlyxslw@skaplons-mac> Hi, I need to cancel tomorrow's CI meeting due to a conflicting internal meeting. I will check our CI status and will ping You on IRC if there is any need. See You at the meeting next week. -- Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Tue Jun 9 07:56:25 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 9 Jun 2020 09:56:25 +0200 Subject: [neutron] Drivers meeting on Friday 12.06.2020 cancelled Message-ID: <20200609075625.cnrqkw2sfehxqyll@skaplons-mac> Hi, Due to my day off this Friday I will not be able to attend and chair our drivers meeting. I know that at least 2 other members of the drivers team will also not be available this week, so let's cancel this meeting and get back to it next week. Have a great week and see You all next week :) -- Slawek Kaplonski Senior software engineer Red Hat From thierry at openstack.org Tue Jun 9 10:00:14 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 9 Jun 2020 12:00:14 +0200 Subject: [all][release] One following-cycle release model to bind them all Message-ID: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> Hi everyone, As you know[1] I'm trying to push toward simplification of OpenStack processes, to make them easier to navigate for new members of our community and generally remove weight. A good example of that is release models. We used to have a single model (with milestones and RCs) but over time we grew a number of alternative models to accommodate corner cases.
The result is a confusing collection of release models with abstract rules for each and not much flexibility. Projects are forced to choose between those models for their deliverables, with limited guidance. And much of the rationale for those models (exercise release machinery early and often, trigger external testing...) is no longer valid. I'd like to suggest we simplify this and have a single model for things that follow the development cycle: the "follows-cycle" model. The only alternative, its nemesis, its Wario would be the "independent" release model. In the "follows-cycle" model, deliverables would be released at least once per cycle, but could be released more often. The "final" release would be marked by creating a release (stable) branch, and that would need to be done before a deadline. Like today, that deadline depends on whether that deliverable is a library, a client library, a release-trailing exception or just a regular part of the common release. The main change this proposal introduces would be to stop having release candidates at the end of the cycle. Instead we would produce a release, which would be a candidate for inclusion in the coordinated OpenStack release. New releases could be pushed to the release branch to include late bugfixes or translation updates, until final release date. So instead of doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets promoted to 14.0.0, we would produce a 14.0.0, then a 14.0.1 and just list that 14.0.1 in the release page at coordinated release time. I feel like this would not change that much for deliverables following the cycle-with-rc model. It would not change anything for cycle-with-intermediary, libraries or cycle-trailing deliverables. But it would simplify our processes quite a bit, and generally make our releases more consistent. Thoughts? 
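To make the flow proposed above concrete, here is a toy sketch (not release-team tooling, just an illustration) of how the coordinated release would simply pick the latest plain release already tagged on the stable branch, instead of promoting an rc tag:

```python
# Toy illustration of the proposed "follows-cycle" model: the coordinated
# release is whichever normal release was tagged last on the stable branch
# before the deadline, rather than a promoted 14.0.0.0rcN tag.

def pick_coordinated_release(tags):
    """Return the highest semver-style tag, e.g. '14.0.1' from
    ['14.0.0', '14.0.1']. Assumes plain X.Y.Z tags only."""
    def key(tag):
        return tuple(int(part) for part in tag.split("."))
    return max(tags, key=key)

# Old model: 14.0.0.0rc1 -> 14.0.0.0rc2 -> promoted to 14.0.0.
# Proposed model: plain releases only; the release page just lists the
# latest one at coordinated release time.
print(pick_coordinated_release(["14.0.0", "14.0.1"]))  # -> 14.0.1
```

Numeric comparison (not string comparison) matters here, so that a hypothetical 14.0.10 would sort after 14.0.2.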
[1] http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013236.html -- Thierry Carrez (ttx) From neil at tigera.io Tue Jun 9 10:57:30 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 9 Jun 2020 11:57:30 +0100 Subject: [ubuntu] Known that qemu 4.2 is incompatible with GCP instances? Message-ID: I run tests with GCP instances as the OpenStack hypervisors. Obviously it's better if those can use libvirt_type kvm, i.e. nested virtualization, and this has been possible prior to my current Ussuri upgrade work. With Ussuri on Ubuntu, IIUC, we get qemu 4.2 from cloud-archive:ussuri, but qemu 4.2 has a bug that was fixed by this commit prior to 5.0.0: https://github.com/qemu/qemu/commit/4a910e1f6ab4155ec8b24c49b2585cc486916985 target/i386: do not set unsupported VMX secondary execution controls Commit 048c951 ("target/i386: work around KVM_GET_MSRS bug for secondary execution controls") added a workaround for KVM pre-dating commit 6defc591846d ("KVM: nVMX: include conditional controls in /dev/kvm KVM_GET_MSRS") which wasn't setting certain available controls. The workaround uses generic CPUID feature bits to set missing VMX controls. [...] The bug manifests on a GCP instance with nested virtualization enabled [1], because such a GCP instance doesn't support MSR features. The OpenStack-level symptom is that a VM can't be scheduled onto that GCP instance. Is this a well-known problem? For CentOS/RHEL, [2] looks similar and maybe fixed, but it's difficult to be sure. Best wishes, Neil [1] https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances [2] https://bugzilla.redhat.com/show_bug.cgi?id=1722360 -------------- next part -------------- An HTML attachment was scrubbed... 
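When debugging the class of nested-virtualization problem described in the thread above, a useful first step is confirming that the hypervisor host exposes hardware virtualization flags at all. A minimal sketch (standard Linux /proc/cpuinfo parsing; vmx/svm are the usual Intel/AMD flag names, and this says nothing about the qemu bug itself):

```python
# Sketch: check whether a host's CPU flags include hardware
# virtualization support (vmx on Intel, svm on AMD) -- a quick first
# check when nova can't schedule VMs with libvirt_type kvm.
import os

def has_hw_virt(cpuinfo_text):
    """Return True if any 'flags' line lists vmx or svm."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print("hardware virt flags present:", has_hw_virt(f.read()))
```

If the flags are absent the host cannot do KVM at all; if they are present but scheduling still fails, the problem is further up the stack, as in the MSR issue discussed here.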
URL: From masayuki.igawa at gmail.com Tue Jun 9 10:58:22 2020 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Tue, 09 Jun 2020 19:58:22 +0900 Subject: [qa] Virtual PTG Wrap-up and office hour was changed to Tue 1300 UTC Message-ID: <181d591b-7ff0-42d6-801a-06b66f1066bd@www.fastmail.com> Hi, Thank you for attending our sessions! We discussed many things during the virtual PTG. The topics are listed below. And, as we discussed, we moved our office hour 30 minutes earlier than the current time. So, the office hour will start in 2 hours today.
=========
Topics:
Mon: @ bexar
13:00-13:30 Ussuri Retrospective (gmann)
13:30-14:00 Make tempest scenario manager a stable interface (kopecmartin/soniya)
14:00-14:30 tempest cleanup (kopecmartin)
14:30-15:00 Gates optimization by a better test scheduling (kopecmartin)
Tue: @grizzly
13:00-13:30 Description for testcases as docstrings (kopecmartin)
13:30-14:00 Feature Freeze idea for new tests in Tempest and Patrole and other QA projects (gmann)
14:00-14:15 How to handle the tox.ini constraint for each Tempest new tag (gmann)
14:15-14:30 Migrating hacking checks from other projects to hacking itself (paras333)
14:30-15:00 Victoria Priority & Planning (30min) (masayukig) https://etherpad.opendev.org/p/qa-victoria-priority
Etherpad: https://etherpad.opendev.org/p/qa-victoria-ptg
You can find the recording video files on the etherpad. However, Monday's recording disappeared somewhere in the cloud.. :(
=========
-- Masayuki Igawa From dtantsur at redhat.com Tue Jun 9 11:06:15 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 9 Jun 2020 13:06:15 +0200 Subject: [ironic] advanced partitioning discussion Message-ID: Hi folks, As a follow-up to the PTG discussion, I'd like to schedule a call about advanced partitioning in ironic. Please vote for the date and time next week: https://doodle.com/poll/5yg93gv7casu3ate Dmitry -------------- next part -------------- An HTML attachment was scrubbed...
URL: From renat.akhmerov at gmail.com Tue Jun 9 11:32:04 2020 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 9 Jun 2020 18:32:04 +0700 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> Message-ID: <024635ca-e389-4efe-acbc-5ab0f5ebf40d@Spark> Hi, On 9 Jun 2020, 17:01 +0700, Thierry Carrez , wrote: > instead of doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets promoted > to 14.0.0, we would produce a 14.0.0, then a 14.0.1 and just list that > 14.0.1 in the release page at coordinated release time. I like this part because to me it feels like those RCs nowadays are often not necessary to do from development perspective but still "have to" because it’s just such a release process. On the other hand, RCs are useful when the project is really being actively developed (new features, refactoring etc) because it helps to get something pretending to be the final release but there’s still a chance to update it if testing helped find some issues. For what it’s worth, I think having less versions is better, especially if all artificial (procedural if you will) ones are gone. Less confusing for users and new contributors. Such confusions really happen once in a while in practice. Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Tue Jun 9 12:08:16 2020 From: james.page at canonical.com (James Page) Date: Tue, 9 Jun 2020 13:08:16 +0100 Subject: [ubuntu] Known that qemu 4.2 is incompatible with GCP instances? In-Reply-To: References: Message-ID: Hi Neil On Tue, Jun 9, 2020 at 11:59 AM Neil Jerram wrote: > I run tests with GCP instances as the OpenStack hypervisors. Obviously > it's better if those can use libvirt_type kvm, i.e. nested virtualization, > and this has been possible prior to my current Ussuri upgrade work. 
> > With Ussuri on Ubuntu, IIUC, we get qemu 4.2 from cloud-archive:ussuri, > but qemu 4.2 has a bug that was fixed by this commit prior to 5.0.0: > https://github.com/qemu/qemu/commit/4a910e1f6ab4155ec8b24c49b2585cc486916985 > > target/i386: do not set unsupported VMX secondary execution controls > > Commit 048c951 ("target/i386: work around KVM_GET_MSRS bug for > secondary execution controls") added a workaround for KVM pre-dating > commit 6defc591846d ("KVM: nVMX: include conditional controls in > /dev/kvm > KVM_GET_MSRS") which wasn't setting certain available controls. The > workaround uses generic CPUID feature bits to set missing VMX > controls. [...] > > The bug manifests on a GCP instance with nested virtualization enabled > [1], because such a GCP instance doesn't support MSR features. The > OpenStack-level symptom is that a VM can't be scheduled onto that GCP > instance. > > Is this a well-known problem? For CentOS/RHEL, [2] looks similar and > maybe fixed, but it's difficult to be sure. > I could not find an existing bug in Ubuntu describing these symptoms - any chance you can report a bug here: https://bugs.launchpad.net/ubuntu/+source/qemu/+filebug Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Tue Jun 9 13:04:12 2020 From: james.page at canonical.com (James Page) Date: Tue, 9 Jun 2020 14:04:12 +0100 Subject: [ubuntu] Known that qemu 4.2 is incompatible with GCP instances? In-Reply-To: References: Message-ID: On Tue, Jun 9, 2020 at 1:08 PM James Page wrote: > Hi Neil > > > > On Tue, Jun 9, 2020 at 11:59 AM Neil Jerram wrote: > >> I run tests with GCP instances as the OpenStack hypervisors. Obviously >> it's better if those can use libvirt_type kvm, i.e. nested virtualization, >> and this has been possible prior to my current Ussuri upgrade work.
>> >> With Ussuri on Ubuntu, IIUC, we get qemu 4.2 from cloud-archive:ussuri, >> but qemu 4.2 has a bug that was fixed by this commit prior to 5.0.0: >> https://github.com/qemu/qemu/commit/4a910e1f6ab4155ec8b24c49b2585cc486916985 >> >> target/i386: do not set unsupported VMX secondary execution controls >> >> Commit 048c951 ("target/i386: work around KVM_GET_MSRS bug for >> secondary execution controls") added a workaround for KVM pre-dating >> commit 6defc591846d ("KVM: nVMX: include conditional controls in >> /dev/kvm >> KVM_GET_MSRS") which wasn't setting certain available controls. The >> workaround uses generic CPUID feature bits to set missing VMX >> controls. [...] >> >> The bug manifests on a GCP instance with nested virtualization enabled >> [1], because such a GCP instance doesn't support MSR features. The >> OpenStack-level symptom is that a VM can't be scheduled onto that GCP >> instance. >> >> Is this a well-known problem? For CentOS/RHEL, [2] looks similar and >> maybe fixed, but it's difficult to be sure. >> > > I could not an existing bug in Ubuntu describing these symptoms - any > chance you can report a bug here: > > https://bugs.launchpad.net/ubuntu/+source/qemu/+filebug > > Cheers > https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1882774 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Jun 9 13:50:38 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 9 Jun 2020 08:50:38 -0500 Subject: [openstack-helm] No IRC Meeting Today 06/09 Message-ID: Today's weekly IRC meeting will be cancelled since we just recently discussed topics during last week's PTG. We will continue as scheduled next week. -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Tue Jun 9 14:15:54 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 9 Jun 2020 15:15:54 +0100 Subject: [ubuntu] Known that qemu 4.2 is incompatible with GCP instances? 
In-Reply-To: References: Message-ID: On Tue, Jun 9, 2020 at 2:04 PM James Page wrote: > On Tue, Jun 9, 2020 at 1:08 PM James Page > wrote: > >> Hi Neil >> >> >> >> On Tue, Jun 9, 2020 at 11:59 AM Neil Jerram wrote: >> >>> I run tests with GCP instances as the OpenStack hypervisors. Obviously >>> it's better if those can use libvirt_type kvm, i.e. nested virtualization, >>> and this has been possible prior to my current Ussuri upgrade work. >>> >>> With Ussuri on Ubuntu, IIUC, we get qemu 4.2 from cloud-archive:ussuri, >>> but qemu 4.2 has a bug that was fixed by this commit prior to 5.0.0: >>> https://github.com/qemu/qemu/commit/4a910e1f6ab4155ec8b24c49b2585cc486916985 >>> >>> target/i386: do not set unsupported VMX secondary execution controls >>> >>> Commit 048c951 ("target/i386: work around KVM_GET_MSRS bug for >>> secondary execution controls") added a workaround for KVM pre-dating >>> commit 6defc591846d ("KVM: nVMX: include conditional controls in >>> /dev/kvm >>> KVM_GET_MSRS") which wasn't setting certain available controls. The >>> workaround uses generic CPUID feature bits to set missing VMX >>> controls. [...] >>> >>> The bug manifests on a GCP instance with nested virtualization enabled >>> [1], because such a GCP instance doesn't support MSR features. The >>> OpenStack-level symptom is that a VM can't be scheduled onto that GCP >>> instance. >>> >>> Is this a well-known problem? For CentOS/RHEL, [2] looks similar and >>> maybe fixed, but it's difficult to be sure. >>> >> >> I could not an existing bug in Ubuntu describing these symptoms - any >> chance you can report a bug here: >> >> https://bugs.launchpad.net/ubuntu/+source/qemu/+filebug >> >> Cheers >> > > https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1882774 > Many thanks James, that's exactly it. I've just commented on the bug to ask if it would be easy to build packages for Bionic (as well as for Focal). 
Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Tue Jun 9 16:55:10 2020 From: helena at openstack.org (helena at openstack.org) Date: Tue, 9 Jun 2020 12:55:10 -0400 (EDT) Subject: [docs] OpenStack Glossary In-Reply-To: <20200608165328.jh3r7a5axaxyggrs@yuggoth.org> References: <1591630921.362613753@apps.rackspace.com> <20200608165328.jh3r7a5axaxyggrs@yuggoth.org> Message-ID: <1591721710.34282749@apps.rackspace.com> Hi Jeremy, Thank you for sending me the glossary! Yes, we will be using the etherpad to get community feedback and then editing the present glossary accordingly. Cheers, Helena -----Original Message----- From: "Jeremy Stanley" Sent: Monday, June 8, 2020 12:53pm To: openstack-discuss at lists.openstack.org Subject: Re: [docs] OpenStack Glossary On 2020-06-08 11:42:01 -0400 (-0400), helena at openstack.org wrote: > We are on a mission to create a glossary of OpenStack related > terms and want your help! [...] Is that effort intended to replace the current https://docs.openstack.org/glossary/ or are you coming up with new entries to add to the existing one, or does it serve a separate purpose entirely? -- Jeremy Stanley -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Jun 9 17:39:58 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Jun 2020 17:39:58 +0000 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> Message-ID: <20200609173958.c5t3vujgytuihd3q@yuggoth.org> On 2020-06-09 12:00:14 +0200 (+0200), Thierry Carrez wrote: > In the "follows-cycle" model, deliverables would be released at least once > per cycle, but could be released more often. 
The "final" release would be > marked by creating a release (stable) branch, and that would need to be done > before a deadline. Like today, that deadline depends on whether that > deliverable is a library, a client library, a release-trailing exception or > just a regular part of the common release. > > The main change this proposal introduces would be to stop having release > candidates at the end of the cycle. Instead we would produce a release, > which would be a candidate for inclusion in the coordinated OpenStack > release. New releases could be pushed to the release branch to include late > bugfixes or translation updates, until final release date. So instead of > doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets promoted to 14.0.0, we > would produce a 14.0.0, then a 14.0.1 and just list that 14.0.1 in the > release page at coordinated release time. [...] I suppose this will also have the effect that the official release tags will now appear in the history of the master branch. That will be nice. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gagehugo at gmail.com Tue Jun 9 18:13:54 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 9 Jun 2020 13:13:54 -0500 Subject: [openstack-helm] Abandoning old patchsets Message-ID: Hello everyone. >From our discussions during the PTG and past meetings, it was brought up about openstack-helm's large number of old patchsets that are still active and have not been updated in a long time, some are several years old. Over the next few days we will be going through these changes and working to abandon anything that is older than 6-12 months, depending on any context if available. If anyone has a change they are still interested in and would like to pick back up, if it gets abandoned, please feel free to restore it and let us know that you intend to continue working on it. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Jun 9 18:24:30 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 9 Jun 2020 12:24:30 -0600 Subject: [tripleo] Victoria TripleO PTG summary Message-ID: Greetings, Thanks to everyone who attended the OpenStack PTG last week! A special thanks to those who presented their topics and discussed work items with the folks in attendance. As you know, the event was hosted virtually in a video conference and seemed quite busy and packed with great topics and conversations. As the current PTL for TripleO I will do my best here to summarize those conversations and the items others should be made aware of. To review the topics and discussion please follow the links here [1]. The event was recorded, however the OpenStack foundation has not made any of the videos publicly available yet. Monday June 1st: Retrospective: The TripleO project started with a retrospective of the Ussuri cycle. I attempted to use OpenStack's storyboard for the process, but had to revert to an etherpad for usability. Keep trying, Storyboard is getting there. The good news is that the good things outweighed the bad things [2], and the ideas for improvement were focused on making things faster \0/ TripleO Operator Ansible Status: by Alex Schultz Alex gave a nice overview of his hard work throughout Ussuri to make TripleO Operator Ansible a reality. TripleO Operator Ansible is now the official way to execute TripleO commands via ansible. The upstream CI and consultants in the field are all consolidating around the tool. The history of reviews to make this happen can be found at [4]; while Alex completed a lot of the work himself, he also attracted a number of contributors who completed a lot of work. One note that Alex wanted to emphasize was that while TripleO Operators are meant to be executed by customers and consultants, TripleO-Ansible is NOT meant to be exposed or called directly.
Slides are available here [3]. Thank you Alex! The Future of python-tripleoclient: by Rabi Mishra Rabi led a very interesting conversation about the steps the project would have to take to further simplify the stack of projects used in a TripleO deployment. Currently there are a number of layers in client calls to tripleo-operator-ansible, python-tripleoclient, tripleo-ansible playbooks/modules, and the tripleo-common library, which is complex and not an ideal user experience in terms of logs and resolving bugs. Out of the gate Rabi discussed breaking down python-tripleoclient into something more basic and moving more functions to ansible modules. The proposal was to get rid of the CLI or replace it with a very simple one and move all the logic in tripleoclient to ansible playbooks/modules. The top level playbooks would directly map to current cli actions and would live in the tripleo-ansible repo. tripleo-operator-ansible can also change to use those playbooks directly and transparently under the hood. Details from the session can be found here [5]. Thanks Rabi! Ansible Strategies & us: by Alex Schultz Alex was up again to let us know what he's been doing to make Ansible more performant. Ansible offers several different kinds of "strategies" with regards to how tasks are executed across multiple hosts. The strategies are pluggable, and Alex has built a custom strategy currently called "TripleO Free" that can be used across some but not all of TripleO's tasks [7]. The performance enhancement is spectacular, reducing a 30-node deployment from almost 2 hours to under 50 minutes. Well done!!! I'll note the strategy name will be changed to garner more community support, and the performance gains are not as pronounced with fewer nodes. A standard CI-like deployment (4-5 nodes) can expect to see 20 minutes cut off the deployment. Slides of Alex's presentation can be found here [6]. Very well done! Thanks Alex! Mistral has been removed, so what is left to do?
By Kevin Carter Kevin hit us next with what to expect now that mistral has been removed and what steps we need to take next to make the community successful. I’ll note the mistral container is still on the undercloud but inactive, and workflow processing has been converted directly to ansible. There is still some cleanup in tripleo-common and rpm dependencies to prune. A link to the conversation is available [8], and Kevin’s presentation is available [9]. Thank you Kevin! TripleO Operator Pipelines By Emilien Macchi Emilien walked us through what it would take to further consolidate on TripleO Operator Ansible based CI pipelines in TripleO, OSP and at customer sites. Breaking down the full workflow of a deployment and day two operations in CI was reviewed. The goal here is to replace as much CI as possible with TripleO Operator Ansible to have a standardized ansible interface with TripleO in any CI or customer environment. TripleO Operators are shipping in Ussuri and should be backwards compatible with earlier releases. There will be a major push upstream to further integrate TripleO Operator Ansible into every CI job. Notes and comments are published here [10]. Thanks a million! 
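For context on Alex's strategy session above: Ansible strategies are selected per play (or via configuration), so a custom plugin like the one he described would be enabled along these lines. This is only a hedged sketch using Ansible's built-in "free" strategy; the eventual TripleO strategy name, plugin path, and host group were still to be decided, so everything here is illustrative.

```yaml
# Illustrative playbook fragment, not actual TripleO code.
# With "free", each host runs through its tasks as fast as it can
# instead of waiting for every host to finish each task (the default
# "linear" lock-step behaviour), which is where the large-deployment
# speedups come from.
- hosts: overcloud          # assumed group name for the sketch
  strategy: free            # a custom plugin would be named here instead
  tasks:
    - name: example long-running step
      command: /usr/bin/true
```

A custom strategy plugin shipped by a package would additionally be made discoverable via the strategy plugin path in ansible.cfg.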
[1] https://etherpad.opendev.org/p/tripleo-ptg-victoria
[2] https://etherpad.opendev.org/p/tripleo-ptg-retrospective
[3] https://docs.google.com/presentation/d/1Oxs-sflnJd5KIMoY6e_mlfU2zK4QWqWSG7AhJLpqTMo/edit#slide=id.p3
[4] https://etherpad.opendev.org/p/tripleo-operator-ansible
[5] https://etherpad.opendev.org/p/tripleo-future-of-tripleoclient
[6] https://docs.google.com/presentation/d/19mr3HyyYUUGcwbRHsy2-k-F4WNCYCoFvSPzdbCTgw-A/edit?usp=sharing
[7] https://review.opendev.org/#/q/topic:strategy-improvements
[8] https://etherpad.opendev.org/p/tripleo-remove-mistral
[9] https://docs.google.com/presentation/d/1iir7FA6YwBxRoU_SZJQ2H9Enbak4GAfBw73wU3udFkQ/edit#slide=id.g861ce413bb_0_701
[10] https://etherpad.opendev.org/p/tripleo-operator-pipelines

Tuesday June 2nd

CI updates: by Wes Hayutin
In the CI update I mostly covered the new upstream Component Pipeline. The component pipeline has three major goals: the first is to enable us to release at any time with working components of OpenStack, the second to break down a large problem into smaller problems, and the third to reduce the time to debug and fix. The presentation covers monolithic vs. component builds, the workflow, and the testing and monitoring of the pipeline. The presentation is available here [11]. I also noted that the upstream CI executed 268,805 deployments of TripleO in the Ussuri cycle. Third party CI executed 132,853 deployments. Not too shabby. Details are here [12]. Thanks easter bunny!

IPv6 and DCN (routed-networks) in upstream CI by Harald Jensas
Harald kicked off the next topic about utilizing more advanced networking with OVB in our third party TripleO CI. The proposal is to update the CI with multiple network segments. Harald has been the primary maintainer of OVB (OpenStack Virtual Baremetal) and looks to be wrapping up this feature [13]. Documentation for the feature can be found here [14], and notes of the discussion are posted [15]. Thank you Harald!
Enable network isolation by default by Harald Jensas
Harald continued the networking discussion by highlighting common mistakes made in the field with network isolation settings and TripleO. When customers or consultants accidentally forget to include network-isolation settings, Heat can be destructive to the production environment and delete networks during an update or upgrade. The discussion led to merged patches that already solve the issue, but catching it earlier in the process was still a concern and led to discussion around additional validations. The goal was also shifted to make network-isolation more approachable for our customers. A spec will be written to improve the customer experience here. Notes can be found [16]. Thanks Harald!

Future deprecation of tripleo-validations by Cedric Jeanneret
Cedric led us through the current status and future of TripleO Validations. The deprecated versions of the validations do not have clear ownership, and testing each validation has proved to be difficult. Enter the solution: the validation team's new framework, where the validation service is clearly delineated from the validations themselves. We discussed ownership, packaging, and CI; the entire workflow of packaging and testing now has clear ownership and fits very neatly with the component pipeline. Please read through the details of the discussion, as this will impact several projects with a clean, exciting way to validate each service in OpenStack [17]. Thank you Cedric!!

Container Image Build v2 by Emilien Macchi and Kevin Carter
Kevin and Emilien have been putting in a lot of extra hours revamping the container build system for TripleO and have produced a much improved system in record time. TripleO will benefit from smaller containers, faster builds, and the flexibility to handle upstream and downstream builds easily. If you haven't seen the presentation please do have a look [18]. Notes on the topic can be found here [19].
Get involved if you can keep up. Thanks Emilien, Kevin!

Ceph Integration w/ cephadm by Francesco Pantano, Giulio Fidente, John Fulton
The storage trio walked us through details of cephadm and the ramifications of replacing ceph-ansible. We discussed a wide range of topics here, including what features should be built into TripleO vs. handled directly by cephadm, like scale up/down, updates, and upgrades. The team walked us through how they dissected the deployment and injected cephadm as a proof of concept; everything works well, and it proved we're in good hands on the storage front. There is a lot of detail in the notes, so please have a read through here [20]. Thank you Francesco, Giulio (aka Bob Dylan), John!

Removal of Heat and Swift from Undercloud by Rabi Mishra
Rabi continued from his earlier topic regarding the noble effort to further simplify the OpenStack deployment by removing Heat and Swift. Rabi articulately described how Heat is currently used in the latest release and what would have to be done to remove both Heat and Swift, walking us through Heat resources, extra config, IPAM, etc., and how each could potentially be replaced. I personally really enjoyed hearing this particular topic and how we can move forward making OpenStack less complex. This is an important topic and you should review Rabi's clear strategy here [21]. Thanks Rabi!

Database migrations - can we make them more friendly, or can we do them a better way? By Jesse Pretorius
Jesse led us next and spoke to the hard and complex problem of database migrations across OpenStack projects. This session was more of a brainstorming exercise in discovering creative solutions to complex problems. Unfortunately the group felt the problem came down to the governance of OpenStack itself in more uniformly enforcing migration details. It was concluded that getting all the projects to agree on a standard for migrations would be quite the uphill climb. Notes on the subject can be found here [22]. Thanks Jesse!
[11] https://drive.google.com/file/d/1rAohZ01BDFGBOjI3kS9jy-n-P1weQVIX/view
[12] https://etherpad.opendev.org/p/tripleo-ptg-victoria-ci-updates
[13] https://review.opendev.org/#/q/topic:ipv6-support+(status:open+OR+status:merged)+project:openstack/openstack-virtual-baremetal
[14] https://openstack-virtual-baremetal.readthedocs.io/en/latest/deploy/quintupleo.html#quintupleo-and-routed-networks
[15] https://etherpad.opendev.org/p/tripleo-ipv6-and-routed-networks-in-upstream-ci
[16] https://etherpad.opendev.org/p/tripleo-enable-net-iso-by-default
[17] https://etherpad.opendev.org/p/tripleo-validations-future
[18] https://docs.google.com/presentation/d/1l2RzL-hJ-fT9jzi2A7s8ladEumOvy8tDy7DP0XqnqNE/edit#slide=id.p3
[19] https://etherpad.opendev.org/p/tripleo-container-image-tooling-victoria
[20] https://etherpad.opendev.org/p/tripleo-ceph
[21] https://etherpad.opendev.org/p/tripleo-heat-swift-removal-undercloud
[22] https://etherpad.opendev.org/p/tripleo-ptg-victoria-better-db-sync

Wednesday June 3rd

Speeding up deployments, updates and upgrades by Jesse Pretorius
Jesse had quite a few suggestions and proposals on how we may speed up updates and upgrades. Some of the highlights were building on top of Alex's Ansible strategy improvements, avoiding skipped tasks, avoiding unnecessary reboots, etc. A lot was discussed; please refer to the etherpad for details [23]. Thank you Jesse!

Running validations from within a container by Cedric Jeanneret
Cedric continued to walk us through the very near future of validations. Cedric wanted to discuss the delivery of validations and the implications of using a container to host the validations. Older non-containerized versions of TripleO, using an Ansible collection, and using a container were all discussed. There are multiple use cases for validations, leading the group to not consolidate on a single delivery mechanism. More discussion was needed at the end of this topic. Notes are here [24]. Thanks Cedric!
Auto --limit scale-up of Compute nodes by Luke Short
Luke walked us through his auto scale-up spec [25] in the following presentation [26]. Essentially this is customizing the scale-up process to better match the Ansible configuration for forked processes, to make the scale-up quicker. Luke was able to perform a 10 node scale-up in 20 minutes. Nice presentation Luke!

TripleO usability enhancements: by Wes Hayutin
Initially there were not a lot of suggestions in this topic prior to the PTG; however, once we got started with usability improvements they started to roll in. Definitely check the etherpad [27], but I'll list some here.
- Fix "FAILED - RETRYING in stack status" - fixed already in https://review.opendev.org/#/c/725665/
- Network-Isolation user experience - linked back to Harald's topic
- Chem raised a number of update / upgrade improvements on line 25 [27]
- Eliminate the need for customers to remove deprecated services from roles_data in upgrades
- Improved logging
- Block commands that require previous actions
- Prompt the user prior to dangerous actions
- Keep simplifying, e.g. Rabi's proposals.
Check the etherpad for more details [27]

Improvements in TLS Everywhere / CI, presented by Ronelle Landy, Ade Lee
At the last PTG in Shanghai, Ade and I proposed a job that would set up IPA and a Standalone deployment of TripleO and configure them to work together upstream to check and GATE changes to TLS. I'm happy to report that Ade and Ronelle got the job done!! The presentation is available here [29]. This should go a very long way to help prevent TLS-related bugs across upstream and internal testing. Thanks Ade, Thanks Ronelle!
[23] https://etherpad.opendev.org/p/tripleo-ptg-victoria-speedups
[24] https://etherpad.opendev.org/p/tripleo-validation-container
[25] https://review.opendev.org/#/c/727768/1/specs/victoria/auto_scale_up_compute_nodes.rst
[26] https://drive.google.com/file/d/1c2D67QZ5UGBYRONogZJKMief5wJxQAIt/view
[27] https://etherpad.opendev.org/p/tripleo-usability-enhancements
[28] https://etherpad.opendev.org/p/tripleo-tls-everywhere-ci
[29] https://docs.google.com/presentation/d/1gruDzHIjZtPtUUYSRGIrJynTOt9bZP3W7TLfd80H9ws/edit#slide=id.p

Thursday June 4th

Config-download 2.0 by Luke Short
Luke walked us through a proposal to build on config-download and what steps could be taken to further simplify the deployment using Ansible - using idempotent tasks, static playbooks, etc. The source of truth with regard to customer environments was a tough nut to crack here, and it was difficult to see a very clear and backwards-compatible method to approach this. Notes are available here [30].

Ansible logging within tripleoclient by Cedric Jeanneret
Cedric proposed improvements to TripleO logging here [31]. Cedric walked the group through the spec. Logging is certainly an area we all want to see improvement on, and we all have opinions about it, so this was a lively conversation. Cedric pointed out there are two main types of steps we need to go after: any Ansible task and any TripleO CLI command called need to be logged together in a human-readable way. Very good points were made, and the conversation will continue in the TripleO spec [31]. Thanks Cedric!

Transitioning the underlying CentOS/RHEL - how can we improve this process? By Jesse
Jesse spoke to the challenges facing upgrades with regard to the mix of host RHEL versions in a deployment. Several finer points were made about nuances of the upgrade with regard to HA, Pacemaker, libvirt, and Ironic versions. It was a good troubleshooting session and a thoughtful dialogue with the group. Read the details here [33]. Thanks Jesse!
VxFlexOS integration within TripleO by Jean Pierre Roquesalane/Rajini Karthik
The TripleO team answered questions and walked the Dell team through the best practices for integrating a 3rd party service with TripleO. Details are here [34].

TripleO CI - audit coverage for neutron, ovn, ovs, octavia by Brent Eagles, Slawek Kaplonski, Wes Hayutin
This session was about reemphasizing the importance of the network scenarios and workflows that are critical to TripleO's success in the field. Over the past year or so Brent and others have done a great job in adding additional upstream coverage, but it's now time to make sure this is everyone's job as well. We discussed the challenges with upstream jobs, the neutron project, and TripleO. We also spec'd out some ideas on where to build on top of Brent and Slawek's successes to reach greater upstream coverage of critical network features and workflows. Notes are available [35]. Really appreciate Brent and Slawek making time to attend, thank you!!

tripleo-validations package future by Cedric Jeanneret
Last but not least, Cedric led a discussion on packaging for validations and how we can align responsibility for validations with RPMs, git repos, and CI. We spoke to how the packaging is related to the component pipeline. Designing this carefully will let the validations team and other projects work together and independently with clear lines. Good stuff here; details at [36]. Thanks Cedric!

openstack tripleo deploy (standalone) for multinode by James Slagle
Please read through the blueprint and specs proposed by James with regard to utilizing the standalone deployment for multinode overcloud deployments.
[37] Thanks James!

[30] https://etherpad.opendev.org/p/tripleo-ptg-victoria-config-download-two
[31] https://review.opendev.org/#/c/733652/1/specs/victoria/ansible-logging-tripleoclient.rst
[32] https://etherpad.opendev.org/p/tripleo-ptg-victoria-ansible-logging
[33] https://etherpad.opendev.org/p/tripleo-ptg-victoria-distro-transition
[34] https://etherpad.opendev.org/p/tripleo-ptg-victororia-VxFlexOS
[35] https://etherpad.opendev.org/p/tripleo-network-coverage-audit
[36] https://etherpad.opendev.org/p/tripleo-validations-future
[37] https://etherpad.opendev.org/p/tripleo-ptg-tripleo-deploy

Did you make it? A special thank you, sincerely (I know I never sound sincere), to both Emilien and Alex for helping me with my PTL responsibilities throughout the cycle!! See you next PTG :)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nate.johnston at redhat.com Tue Jun 9 19:08:02 2020
From: nate.johnston at redhat.com (Nate Johnston)
Date: Tue, 9 Jun 2020 15:08:02 -0400
Subject: [tc][neutron] Victoria/Virtual PTG Summary
Message-ID: <20200609190802.c6j7ehvafthpulwf@firewall>

As I sit down to describe the highlights of the virtual OpenDev PTG of June 2020, the first thing I have to say is that virtual participation is a two-edged sword. On one hand, the spontaneity of the hallway track and the ability to quickly drop in to another group's meeting were largely lost. On the other hand, I think that there were some people in the Neutron room that would not have been able to make it if the event had been held in Vancouver. Once the pandemic is in the rear view mirror, I hope that we can resume in-person events. But I think that there is potential value in interspersing virtual events with the in-person variety, to reach out to students, small companies, and travel-averse companies or individuals. I'd also like to say that the Foundation staff did a great job with running an event that so completely broke from precedent.
The choice of two videoconferencing platforms, with the ability of teams to select the one that matched their needs best, was foresighted, since certain areas had access issues with certain platforms.

--- Technical Committee ---

I felt that the TC sessions at this PTG were very productive and valuable. In particular, we used the 'raise hands' feature in Zoom to ensure an orderly exchange of views that gave everyone a chance to speak, and I felt that went very well. A few of the highlights of what the TC discussed and decided are:

1.) The TC decided to draft a policy change to provide a model for a project without a PTL. If a project was interested in changing to a PTL-less model it could opt in to such a model, but for projects that do not opt in nothing would change. When imagining how a project could function without a PTL, we deconstructed the role into its component parts, including: interfacing with the release, infrastructure, and security teams; doing the necessary prep work for PTG and summit events; acting as a public contact point for the project; acting as the bug resolver of last resort. Of these, the only ones that are essential from a TC perspective are the release liaison and the infrastructure liaison (for CI issues); the VMT team already has a list of subject matter experts they consult for vulnerabilities. An events person may be called for on a per-event basis - and if no one steps forward for a project, then that project wouldn't be represented at that event. There's definitely more responsibility diffused within the project team in this model, without a "the buck stops here" PTL position. And not every question is answered - for example, who looks for potential cores to mentor and bring up to speed while still ensuring diversity in the core group over time? Look forward to a governance proposal in the coming month or so about this topic, and hopefully a lively discussion that will help us sort out the rough edges!

2.)
There has been an increasing consensus that the division between the Technical Committee and the User Committee of OpenStack is an artificial one, a relic of the rapid-expansion phase of the OpenStack universe and not in keeping with the current OpenStack environment. In particular, with more operators contributing back and interested in guiding the technical evolution of OpenStack, the dichotomy between developers and operators/users seems false and sends the wrong message. To that end, the TC has formally proposed that the TC and UC should merge into a single body. Because changes to the Foundation bylaws are costly and slow, we would like to execute this merger without amending them. Since the TC is specified in greater detail than the UC in the bylaws, the merger would be accomplished by the UC becoming a subset of the TC. This change was anticipated by the higher rate of operators being nominated and elected to the TC this past election, and the proposed governance change [1] completes a merger that has been a long time in coming.

3.) There has been some feedback from operators that the unfinished migration from project-specific clients to the unified OpenStack client causes uncertainty and difficulty for both operators and users. Some have even created their own client that hides the difference between the unified client and the remaining project-specific clients. This is obviously a bad experience and something we can improve. Given this, the TC will place a greater emphasis on preparing the community to conclude the migration to OSC.

4.) The community goals for the V cycle have been selected: to conclude the migration to Zuul v3, and to migrate the base Ubuntu image used in the upstream gates from Bionic 18.04 to Focal 20.04 [2].

5.) In the U cycle the TC created the 'ideas' repository [3], which is a place for people to post ideas that they think would be great for the OpenStack community.
We did this after reflecting that there are a lot of great ideas discussed on the mailing list that are perhaps a bit ahead of their time, and then get lost in the mailing list archives. Rather than forgetting about an idea, it could be posted to the ideas repo and then brought back when the time is right. Rather than shifting the graveyard for ideas from the mailing list to this repo, the TC or a representative could review the tabulated ideas with an eye to what is ready for revival and discussion in a PTG or forum session.

--- Neutron ---

The Neutron community made good use of the Jitsi/Meetpad platform after a few opening hiccups. It handled the large number of people in the room well for the most part, and the etherpad integration was interesting. I think it was great how the Neutron team worked well with the Keystone, Nova, Cyborg, and Edge SIG teams and had great collaboration on each front.

The themes in the Neutron community were the same as they have been before: stadium projects are increasingly being run by the Neutron core team; those that are not are entering hibernation, with networking-midonet currently most at risk. The community slowly pushes forward with Engine Facade and is looking to fully deprecate python-neutronclient. We acknowledged that the neutron-lib migration has stalled and decided to make the process easier with debtcollector deprecations. The biggest thing for me at the summit was the decision to move forward with ML2/OVN as the default back end for Neutron, including in devstack. This will be proposed to the broader community, and the Neutron team will work with everyone to make sure this is a success.

In conclusion, I am as always grateful for the tremendous worldwide OpenStack community. I am proud of how much was accomplished, and I think this PTG sets the stage for a great Victoria cycle.
Thanks,
Nate

[1] https://review.opendev.org/#/c/734074/
[2] Waiting on https://review.opendev.org/#/c/731213/ to be merged
[3] https://governance.openstack.org/ideas/

From gmann at ghanshyammann.com Tue Jun 9 20:24:10 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 09 Jun 2020 15:24:10 -0500
Subject: [tc][interop] Interop repos renaming from openstack/ to osf/ namespace
Message-ID: <1729ac12c14.ea04659360747.4656504251181015679@ghanshyammann.com>

Hello Everyone,

With the OpenDev model, the TC merged a resolution keeping only OpenStack governance-owned projects in the 'openstack/' namespace [1]. Migration of other non-OpenStack-owned projects/repos has been done, but the Interop repos [2] are still under the openstack/ namespace, which is confusing.

After discussing it in today's interop meeting, it was agreed to move all the interop repos from the 'openstack/' to the 'osf/' namespace. I have proposed the changes to infra - https://review.opendev.org/#/c/734669/

Redirects will be provided for the old URLs, but a few of the below things need to be taken care of by the interop team:

- Zuul jobs update to use the new URL. I have proposed the changes which I found in my grep, but if any job is missed, please update those with 734669 as Depends-On [3].
- GitHub mirroring (if needed). After discussing it with clarkb, AJaeger, and fungi, this needs to be done by an OSF GitHub org admin. I am leaving that for now; something can be done once renaming is completed.
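The Depends-On mechanism mentioned above is Zuul's standard cross-repo dependency footer in a Git commit message. As a hedged sketch of what such a follow-up change could look like (the subject and body here are invented for illustration; only the change URL is the real rename change from this thread):

```
Update Zuul job URLs for the interop repo rename

Point the job definitions at the new osf/ repo locations.
(illustrative commit message)

Depends-On: https://review.opendev.org/#/c/734669/
```

Zuul then tests the change together with the rename change instead of independently.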
[1] https://governance.openstack.org/tc/resolutions/20190322-namespace-unofficial-projects.html [2] https://opendev.org/openstack/interop https://opendev.org/openstack/refstack https://opendev.org/openstack/refstack-client https://opendev.org/openstack/python-tempestconf [3] https://review.opendev.org/#/q/topic:interop-repo-renaming+(status:open+OR+status:merged) -gmann From mark at stackhpc.com Tue Jun 9 20:43:09 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 9 Jun 2020 21:43:09 +0100 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> Message-ID: On Tue, 9 Jun 2020, 11:01 Thierry Carrez, wrote: > Hi everyone, > > As you know[1] I'm trying to push toward simplification of OpenStack > processes, to make them easier to navigate for new members of our > community and generally remove weight. A good example of that is release > models. > > We used to have a single model (with milestones and RCs) but over time > we grew a number of alternative models to accommodate corner cases. The > result is a confusing collection of release models with abstract rules > for each and not much flexibility. Projects are forced to choose between > those models for their deliverables, with limited guidance. And much of > the rationale for those models (exercise release machinery early and > often, trigger external testing...) is no longer valid. > > I'd like to suggest we simplify this and have a single model for things > that follow the development cycle: the "follows-cycle" model. The only > alternative, its nemesis, its Wario would be the "independent" release > model. > > In the "follows-cycle" model, deliverables would be released at least > once per cycle, but could be released more often. The "final" release > would be marked by creating a release (stable) branch, and that would > need to be done before a deadline. 
Like today, that deadline depends on
> whether that deliverable is a library, a client library, a
> release-trailing exception or just a regular part of the common release.
>
> The main change this proposal introduces would be to stop having release
> candidates at the end of the cycle. Instead we would produce a release,
> which would be a candidate for inclusion in the coordinated OpenStack
> release. New releases could be pushed to the release branch to include
> late bugfixes or translation updates, until final release date. So
> instead of doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets promoted
> to 14.0.0, we would produce a 14.0.0, then a 14.0.1 and just list that
> 14.0.1 in the release page at coordinated release time.

One substantial change here is that there will no longer be a period where the stable branch exists but the coordinated release does not. This could be an issue for cycle-trailing projects such as Kolla which sometimes get blocked on external (and internal) factors. Currently we are able to revert master from its temporary stable mode to start development for the next cycle, while we continue stabilising the stable
URL: From gmann at ghanshyammann.com Tue Jun 9 21:28:06 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 09 Jun 2020 16:28:06 -0500 Subject: [all] Migrating upstream CI/CD jobs to Ubuntu Focal (20.04) Message-ID: <1729afbb576.10af708c965501.7833808481567792078@ghanshyammann.com> Hello Everyone, As per the victoria cycle testing runtime, we need to run upstream CI/CD jobs on Ubuntu Focal[1]. For better tracking and speeding up this work, TC is in the process of selecting this as 2nd goal for the Victoria cycle[2]. Plan: ------ * devstack will provide the WIP patch for devstack base job running on focal and all projects will test their jobs using the same[3]. * migrate projects side jobs to focal. This includes testing the jobs and fixing the failure if any or migrate jobs to run on focal nodeset if not automatically migrated by devstack job. * Once all projects are running successfully on focal, then merge the devstack base job. * Deadline: I am planning to finish this by m-2 (July 31st) but anytime before is always better. Current progress: ---------------------- * devstack base jobs migration on focal which can be used by project side testing is in progress. frickler patch failing with one nova test[4]. * As we have community goal to migrate all legacy jobs to zuulv3 native[5], migration to focal will only cover the devstack based or zuulv3 native jobs not the legacy jobs. All the legacy jobs will automatically be moved on focal distro during their migration to zuulv3 native. If anything failing and cannot be fixed immediately, then those legacy jobs can temporarily override the nodeset to bionic. * selecting this as community goal is also is in progress. 
[1] https://governance.openstack.org/tc/reference/runtimes/victoria.html
[2] https://review.opendev.org/#/c/731213/
[3] https://review.opendev.org/#/c/731207/4
[4] https://review.opendev.org/#/c/734029/1
[5] https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html

-gmann

From Arkady.Kanevsky at dell.com Tue Jun 9 21:56:28 2020
From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com)
Date: Tue, 9 Jun 2020 21:56:28 +0000
Subject: [all][release] One following-cycle release model to bind them all
In-Reply-To: 
References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org>
Message-ID: 

Do we have a list of projects that use an alternative release model? And do these projects follow the same alternative release model among them?

From: Mark Goddard
Sent: Tuesday, June 9, 2020 3:43 PM
To: Thierry Carrez
Cc: openstack-discuss
Subject: Re: [all][release] One following-cycle release model to bind them all

[EXTERNAL EMAIL]

On Tue, 9 Jun 2020, 11:01 Thierry Carrez, > wrote:

Hi everyone,

As you know[1] I'm trying to push toward simplification of OpenStack processes, to make them easier to navigate for new members of our community and generally remove weight. A good example of that is release models.

We used to have a single model (with milestones and RCs) but over time we grew a number of alternative models to accommodate corner cases. The result is a confusing collection of release models with abstract rules for each and not much flexibility. Projects are forced to choose between those models for their deliverables, with limited guidance. And much of the rationale for those models (exercise release machinery early and often, trigger external testing...) is no longer valid.

I'd like to suggest we simplify this and have a single model for things that follow the development cycle: the "follows-cycle" model. The only alternative, its nemesis, its Wario would be the "independent" release
In the "follows-cycle" model, deliverables would be released at least once per cycle, but could be released more often. The "final" release would be marked by creating a release (stable) branch, and that would need to be done before a deadline. Like today, that deadline depends on whether that deliverable is a library, a client library, a release-trailing exception or just a regular part of the common release. The main change this proposal introduces would be to stop having release candidates at the end of the cycle. Instead we would produce a release, which would be a candidate for inclusion in the coordinated OpenStack release. New releases could be pushed to the release branch to include late bugfixes or translation updates, until final release date. So instead of doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets promoted to 14.0.0, we would produce a 14.0.0, then a 14.0.1 and just list that 14.0.1 in the release page at coordinated release time. One substantial change here is that there will no longer be a period where the stable branch exists but the coordinated release does not. This could be an issue for cycle trailing projects such as kolla which sometimes get blocked on external (and internal) factors. Currently we are able to revert master from it's temporary stable mode to start development for the next cycle, while we continue stabilising the stable branch for release. We do intend to be more prompt about our releases, but there is always something that comes up. We may end up having to choose between releasing with known issues vs. halting development for the next cycle. On the other hand, perhaps a little focus would help us to push it over the line faster. I feel like this would not change that much for deliverables following the cycle-with-rc model. It would not change anything for cycle-with-intermediary, libraries or cycle-trailing deliverables. But it would simplify our processes quite a bit, and generally make our releases more consistent. 
Thoughts? [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013236.html -- Thierry Carrez (ttx) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Jun 10 03:04:09 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 9 Jun 2020 23:04:09 -0400 Subject: [cinder] Victoria Virtual PTG Message-ID: <7100e1aa-b461-4aad-f694-cc09c42c76ae@gmail.com> A summary of what happened at the Cinder part of the PTG is available: https://wiki.openstack.org/wiki/CinderVictoriaPTGSummary That page also contains links to the recordings. If you led a discussion, please take a look at your topic to make sure I captured all the important points correctly. cheers, brian From thierry at openstack.org Wed Jun 10 09:02:02 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 10 Jun 2020 11:02:02 +0200 Subject: [largescale-sig] Next meeting: June 10, 8utc In-Reply-To: <9f177eac-aa50-c70c-3734-33d4534fd613@openstack.org> References: <9f177eac-aa50-c70c-3734-33d4534fd613@openstack.org> Message-ID: Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-06-10-08.00.html TODOs: - all to describe briefly how you solved metrics/billing in your deployment on https://etherpad.opendev.org/p/large-scale-sig-documentation - amorin to add some meat to the wiki page before we push the Nova doc patch further - ttx to solve merge conflict and push initial oslo.metrics code in from https://review.opendev.org/#/c/730753/ - all to review https://review.opendev.org/#/c/730753/ Next meeting: Jun 24, 8:00UTC on #openstack-meeting-3 -- Thierry Carrez (ttx) From thierry at openstack.org Wed Jun 10 09:46:35 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 10 Jun 2020 11:46:35 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> Message-ID: 
<8f835cd5-ee55-e570-7f40-ddb118e2e088@openstack.org> Mark Goddard wrote: > [...] > One substantial change here is that there will no longer be a period > where the stable branch exists but the coordinated release does not. > This could be an issue for cycle trailing projects such as kolla which > sometimes get blocked on external (and internal) factors. Currently we > are able to revert master from it's temporary stable mode to start > development for the next cycle, while we continue stabilising the stable > branch for release. Making sure I understand... Currently you are using RC1 to create the stable branch, but it's not really a "release candidate", it's more a starting point for stabilization ? So you can have a broken master branch, tag it RC1 and create stable/ussuri from it, then work on making stable/ussuri releasable while keeping master broken ? If I understand correctly, then it's a fair point: the new model actually makes release candidates real release candidates, so it does not really support having a master branch that never gets close to releasable state. I would argue that this was not really the intent before with RC1 tags, but it certainly made it easier to hide. To support your case more clearly, maybe we could allow creating stable branches from arbitrary commit SHAs. It used to be the case before (when stable branches were created by humans) but when automation took over we enforced that branches need to be created from tags. I'll check with the release team where that requirement came from, and if we can safely relax it. 
-- Thierry Carrez (ttx) From thierry at openstack.org Wed Jun 10 10:03:19 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 10 Jun 2020 12:03:19 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> Message-ID: Arkady.Kanevsky at dell.com wrote: > Do we have a list of projects that use alternative release model? If you mean which service projects are currently using the "independent" model, today that would be only Rally. The other active "independent" deliverables are general-purpose libraries like reno or pbr, or xstatic packages (PyPI packaging of Javascript libraries). > And are these projects follow the same the alternative release model > among them? Could you rephrase that question? -- Thierry Carrez (ttx) From radoslaw.piliszek at gmail.com Wed Jun 10 10:27:40 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=c5=82aw_Piliszek?=) Date: Wed, 10 Jun 2020 12:27:40 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <8f835cd5-ee55-e570-7f40-ddb118e2e088@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <8f835cd5-ee55-e570-7f40-ddb118e2e088@openstack.org> Message-ID: <2e31876b-f99c-0239-6182-87a0060ed28e@gmail.com> On 2020-06-10 11:46, Thierry Carrez wrote: > Mark Goddard wrote: >> [...] >> One substantial change here is that there will no longer be a period >> where the stable branch exists but the coordinated release does not. >> This could be an issue for cycle trailing projects such as kolla which >> sometimes get blocked on external (and internal) factors. Currently we >> are able to revert master from it's temporary stable mode to start >> development for the next cycle, while we continue stabilising the >> stable branch for release. > > Making sure I understand... 
Currently you are using RC1 to create the > stable branch, but it's not really a "release candidate", it's more a > starting point for stabilization ? So you can have a broken master > branch, tag it RC1 and create stable/ussuri from it, then work on making > stable/ussuri releasable while keeping master broken ? > > If I understand correctly, then it's a fair point: the new model > actually makes release candidates real release candidates, so it does > not really support having a master branch that never gets close to > releasable state. I would argue that this was not really the intent > before with RC1 tags, but it certainly made it easier to hide. > > To support your case more clearly, maybe we could allow creating stable > branches from arbitrary commit SHAs. It used to be the case before (when > stable branches were created by humans) but when automation took over we > enforced that branches need to be created from tags. > > I'll check with the release team where that requirement came from, and > if we can safely relax it. That would be great to have. Currently our RC1 is in no-touch state as it is broken by design (TM) and we always need to have RC2. :/ To summarize: nowadays we break our master branch to "release" half-broken RC1 to get a stable branch. Then we revert things on master to unbreak and work on stabilizing for release. 
-yoctozepto From Sathia.Nadarajah.2 at team.telstra.com Wed Jun 10 06:35:49 2020 From: Sathia.Nadarajah.2 at team.telstra.com (Nadarajah, Sathia) Date: Wed, 10 Jun 2020 06:35:49 +0000 Subject: Idempotency for multiple nics Message-ID: Hi All, We are facing the exact issue that is outlined here, “When a server is created with multiple nics which holds security groups, module was applying default SG to the server & remove SG on the nics.” https://github.com/ansible/ansible/pull/58509 https://github.com/ansible/ansible/issues/58495 And we are on the following version of openstacksdk:

(ansible) [root at ansnvlonls01 bin]# source /var/lib/awx/venv/mypy3/bin/activate
(mypy3) [root at ansnvlonls01 bin]# pip show openstacksdk
Name: openstacksdk
Version: 0.46.0
Summary: An SDK for building applications to work with OpenStack
Home-page: https://docs.openstack.org/openstacksdk/
Author: OpenStack
Author-email: openstack-discuss at lists.openstack.org
License: UNKNOWN
Location: /var/lib/awx/venv/mypy3/lib/python3.6/site-packages
Requires: netifaces, iso8601, keystoneauth1, os-service-types, PyYAML, cryptography, six, requestsexceptions, decorator, pbr, jmespath, appdirs, munch, jsonpatch, dogpile.cache

When can we expect to have this fix factored into openstacksdk ? Thanks. Regards, Sathia Nadarajah Security and Enterprise Engineering CH2 Networks and IT, Telstra [Telstra] P   0386946619 M 0437302281 E   Sathia.Nadarajah.2 at team.telstra.com W www.telstra.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 1264 bytes
Desc: image001.png
URL: From mordred at inaugust.com Wed Jun 10 12:43:30 2020 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 10 Jun 2020 07:43:30 -0500 Subject: Idempotency for multiple nics In-Reply-To: References: Message-ID:

> On Jun 10, 2020, at 1:35 AM, Nadarajah, Sathia wrote:
>
> Hi All,
>
> We are facing the exact issue that is outlined here,
>
> “When a server is created with multiple nics which holds security groups, module was applying default SG to the server & remove SG on the nics.”
>
> https://github.com/ansible/ansible/pull/58509
> https://github.com/ansible/ansible/issues/58495
>
> And we are on the following version of openstacksdk
>
> (ansible) [root at ansnvlonls01 bin]# source /var/lib/awx/venv/mypy3/bin/activate
> (mypy3) [root at ansnvlonls01 bin]# pip show openstacksdk
> Name: openstacksdk
> Version: 0.46.0
> Summary: An SDK for building applications to work with OpenStack
> Home-page: https://docs.openstack.org/openstacksdk/
> Author: OpenStack
> Author-email: openstack-discuss at lists.openstack.org
> License: UNKNOWN
> Location: /var/lib/awx/venv/mypy3/lib/python3.6/site-packages
> Requires: netifaces, iso8601, keystoneauth1, os-service-types, PyYAML, cryptography, six, requestsexceptions, decorator, pbr, jmespath, appdirs, munch, jsonpatch, dogpile.cache
>
> When can we expect to have this fix factored into openstacksdk ?
>

It isn’t really an openstacksdk issue, it’s an issue in the ansible modules. I’ve submitted this: https://review.opendev.org/734810 to the ansible-collections-openstack repo.

> Thanks.
>
> Regards,
>
> Sathia Nadarajah
> Security and Enterprise Engineering CH2
> Networks and IT, Telstra
>
> P   0386946619
> M 0437302281
> E   Sathia.Nadarajah.2 at team.telstra.com
> W www.telstra.com

From klemen at psi-net.si Wed Jun 10 13:05:29 2020 From: klemen at psi-net.si (Klemen Pogacnik) Date: Wed, 10 Jun 2020 15:05:29 +0200 Subject: [kolla-ansible] Some proposals based on my work with kolla-ansible Message-ID: Hi! I’ve worked with kolla-ansible for over a year, currently on the Rocky version. I’ve done some work which may be interesting to the community.

1. Proposal - Adding additional functionality to the kolla-ansible playbook

For functionalities which are not officially supported by the kolla-ansible group, but are done by an individual or a group and are interesting for the community. Each additional functionality has its own GIT project. It consists of one or more ansible roles. The structure of the project is very similar to the kolla-ansible project:

addfunct-name
  README.md
  etc/
    kolla/
      globals.yml
      passwords.yml
  ansible/
    inventory/
      all-in-one
      multinode
    group_vars/
      all.yml
    roles/
      addmodule_role1/…
      addmodule_role2/…
    destroy.yml
    kolla-host.yml
    post-deploy.yml
    site.yml
    stop.yml

Configuration parameters of the functionality are put into globals.yml and all.yml. Passwords are put into passwords.yml. Inventory data are added to the all-in-one and multinode files. Role variables, tasks, handlers and templates are added to ansible/roles. Activation of the roles is added to destroy.yml, kolla-host.yml, post-deploy.yml, stop.yml and/or site.yml. Not all steps are compulsory.
Merging an additional module project into kolla-ansible is quite simple with basic shell commands:

cat $ADD_MODULE/etc/kolla/globals.yml >> $KOLLA_ANSIBLE/etc/kolla/globals.yml
cat $ADD_MODULE/etc/kolla/passwords.yml >> $KOLLA_ANSIBLE/etc/kolla/passwords.yml
cat $ADD_MODULE/ansible/group_vars/all.yml >> $KOLLA_ANSIBLE/ansible/group_vars/all.yml
cat $ADD_MODULE/ansible/inventory/all-in-one >> $KOLLA_ANSIBLE/ansible/inventory/all-in-one
cat $ADD_MODULE/ansible/inventory/multinode >> $KOLLA_ANSIBLE/ansible/inventory/multinode
cat $ADD_MODULE/ansible/destroy.yml >> $KOLLA_ANSIBLE/ansible/destroy.yml
cat $ADD_MODULE/ansible/kolla-host.yml >> $KOLLA_ANSIBLE/ansible/kolla-host.yml
cat $ADD_MODULE/ansible/post-deploy.yml >> $KOLLA_ANSIBLE/ansible/post-deploy.yml
cat $ADD_MODULE/ansible/site.yml >> $KOLLA_ANSIBLE/ansible/site.yml
cat $ADD_MODULE/ansible/stop.yml >> $KOLLA_ANSIBLE/ansible/stop.yml
cp -r $ADD_MODULE/ansible/roles/* $KOLLA_ANSIBLE/ansible/roles/

That way the additional roles are executed together with the other kolla-ansible roles on hosts which are specified in a common inventory. Tasks can use all available kolla-ansible variables, tools and ansible modules.

2. Proposal – building a kolla-ansible container image

kolla-ansible can be run in a container. The image is built together with the other kolla containers and is available on Docker Hub. That way it would be easier for somebody to try kolla. It is also possible to build images which have some interesting additional functionalities included.

3. Proposal – adding ceph as an additional functionality described in the previous proposals

Ansible roles are developed for external ceph deployment. Cephadm, kolla-ceph or any other tool can be used in the role’s plays. Does anybody have a suggestion what to use? A special kolla-ansible container image is built with the ceph roles included. The merging is specified in the Dockerfile.
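The concatenation commands in proposal 1 can also be sketched as a small script. This is a minimal illustration only: the file list and the $ADD_MODULE/$KOLLA_ANSIBLE layout are taken from the shell commands above, while the function name and structure are my own, not part of the proposal.

```python
import shutil
from pathlib import Path

# Fragments appended onto the kolla-ansible copies, mirroring the
# `cat ... >> ...` commands above.
APPEND_FILES = [
    "etc/kolla/globals.yml",
    "etc/kolla/passwords.yml",
    "ansible/group_vars/all.yml",
    "ansible/inventory/all-in-one",
    "ansible/inventory/multinode",
    "ansible/destroy.yml",
    "ansible/kolla-host.yml",
    "ansible/post-deploy.yml",
    "ansible/site.yml",
    "ansible/stop.yml",
]

def merge_module(add_module: Path, kolla_ansible: Path) -> None:
    """Append an add-on module's fragments into a kolla-ansible tree and copy its roles."""
    for rel in APPEND_FILES:
        src = add_module / rel
        if not src.exists():        # not all steps are compulsory
            continue
        dst = kolla_ansible / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        with dst.open("a") as out:  # append, like `cat >>`
            out.write(src.read_text())
    roles = add_module / "ansible" / "roles"
    if roles.is_dir():              # roles are copied whole, like `cp -r`
        for role in roles.iterdir():
            shutil.copytree(role, kolla_ansible / "ansible" / "roles" / role.name,
                            dirs_exist_ok=True)
```

Running `merge_module(Path(add_module_dir), Path(kolla_ansible_dir))` produces the same result as the shell commands, with the added safety of skipping fragments the add-on module does not provide.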
I’ve already included three additional functionalities:
- docker_settings (docker_bip network is configurable)
- resource_reservation (resources – cpu cores and memory can be limited for openstack services)
- freeipa_client_install (freeipa client installation – done by my colleague, not finished yet)
They are now all included in the same project and can be viewed on: https://gitlab.com/kemopq/addmodules-kolla-ansible/-/tree/master/ansible I’m building a kolla-ansible container with those functionalities included and running the kolla-ansible playbook in the container. I’m planning to do an external Ceph deployment that way in the next weeks. I’m open to suggestions on how this concept can be improved. Klemen -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Jun 10 13:07:47 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 10 Jun 2020 15:07:47 +0200 Subject: [neutron] Virtual PTG summary Message-ID: <20200610130747.xcipu2r7zz2g5bdh@skaplons-mac>

# Day 1 (Tuesday)

## Retrospective

Among the good things, the team mentioned that the migration of the networking-ovn driver to core neutron went well. Also, our CI stability improved in the last cycle. Another good thing was that we implemented all community goals required in this cycle, and we even migrated almost all jobs to Zuul v3 syntax already. Not so good was the progress on some important Blueprints, like the adoption of the new engine facade. The other thing mentioned here was the activity in the stadium projects and in neutron-lib.

## Support for the new oslo.policy

Akihiro and Miguel volunteered to work on this in Victoria if they have some cycles. It is an important feature from the Keystone team's point of view. The Keystone and Nova teams already have some experience with that transition so they can help us with it. We need to ensure that new policies in neutron can work fine with e.g. old ones defined in the stadium projects.
## Adoption of new enginefacade

We discussed how to finish the adoption of the new enginefacade in Neutron in the Victoria cycle. There are a couple of people who volunteered to help with that. After it is done in Neutron, we need to check what changes are required in the stadium projects too.

## tenant_id/project_id transition

Rodolfo raised the issue that in some places we may still have e.g. only tenant_id and be missing project_id. We should all be aware of that and, whenever we see something like that e.g. in neutron, neutron-lib, tempest or other repos, we should propose a patch to change it.

## Neutronclient -> OSC migration

We found out that the only blocker which prevents us from EOLing python-neutronclient as a CLI tool is the lack of a way to pass custom arguments to the commands. As we know, the OSC/SDK team doesn't have anything against adding support for that in OSC. It's actually already done for e.g. Glance. Slaweq volunteered to work on it this cycle and, if that is done, to define the final EOL date of python-neutronclient for the Victoria+2 cycle.

## RFEs review

We reviewed about 10 old, stagnant RFEs. We decided to close some of them as not relevant anymore. For others we discussed some details and we will continue the discussion on LP and in the drivers meetings.

## Review of stadium projects

We needed to review the list of stadium projects again. Conclusions are:
* neutron-vpnaas, networking-odl, networking-bagpipe, networking-bgpvpn are in good shape and we can definitely keep them in the stadium,
* neutron-fwaas - we will delete all code from this repo now and mark it as a deleted project, see [1] for an example of how this is done,
* neutron-dynamic-routing - send an email to call for maintainers, but don't consider it as deprecated yet in this cycle, maybe in the next one,
* networking-midonet - this one is a problem currently as it isn't well tested with current neutron in the u/s gate.

# Day 2 (Wednesday)

## Meeting with Edge SIG

During this call, project [2] was mentioned.
For now it was moved to the unofficial projects namespace but maybe someone will revive this project if there is a use case for it. We talked about [3] and doing it in the Victoria cycle. After discussions about the same topic with the Nova team on Friday (see below), we decided that, as a first step, Nova will work on changes on their side and those changes will not require anything from Neutron. On our side, Miguel Lavalle tentatively volunteered to work on this too. There was also a question about support of IEEE 1588 (PTP) in Neutron - so far we don't have any support for it. But new RFEs are always welcome :)

## Future of metering agent

The discussion was related to the spec [4]. The general conclusion from the discussion is that we will work on this new L3 agent extension which will collect metering data, but this will not be in any way a replacement for the old metering agent. When this is ready, we can start a discussion about the potential deprecation of the old metering agent and its API, but that's nothing for the current cycle for sure.

## ML2/OVS and OVN feature gaps

The list of gaps between the ML2/OVS and ML2/OVN backends is in [5]. We discussed how to maintain this list and make it smaller. The decision from this discussion is that we will try to remember to add new items to this list every time a new feature is added to Neutron and implemented in ML2/OVS. We will also open LP bugs for such things to keep them tracked on the OVN driver side and to implement them when they are needed by the community.

## OVN driver and agent API

During this discussion we talked about the Neutron agents API and two options in it:
* setting admin to be disabled (admin_state_up = False) - we decided that this is really relevant only for L3 and DHCP agents. It's not really used for OVS or Linuxbridge agents and it also shouldn't have any effect for OVN drivers.
So Kuba will propose an RFE with a description of what our API should return in case of setting this admin_state_up value for agents for which it doesn't make sense,
* deletion of agents - we decided that for ovn agents it should be possible to delete them in the same way as is done now for other agents. This has some downside when a user deletes an agent which is still alive. In such a case the agent will be deleted and immediately shown again in the neutron agent list. That is because of how OVN stores data about those "agents" internally and how the Neutron OVN driver uses this data. This isn't a big issue, however, because it may happen in exactly the same way for example for OVS/DHCP/L3 agents if a user deletes them just before they send a new heartbeat message to the Neutron server.

## Stateless security groups support for OVS firewall (and OVN?)

The conclusion from this discussion is that there is a need and there are use cases (DPDK, telco) to introduce stateless security groups in the openvswitch and ovn firewall drivers. During the discussion we agreed that the implementation shouldn't really be tough. Bernard volunteered to open an LP bug to track it for both of those drivers.

## OVN concept of distributed localport ports used for Metadata agent and DHCP

Maciek explained to us the limitations in the current implementation of segments in Neutron and how they block the OVN driver from supporting it. Basically the problem is that Neutron currently has a limitation that one port can be only on one segment. The OVN driver uses "special" ports (local ports in OVN) to handle metadata and dhcp. And such a port has to have an IP address from all segments which belong to the network. We agreed to introduce a new type of device_owner which will mean that the port is "distributed". Such a port may then be in more than one segment at the same time. Basically this concept is similar to how DVR router ports work currently.
## Switching default backend in Devstack to be OVN

At the end of the day we discussed the possibility to switch Devstack's default backend in Neutron to be OVN instead of the OVS agent. We all agreed that this is a big step forward and that we want to do it as the Neutron team. With this change we want to give a clear signal that we believe that the OVN backend is the future of Neutron and we want to invest more in it. However, we want to highlight that this means *nothing* for real production environments. We are *not* going to deprecate any backends. All existing drivers will be tested as they are tested now in the Neutron gate. Different deployment tools may still use different backends as the default ones. This change will, however, be a pretty big change for all OpenStack projects which use the OVN driver as Neutron's backend in their jobs. Such a change was already done in TripleO in the Stein cycle and it went pretty smoothly, so we are not expecting any serious issues there. The deadline we set for ourselves for this change is the Victoria-2 milestone.

# Day 3 (Thursday)

## Deprecation of use_veth_interconnection in the ovs-agent

It seems after discussion that this config option was introduced a very long time ago and was probably related to something like what is described in [6], but later Open vSwitch introduced patch ports and there was another old LP bug [7] which was saying that the ovs-agent should use patch ports instead of veth pairs. And that solution is still recommended today in Neutron. So we agreed to deprecate this option in the Victoria cycle and remove it completely in the W cycle.

## Loki service plugin

The service plugin called "loki" was developed to be used in CI jobs and to add DB deadlock errors to some random DB queries. But for now it's not used at all in the Neutron CI. During the discussion we decided to add a new periodic Zuul job which will run the neutron-tempest-plugin.api tests with the loki service plugin enabled, to check if Neutron can still properly handle such random DB deadlocks.
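The kind of fault loki injects (a query that randomly raises a deadlock error but succeeds when retried) is typically handled with a retry wrapper around DB operations. A rough, illustrative sketch of that pattern follows; the `DBDeadlock` class here is merely a stand-in for oslo.db's exception and this decorator is not Neutron's actual retry code:

```python
import functools

class DBDeadlock(Exception):
    """Stand-in for the deadlock error a DB backend (or loki) can raise."""

def retry_on_deadlock(max_retries=3):
    """Retry a DB operation when a deadlock is raised, up to max_retries times."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise  # give up after the last retry
        return wrapper
    return decorator

# A loki-like flaky query: deadlocks twice, then succeeds.
calls = {"count": 0}

@retry_on_deadlock(max_retries=3)
def flaky_query():
    calls["count"] += 1
    if calls["count"] < 3:
        raise DBDeadlock("injected deadlock")
    return "row"
```

Calling `flaky_query()` once returns the row after two injected deadlocks, which is the behavior a loki-enabled CI job would exercise across the whole API surface.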
## SR-IOV ML2 driver support for VLAN trunking

The patch related to this discussion is [8] - it's a big patch which introduces a base trunk driver interface for the SR-IOV agent but also an implementation of this base driver for Intel cards. During the discussion we agreed that we shouldn't have such vendor-specific code in the Neutron tree. Our suggestion was to split this patch into two pieces - a base driver which will be in the Neutron repo, and an Intel-specific driver which will be hosted in some other repository.

## Neutron-lib

We discussed a couple of different issues related to neutron-lib in general.
* Nate started a discussion about the future of neutron-lib and whether it is still really needed. His proposal was to relax a bit the policies about what should be re-homed from Neutron to the neutron-lib repository. Originally neutron-lib was introduced to help neutron stadium and 3rd-party projects to not import things from neutron but from neutron-lib only, and have fewer dependencies thanks to that. Currently the development of neutron-lib is much slower than it was in the past and we don't see so much need for neutron-lib from the developers of the stadium or 3rd-party projects. After some discussion we agreed that neutron-lib is still useful and important to keep e.g. API definitions and highlight base generic interfaces for neutron and other related projects. So we will keep neutron-lib as it is now but we will relax a bit the policy of what will be moved to this repo from neutron. A second outcome from the same discussion was the decision that we will change a bit our policy of updating every neutron-lib/neutron consumer when we re-home things. We will focus only on Neutron stadium projects. For other projects we will introduce a 1-cycle deprecation period for things which are moved out from neutron to the neutron-lib repo.
* Adam from the Octavia team started a discussion about how to test neutron and neutron-lib changes.
Currently all neutron CI jobs are using the latest released neutron-lib version. Because of that, developers can't use "Depends-On" to test a neutron change together with a related neutron-lib change. After discussion, our final decision is that we will propose one additional CI job which will use neutron-lib from the master branch. This job can then be used to check neutron and neutron-lib changes together, before neutron-lib with the required change is released.

# Day 4 (Friday)

## Nova-Neutron-Cyborg cross projects session

This was a long (2h) session where we discussed a couple of important things.
* The first topic was about scheduling of the routed networks. After some discussion, the conclusion was that the Nova team will work on something only on Nova's side as a first step. Later we will see how to move forward with the integration of Nova and Neutron there.
* The second topic was about updating minimum bandwidth for the ports. We agreed that this should be done on the Neutron side and that Neutron will update the resource allocation directly in Placement. There is no need to involve Nova in this process. We also decided that we will not allow updating the values of the existing, already associated rules in the QoS Policy. The only possible way to change minimum bandwidth will be to update the port to associate it with a new QoS Policy.
* The last topic of this meeting was related mostly to Cyborg and Nova cooperation. Some changes will be required on the Neutron side as well, like e.g. adding a new API extension which will add a "device profile" to the port object. This new attribute will contain e.g. information about the name of the device profile from Cyborg. Details are in the doc: [9]

## Future of lib/neutron and lib/neutron-legacy in devstack

In Devstack there are currently 2 modules which can configure Neutron. The old one is called "lib/neutron-legacy" and the new one is called "lib/neutron". "lib/neutron-legacy" has been deprecated for many cycles now, but it is still used everywhere.
The new module still isn't finished and isn't working fine. This is very confusing for users, as the really maintained and recommended module is still "lib/neutron-legacy". During the discussion, Sean Collins explained to us that originally this new module was created as an attempt to refactor the old module and to make Neutron in Devstack easier to maintain. But now we see that this process failed, as the new module still isn't used and we don't have any cycles to work on it. So our final conclusion is to "undeprecate" the old "lib/neutron-legacy" and get rid of the new module.

## OVN master and stable CI jobs

Currently we are running all Neutron OVN related jobs with OVN installed from packages provided by the operating system. Maciek explained to us why we should also test with OVN from the master branch. This is because new features added to the OVN driver in Neutron are often really dependent on changes in the OVN project. And until such changes in OVN are released, we can't test whether Neutron OVN changes really work with the new OVN code as they should. So we decided to change "neutron-ovn-tempest-full-multinode-ovs-master" so that it will also install OVN from source.

## CI Status

During the discussion about our CI jobs, we decided to remove from the Neutron check queue 2 non-voting and still unstable jobs: "neutron-tempest-plugin-dvr-multinode-scenario" and "neutron-tempest-dvr-ha-multinode-full", and move those jobs to the experimental queue. This is a temporary solution until we finally fix those 2 jobs.

# Team photo

It wasn't as great as in other PTGs. But you can find our team photos at [10]. Credit for the photos goes to Rodolfo and Jakub, who made them :) Thx a lot!

# Etherpads

The etherpad from the Neutron sessions can be found at [11]. Notes from the Nova-Neutron-Cyborg session can be found at [12].
[1] https://opendev.org/openstack/congress
[2] https://opendev.org/x/neutron-interconnection
[3] https://bugs.launchpad.net/neutron/+bug/1832526
[4] https://bugs.launchpad.net/neutron/+bug/1817881
[5] https://docs.openstack.org/neutron/latest/ovn/gaps.html
[6] https://bugs.launchpad.net/neutron/+bug/1045613
[7] https://bugs.launchpad.net/neutron/+bug/1285335
[8] https://review.opendev.org/#/c/665467/72/
[9] https://docs.google.com/document/d/11HkK-cLpDxa5Lku0_O0Nb8Uqh34Jqzx2N7j2aDu05T0/edit
[10] http://kaplonski.pl/images/Virtual_PTG_2020/
[11] http://kaplonski.pl/images/Virtual_PTG_2020/
[12] https://etherpad.opendev.org/p/nova-victoria-ptg

-- Slawek Kaplonski Senior software engineer Red Hat From zigo at debian.org Wed Jun 10 13:11:35 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 10 Jun 2020 15:11:35 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> Message-ID: <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> On 6/9/20 12:00 PM, Thierry Carrez wrote:
> The main change this proposal introduces would be to stop having release
> candidates at the end of the cycle. Instead we would produce a release,
> which would be a candidate for inclusion in the coordinated OpenStack
> release. New releases could be pushed to the release branch to include
> late bugfixes or translation updates, until final release date. So
> instead of doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets promoted
> to 14.0.0, we would produce a 14.0.0, then a 14.0.1 and just list that
> 14.0.1 in the release page at coordinated release time.

tl;dr: If we do that, I won't be releasing packages the day of the release, and won't be able to get puppet-openstack for Debian ready on time either.

Hi,

So more or less, you're removing the mandatory release of frozen-before-release artifact.
From a downstream distribution package maintainer, I'd like to voice my concern that with this scheme, it's going to be very complicated to deliver the OpenStack release on-time when it gets released. This also means that it will be difficult to get things like puppet-openstack fixed on-time too, because they depend on the packages.

So, while I don't really mind the beta releases anymore (I don't package them these days), I do strongly believe that the RC releases are convenient. I don't think we need RC2, RC3, etc, but having a working RC1 2 or 3 weeks before the release is really a good thing which I would regret a lot if we decided not to do it anymore.

Cheers, Thomas Goirand (zigo) From zigo at debian.org Wed Jun 10 13:17:50 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 10 Jun 2020 15:17:50 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <8f835cd5-ee55-e570-7f40-ddb118e2e088@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <8f835cd5-ee55-e570-7f40-ddb118e2e088@openstack.org> Message-ID: <7bdae922-fed2-b863-285a-d5025e0ddb40@debian.org> On 6/10/20 11:46 AM, Thierry Carrez wrote:
> Mark Goddard wrote:
>> [...]
>> One substantial change here is that there will no longer be a period
>> where the stable branch exists but the coordinated release does not.
>> This could be an issue for cycle trailing projects such as kolla which
>> sometimes get blocked on external (and internal) factors. Currently we
>> are able to revert master from it's temporary stable mode to start
>> development for the next cycle, while we continue stabilising the
>> stable branch for release.
>
> Making sure I understand... Currently you are using RC1 to create the
> stable branch, but it's not really a "release candidate", it's more a
> starting point for stabilization ?
So you can have a broken master > branch, tag it RC1 and create stable/ussuri from it, then work on making > stable/ussuri releasable while keeping master broken ? > > If I understand correctly, then it's a fair point: the new model > actually makes release candidates real release candidates, so it does > not really support having a master branch that never gets close to > releasable state. I would argue that this was not really the intent > before with RC1 tags, but it certainly made it easier to hide. > > To support your case more clearly, maybe we could allow creating stable > branches from arbitrary commit SHAs. This doesn't work, because downstream users of the upstream code have no way to know what upstream considers as the frozen-working-version. With RC1, we do know. Again, can we have a middle-ground where we have only a SINGLE rc release before the final one? For downstream distros (like Debian) it'd be easy to integrate patches to fix problems on top of this unique RC. Last, the RC doesn't have to be cut out of a stable branch. If that is the issue (i.e.: backporting to the stable branch before the release is a high cost), then the RC could be cut from master... Whatever is the best workflow upstream, I don't mind, as long as we have a pre-release to eat for 1/ downstream distros 2/ config-management projects like Kolla / OSA / puppet-openstack. Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Wed Jun 10 13:23:45 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jun 2020 13:23:45 +0000 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> Message-ID: <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> On 2020-06-10 15:11:35 +0200 (+0200), Thomas Goirand wrote: > On 6/9/20 12:00 PM, Thierry Carrez wrote: [...]
> > instead of doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets > > promoted to 14.0.0, we would produce a 14.0.0, then a 14.0.1 and > > just list that 14.0.1 in the release page at coordinated release > > time. [...] > So more or less, you're removing the mandatory release of > frozen-before-release artifact. > > From a downstream distribution package maintainer, I'd like to > voice my concern that with this scheme, it's going to be very > complicated to deliver the OpenStack release on-time when it gets > released. This also means that it will be difficult to get things > like puppet-openstack fixed on-time too, because they depend on > the packages. > > So, while I don't really mind the beta releases anymore (I don't > package them these days), I do strongly believe that the RC > releases are convenient. I don't think we need RC2, RC3, etc, but > having a working RC1 2 or 3 weeks before the release is really a > good thing which I would regret a lot if we decided not to do it > anymore. I don't understand what problem you're trying to convey. The suggestion is basically a cosmetic change, where instead of 14.0.0.0rc1 (and then if necessary 14.0.0.0rc2 and so on) we'd have 14.0.0 (and then if necessary 14.0.1 and so on). How does that change your packaging process? Is the concern that you can't know in advance what the release version number for a given service is going to be? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From skaplons at redhat.com Wed Jun 10 13:40:56 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 10 Jun 2020 15:40:56 +0200 Subject: [neutron] Virtual PTG summary In-Reply-To: <20200610130747.xcipu2r7zz2g5bdh@skaplons-mac> References: <20200610130747.xcipu2r7zz2g5bdh@skaplons-mac> Message-ID: <20200610134056.tn3dn6lpmk5cq3nf@skaplons-mac> On Wed, Jun 10, 2020 at 03:07:47PM +0200, Slawek Kaplonski wrote: > # Day 1 (Tuesday) > > ## Retrospective > > Among the good things, the team mentioned that the migration of the networking-ovn > driver into core neutron went well. Our CI stability also improved in the > last cycle. Another good thing was that we implemented all community goals > required in this cycle, and we even migrated almost all jobs to Zuul v3 syntax > already. Not so good was the progress on some important Blueprints, like adoption > of the new engine facade. The other thing mentioned here was activity in the > stadium projects and in neutron-lib. > > ## Support for the new oslo.policy > > Akihiro and Miguel volunteered to work on this in Victoria if they > have some cycles. It is an important feature from the Keystone team's point of > view. The Keystone and Nova teams already have some experience with that transition > so they can help us with it. We need to ensure that new policies in neutron > work fine with e.g. old ones defined in the stadium projects. > > ## Adoption of new enginefacade > > We discussed how to finish the adoption of the new enginefacade in Neutron in the > Victoria cycle. > There are a couple of people who volunteered to help with that. > After it is done in Neutron, we need to check what changes are required > in the stadium projects too. > > ## tenant_id/project_id transition > > Rodolfo raised the issue that in some places we may still have e.g. only > tenant_id and be missing project_id.
We should all be aware of that and whenever we > see something like that e.g. in neutron, neutron-lib, tempest or other repos, > we should propose a patch to change it. > > ## Neutronclient -> OSC migration > > We found out that the only blocker which prevents us from EOLing > python-neutronclient as a CLI tool is the lack of a way to pass custom > arguments to the commands. > As we know, the OSC/SDK team doesn't have anything against adding support for > that in OSC. It's actually already done for e.g. Glance. > Slaweq volunteered to work on it this cycle and, if that is done, to > define a final EOL date for python-neutronclient in the Victoria+2 cycle. > > ## RFEs review > > We reviewed about 10 old, stagnant RFEs. We decided to close some of them > as no longer relevant. For others, we discussed some details and we will > continue the discussion on LP and at the drivers meetings. > > ## Review of stadium projects > > We needed to review the list of stadium projects again. Conclusions are: > > * neutron-vpnaas, networking-odl, networking-bagpipe, networking-bgpvpn are in > good shape and we can definitely keep them in the stadium, > * neutron-fwaas - we will delete all code from this repo now and mark it as > a deleted project, see [1] for an example of how this is done, > * neutron-dynamic-routing - send an email to call for maintainers, but don't > consider it deprecated yet in this cycle, maybe in the next one, > * networking-midonet - this one is currently a problem, as it isn't well > tested with current neutron in the u/s gate. > > > # Day 2 (Wednesday) > > ## Meeting with Edge SIG > During this call, project [2] was mentioned. For now it was moved to the > unofficial projects namespace, but maybe someone will revive it if > there is a use case for it. > We talked about [3] and doing it in the Victoria cycle.
After discussions about > the same topic with the Nova team on Friday (see below), we decided that as a first step > Nova will work on changes on their side, and those changes will not require > anything from Neutron. On our side, Miguel Lavalle tentatively volunteered > to work on this too. > There was also a question about support for IEEE 1588 (PTP) in Neutron - so > far we don't have any support for it. But new RFEs are always welcome :) > > ## Future of metering agent > > The discussion was related to the spec [4]. The general conclusion from the > discussion is that we will work on this new L3 agent extension which will > collect metering data, but this will not in any way be a replacement for the old > metering agent. > When this is ready, we can start a discussion about potential > deprecation of the old metering agent and its API, but that's nothing for the > current cycle for sure. > > ## ML2/OVS and OVN feature gaps > > The list of gaps between the ML2/OVS and ML2/OVN backends is in [5]. We discussed > how to maintain this list and make it smaller. > The decision from this discussion is that we will try to remember to add new > items to this list every time a new feature is added to Neutron > and implemented in ML2/OVS. We will also open LP bugs for such things to keep them > tracked on the OVN driver side and to implement them when needed by the > community. > > ## OVN driver and agent API > > During this discussion we talked about the Neutron agents API and two options in > it: > > * setting an agent to be administratively disabled (admin_state_up = False) - we decided that this > is really relevant only for L3 and DHCP agents. It's not really used for OVS or > Linuxbridge agents, and it also shouldn't have any effect for OVN drivers.
So > Kuba will propose an RFE describing what our API should return when > this admin_state_up value is set for agents for which it doesn't make sense, > * deletion of agents - we decided that for ovn agents it should be possible to > delete them in the same way as is done now for other agents. This has a > downside when a user deletes an agent which is still alive. In such a case, the agent > will be deleted and immediately reappear in the neutron agent list. That is > because of how OVN stores data about those "agents" internally and how the > Neutron OVN driver uses this data. This isn't a big issue, however, because it may > happen in exactly the same way for example for OVS/DHCP/L3 agents if a user > deletes them just before they send a new heartbeat message to the Neutron > server. > > ## Stateless security groups support for OVS firewall (and OVN?) > > The conclusion from this discussion is that there is a need and there are use cases (DPDK, > telco) for introducing stateless security groups in the openvswitch and ovn firewall > drivers. During the discussion we agreed that the implementation shouldn't be > really hard. Bernard volunteered to open LP bugs to track it for both of those > drivers. > > ## OVN concept of distributed localport ports used for Metadata agent and DHCP > > Maciek explained to us the limitations in the current implementation of segments in > Neutron and how they block the OVN driver from supporting that. Basically the problem is > that Neutron currently has a limitation that one port can only be on one segment. > The OVN driver uses "special" ports (local ports in OVN) to handle metadata and > dhcp, and such a port has to have an IP address from all segments which belong to > the network. We agreed to introduce a new type of device_owner which will mean > that a port is "distributed". Such a port may then be in more than one > segment at the same time. Basically this concept is similar to how DVR router ports > work currently.
> > ## Switching default backend in Devstack to be OVN > > At the end of the day we discussed the possibility of switching Devstack's default > backend in Neutron to be OVN instead of the OVS agent. We all agreed that this is a > big step forward and that we want to do it as the Neutron team. > With this change we want to give a clear signal that we believe the OVN > backend is the future of Neutron and we want to invest more in it. > However, we want to highlight that this means *nothing* for real production > environments. We are *not* going to deprecate any backends. All existing > drivers will be tested as they are tested now in the Neutron gate. Different > deployment tools may still use different backends as default ones. > This change will, however, be a pretty big change for all OpenStack projects > which use the OVN driver as Neutron's backend in their jobs. Such a change was > already done in TripleO in the Stein cycle and it went pretty smoothly, so we are not > expecting any serious issues there. > The deadline we set for ourselves to make this change is the Victoria-2 > milestone. > > > # Day 3 (Thursday) > > ## Deprecation of use_veth_interconnection in the ovs-agent > > It seems after discussion that this config option was introduced a very long > time ago and was probably related to something like what is described in [6], but later > Open vSwitch introduced patch ports, and there was another old LP [7] which was > saying that the ovs-agent should use patch ports instead of veth pairs. And that > solution is still recommended in Neutron today. > So we agreed to deprecate this option in the Victoria cycle and remove it > completely in the W cycle. > > ## Loki service plugin > > The service plugin called "loki" was developed to be used in CI jobs to > add DB deadlock errors to some random DB queries. But for now it's not used at > all in the Neutron CI.
During the discussion we decided to add a new periodic > Zuul job which will run the neutron-tempest-plugin.api tests with the loki > service plugin enabled, to check if Neutron can still properly handle such random DB > deadlocks. > > ## SR-IOV ML2 driver support for VLAN trunking > > The patch related to this discussion is [8] - it's a big patch which introduces a > base trunk driver interface for the SR-IOV agent but also an implementation of this > base driver for Intel cards. During the discussion we agreed that we shouldn't > have such vendor-specific code in the Neutron tree. Our suggestion was to split > this patch into two pieces - a base driver which will be in the Neutron repo and an > Intel-specific driver which will be hosted in some other repository. > > ## Neutron-lib > > We discussed a couple of different issues related to neutron-lib in > general. > > * Nate started a discussion about the future of neutron-lib and whether it is still > really needed. His proposal was to relax a bit the policies about what should be > re-homed from Neutron to the neutron-lib repository. Originally neutron-lib was > introduced to help neutron stadium and 3rd-party projects to not import things > from neutron but from neutron-lib only, and to have fewer dependencies thanks to > that. Currently, development of neutron-lib is much slower than it was in the > past and we don't see so much need for neutron-lib from the developers of > the stadium or 3rd-party projects. > After some discussion we agreed that neutron-lib is still useful and > important for keeping e.g. API definitions and highlighting base generic interfaces for > neutron and other related projects. So we will keep neutron-lib as it is now, > but we will relax a bit the policy on what will be moved to this repo from > neutron. > The second outcome from the same discussion was the decision that we will change > a bit our policy of updating every neutron-lib/neutron consumer when we > re-home things.
We will be focused only on Neutron stadium projects. For other > projects we will introduce a one-cycle deprecation period for things which are > moved out from the neutron repo to the neutron-lib repo. > > * Adam from the Octavia team started a discussion about how to test neutron > and neutron-lib changes. Currently all neutron CI jobs are using the latest > released neutron-lib version. But because of that, developers can't use > "Depends-On" to test some neutron change together with the related neutron-lib > change. After discussion, our final decision is that we will propose one > additional CI job which will use neutron-lib from the master branch. So this job > can be used to check neutron and neutron-lib changes together before > neutron-lib with the required change is released. > > # Day 4 (Friday) > > ## Nova-Neutron-Cyborg cross projects session > > This was a long (2h) session where we discussed a couple of important things. > > * The first topic was about scheduling of routed networks. The conclusion after some > discussion was that the Nova team will work on something only on Nova's side as a > first step. Later we will see how to move forward with the integration of Nova and > Neutron there. > * The second topic was about updating minimum bandwidth for ports. We agreed > that this should be done on the Neutron side, and Neutron will update resource > allocation directly in Placement. There is no need to involve Nova in this > process. We also decided that we will not allow updating values of the existing, > already associated rules in a QoS policy. The only possible way to change > minimum bandwidth will be to update the port to associate it with a new QoS policy. > * The last topic of this meeting was related mostly to Cyborg and Nova cooperation. > Some changes will be required on the Neutron side as well, like e.g. adding a > new API extension which will add a "device profile" to the port object. This new > attribute will contain e.g. information about the name of the device profile from > Cyborg.
Details are in doc: [9] > > ## Future of lib/neutron and lib/neutron-legacy in devstack > > In Devstack there are currently 2 modules which can configure Neutron: the old > one called "lib/neutron-legacy" and the new one called "lib/neutron". > "lib/neutron-legacy" has been deprecated for many cycles, but it is > still used everywhere. The new module still isn't finished and isn't working well. > This is very confusing for users, as the really maintained and recommended module is still > "lib/neutron-legacy". > > During the discussion Sean Collins explained to us that originally the new > module was created as an attempt to refactor the old module and to make Neutron in > Devstack easier to maintain. But now we see that this process failed, as the new > module still isn't used and we don't have any cycles to work on it. So our > final conclusion is to "undeprecate" the old "lib/neutron-legacy" and get rid of > the new module. > > ## OVN master and stable CI jobs > > Currently we are running all Neutron OVN related jobs with OVN installed > from packages provided by the operating system. Maciek explained to us why we should > also test with OVN from the master branch. This is because new > features added to the OVN driver in Neutron are often really dependent on changes in > the OVN project. And until such changes in OVN are released, we can't > test whether the Neutron OVN changes really work with the new OVN code as they should. > So we decided to change "neutron-ovn-tempest-full-multinode-ovs-master" so > that it will also install OVN from source. > > ## CI Status > > During the discussion about our CI jobs we decided to remove 2 non-voting > and still unstable jobs from the Neutron check queue: > "neutron-tempest-plugin-dvr-multinode-scenario" and > "neutron-tempest-dvr-ha-multinode-full", and move those jobs to the experimental > queue. This is a temporary solution until we finally fix those 2 jobs. > > > # Team photo > > It wasn't as great as at other PTGs.
But you can find our team photos in > the current context at [10]. Credit for the photos goes to Rodolfo and Jakub, who > took them :) Thx a lot! > > > # Etherpads > > The Etherpad from the Neutron sessions can be found at [11]. Notes from the > Nova-Neutron-Cyborg session can be found at [12]. > > [1] https://opendev.org/openstack/congress > [2] https://opendev.org/x/neutron-interconnection > [3] https://bugs.launchpad.net/neutron/+bug/1832526 > [4] https://bugs.launchpad.net/neutron/+bug/1817881 > [5] https://docs.openstack.org/neutron/latest/ovn/gaps.html > [6] https://bugs.launchpad.net/neutron/+bug/1045613 > [7] https://bugs.launchpad.net/neutron/+bug/1285335 > [8] https://review.opendev.org/#/c/665467/72/ > [9] https://docs.google.com/document/d/11HkK-cLpDxa5Lku0_O0Nb8Uqh34Jqzx2N7j2aDu05T0/edit > [10] http://kaplonski.pl/images/Virtual_PTG_2020/ > [11] http://kaplonski.pl/images/Virtual_PTG_2020/ And I made a mistake in this link. It should be [11] https://etherpad.opendev.org/p/neutron-victoria-ptg > [12] https://etherpad.opendev.org/p/nova-victoria-ptg > > -- > Slawek Kaplonski > Senior software engineer > Red Hat -- Slawek Kaplonski Senior software engineer Red Hat From gfidente at redhat.com Wed Jun 10 14:17:28 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Wed, 10 Jun 2020 16:17:28 +0200 Subject: [tripleo] Proposing Francesco Pantano as core on TripleO/Ceph Message-ID: <9d4cc3fc-2f6e-cd74-ec5c-413ba173913a@redhat.com> Hi all, Francesco (fmount on freenode) started working on the Ceph integration bits in TripleO more than a year ago now [1], contributing over time to all components, heat templates, validations, puppet and ansible repos. He understood the tight relationship between TripleO and ceph-ansible and contributed directly to ceph-ansible as well, when necessary [2]. I think he'd be a great addition to the TripleO cores group and I hope for him to work more in the future even outside the Ceph integration efforts.
I would like to propose Francesco as core for the TripleO group on the Ceph bits. Please vote here or leave any feedback for him. Thanks, Giulio 1. https://review.opendev.org/#/q/owner:fpantano%2540redhat.com+status:merged 2. https://github.com/ceph/ceph-ansible/commits?author=fmount -- Giulio Fidente GPG KEY: 08D733BA From johfulto at redhat.com Wed Jun 10 14:21:41 2020 From: johfulto at redhat.com (John Fulton) Date: Wed, 10 Jun 2020 10:21:41 -0400 Subject: [tripleo] Proposing Francesco Pantano as core on TripleO/Ceph In-Reply-To: <9d4cc3fc-2f6e-cd74-ec5c-413ba173913a@redhat.com> References: <9d4cc3fc-2f6e-cd74-ec5c-413ba173913a@redhat.com> Message-ID: On Wed, Jun 10, 2020 at 10:21 AM Giulio Fidente wrote: > Hi all, > > Francesco (fmount on freenode) started working on the Ceph integration > bits in TripleO more than a year ago now [1], contributing over time to > all components, heat templates, validations, puppet and ansible repos. > > He understood the tight relationship between TripleO and ceph-ansible > and contributed directly to ceph-ansible as well, when necessary [2]. > > I think he'd be a great addition to the TripleO cores group and I hope > for him to work more in the future even outside the Ceph integration > efforts. > > I would like to propose Francesco as core for the TripleO group on the Ceph > bits. > > Please vote here or leave any feedback for him. > +1 Been working with him for a while now. He's awesome! --John > > Thanks, > Giulio > > 1. > https://review.opendev.org/#/q/owner:fpantano%2540redhat.com+status:merged > > 2. https://github.com/ceph/ceph-ansible/commits?author=fmount > -- > Giulio Fidente > GPG KEY: 08D733BA > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thomas at goirand.fr Wed Jun 10 15:15:46 2020 From: thomas at goirand.fr (Thomas Goirand) Date: Wed, 10 Jun 2020 17:15:46 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> Message-ID: <46ef2c62-e7fa-f853-ff3f-90e6e69fbee1@goirand.fr> On 6/10/20 3:23 PM, Jeremy Stanley wrote: > On 2020-06-10 15:11:35 +0200 (+0200), Thomas Goirand wrote: >> On 6/9/20 12:00 PM, Thierry Carrez wrote: > [...] >>> instead of doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets >>> promoted to 14.0.0, we would produce a 14.0.0, then a 14.0.1 and >>> just list that 14.0.1 in the release page at coordinated release >>> time. > [...] >> So more or less, you're removing the mandatory release of >> frozen-before-release artifact. >> >> From a downstream distribution package maintainer, I'd like to >> voice my concern that with this scheme, it's going to be very >> complicated to deliver the OpenStack release on-time when it gets >> released. This also means that it will be difficult to get things >> like puppet-openstack fixed on-time too, because they depend on >> the packages. >> >> So, while I don't really mind the beta releases anymore (I don't >> package them these days), I do strongly believe that the RC >> releases are convenient. I don't think we need RC2, RC3, etc, but >> having a working RC1 2 or 3 weeks before the release is really a >> good thing which I would regret a lot if we decided not to do it >> anymore. > > I don't understand what problem you're trying to convey. The > suggestion is basically a cosmetic change, where instead of > 14.0.0.0rc1 (and then if necessary 14.0.0.0rc2 and so on) we'd have > 14.0.0 (and then if necessary 14.0.1 and so on). How does that > change your packaging process? 
Is the concern that you can't know in > advance what the release version number for a given service is going > to be? I don't buy into the "this is only cosmetic": that's not what's going to happen, unfortunately. Obviously, in your example, 14.0.0 will *NOT* be considered a pre-release of the next stable. 14.0.0 will be seen as the "final release" version, ie: the first stable version. This means that we wont have tags for the pre-release. If the issue is just cosmetic as you say, then let's keep rc1 as the name for the pre-release version. Cheers, Thomas Goirand (zigo) From corey.bryant at canonical.com Wed Jun 10 15:32:54 2020 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 10 Jun 2020 11:32:54 -0400 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <46ef2c62-e7fa-f853-ff3f-90e6e69fbee1@goirand.fr> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> <46ef2c62-e7fa-f853-ff3f-90e6e69fbee1@goirand.fr> Message-ID: On Wed, Jun 10, 2020 at 11:28 AM Thomas Goirand wrote: > On 6/10/20 3:23 PM, Jeremy Stanley wrote: > > On 2020-06-10 15:11:35 +0200 (+0200), Thomas Goirand wrote: > >> On 6/9/20 12:00 PM, Thierry Carrez wrote: > > [...] > >>> instead of doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets > >>> promoted to 14.0.0, we would produce a 14.0.0, then a 14.0.1 and > >>> just list that 14.0.1 in the release page at coordinated release > >>> time. > > [...] > >> So more or less, you're removing the mandatory release of > >> frozen-before-release artifact. > >> > >> From a downstream distribution package maintainer, I'd like to > >> voice my concern that with this scheme, it's going to be very > >> complicated to deliver the OpenStack release on-time when it gets > >> released. 
This also means that it will be difficult to get things > >> like puppet-openstack fixed on-time too, because they depend on > >> the packages. > >> > >> So, while I don't really mind the beta releases anymore (I don't > >> package them these days), I do strongly believe that the RC > >> releases are convenient. I don't think we need RC2, RC3, etc, but > >> having a working RC1 2 or 3 weeks before the release is really a > >> good thing which I would regret a lot if we decided not to do it > >> anymore. > > > > I don't understand what problem you're trying to convey. The > > suggestion is basically a cosmetic change, where instead of > > 14.0.0.0rc1 (and then if necessary 14.0.0.0rc2 and so on) we'd have > > 14.0.0 (and then if necessary 14.0.1 and so on). How does that > > change your packaging process? Is the concern that you can't know in > > advance what the release version number for a given service is going > > to be? > > I don't buy into the "this is only cosmetic": that's not what's going to > happen, unfortunately. Obviously, in your example, 14.0.0 will *NOT* be > considered a pre-release of the next stable. 14.0.0 will be seen as the > "final release" version, ie: the first stable version. This means that > we wont have tags for the pre-release. > > If the issue is just cosmetic as you say, then let's keep rc1 as the > name for the pre-release version. > > Cheers, > > Thomas Goirand (zigo) > > I'm not seeing any issues with this downstream in Ubuntu. It'll be the same as handling openstack dependency releases today. Semantic versioning will tell you if it's bug fixes only. Corey -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Wed Jun 10 16:30:04 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 10 Jun 2020 17:30:04 +0100 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <8f835cd5-ee55-e570-7f40-ddb118e2e088@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <8f835cd5-ee55-e570-7f40-ddb118e2e088@openstack.org> Message-ID: On Wed, 10 Jun 2020 at 10:47, Thierry Carrez wrote: > > Mark Goddard wrote: > > [...] > > One substantial change here is that there will no longer be a period > > where the stable branch exists but the coordinated release does not. > > This could be an issue for cycle trailing projects such as kolla which > > sometimes get blocked on external (and internal) factors. Currently we > > are able to revert master from it's temporary stable mode to start > > development for the next cycle, while we continue stabilising the stable > > branch for release. > > Making sure I understand... Currently you are using RC1 to create the > stable branch, but it's not really a "release candidate", it's more a > starting point for stabilization ? So you can have a broken master > branch, tag it RC1 and create stable/ussuri from it, then work on making > stable/ussuri releasable while keeping master broken ? That's right. It's more incomplete than broken though. For example, RDO depends on Kolla's RC1 for its GA, then we update the stable branch to install the RDO release RPM when it becomes available. There's also a slightly weird dance we have to do where we make master 'look like' the new release, by setting the branch name for our dependencies etc, then we revert those changes on master after the branch is cut. If the requirement to branch and tag RC1 at the same time were lifted, it would avoid the need for this.
> > If I understand correctly, then it's a fair point: the new model > actually makes release candidates real release candidates, so it does > not really support having a master branch that never gets close to > releasable state. I would argue that this was not really the intent > before with RC1 tags, but it certainly made it easier to hide. > > To support your case more clearly, maybe we could allow creating stable > branches from arbitrary commit SHAs. It used to be the case before (when > stable branches were created by humans) but when automation took over we > enforced that branches need to be created from tags. I think that would work well for us. > > I'll check with the release team where that requirement came from, and > if we can safely relax it. > > -- > Thierry Carrez (ttx) > From mark at stackhpc.com Wed Jun 10 16:33:55 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 10 Jun 2020 17:33:55 +0100 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> Message-ID: On Wed, 10 Jun 2020 at 14:24, Jeremy Stanley wrote: > > On 2020-06-10 15:11:35 +0200 (+0200), Thomas Goirand wrote: > > On 6/9/20 12:00 PM, Thierry Carrez wrote: > [...] > > > instead of doing a 14.0.0.0rc1 and then a 14.0.0.0rc2 that gets > > > promoted to 14.0.0, we would produce a 14.0.0, then a 14.0.1 and > > > just list that 14.0.1 in the release page at coordinated release > > > time. > [...] > > So more or less, you're removing the mandatory release of > > frozen-before-release artifact. > > > > From a downstream distribution package maintainer, I'd like to > > voice my concern that with this scheme, it's going to be very > > complicated to deliver the OpenStack release on-time when it gets > > released. 
This also means that it will be difficult to get things > > like puppet-openstack fixed on-time too, because they depend on > > the packages. > > > > So, while I don't really mind the beta releases anymore (I don't > > package them these days), I do strongly believe that the RC > > releases are convenient. I don't think we need RC2, RC3, etc, but > > having a working RC1 2 or 3 weeks before the release is really a > > good thing which I would regret a lot if we decided not to do it > > anymore. > > I don't understand what problem you're trying to convey. The > suggestion is basically a cosmetic change, where instead of > 14.0.0.0rc1 (and then if necessary 14.0.0.0rc2 and so on) we'd have > 14.0.0 (and then if necessary 14.0.1 and so on). How does that > change your packaging process? Is the concern that you can't know in > advance what the release version number for a given service is going > to be? I think the issue is that currently there is a period of time in which every project has a release candidate which can be packaged and tested, prior to the release. In the new model there is no obligation to release anything prior to GA, and I expect most teams would not. > -- > Jeremy Stanley From fungi at yuggoth.org Wed Jun 10 16:56:46 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jun 2020 16:56:46 +0000 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <46ef2c62-e7fa-f853-ff3f-90e6e69fbee1@goirand.fr> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> <46ef2c62-e7fa-f853-ff3f-90e6e69fbee1@goirand.fr> Message-ID: <20200610165646.iwvckxm4axyvywsa@yuggoth.org> On 2020-06-10 17:15:46 +0200 (+0200), Thomas Goirand wrote: [...] > I don't buy into the "this is only cosmetic": that's not what's > going to happen, unfortunately. 
Obviously, in your example, 14.0.0 > will *NOT* be considered a pre-release of the next stable. 14.0.0 > will be seen as the "final release" version, ie: the first stable > version. That's no different from how it works now, other than the actual characters in the version string. Currently most projects cut a stable/whatever branch from an rc1 tag and then the same commit which got that rc1 tag gets re-tagged with the release version tag. In Thierry's proposal we'd just use an actual release-numbered tag rather than an rc1 tag. Projects which previously got an rc2 in their stable branch before the official coordinated release date would do a .1 point release there instead. > This means that we wont have tags for the pre-release. We will, they'll just have release-like numbers on them (but the third component of the release number may not be the same if a patch version is tagged in that stable branch prior to the coordinated release). In Thierry's example, 14.0.0 is a candidate for the coordinated release just like 14.0.0.0rc1 would have been, but if issues are found with it then there could be a 14.0.0.1 before the coordinated release date, similar to 14.0.0.0rc2. The same factors which drive some projects to need a second (or third, or fourth) release candidate would still be present to cause them to want a second (or third, or fourth) patch version before the coordinated release date. > If the issue is just cosmetic as you say, then let's keep rc1 as > the name for the pre-release version. The workflow difference is primarily cosmetic (other than not necessarily needing to re-tag the last release candidate at coordinated release time). The issue it solves is not cosmetic: we currently have two primary release models, one for services and another for libraries. This would result in following the same model for services as we've been using to release libraries for years, just at a different point in the cycle than when libraries are released. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Jun 10 17:05:32 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jun 2020 17:05:32 +0000 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> Message-ID: <20200610170532.krmc4t5y4zb5jt3m@yuggoth.org> On 2020-06-10 17:33:55 +0100 (+0100), Mark Goddard wrote: [...] > I think the issue is that currently there is a period of time in > which every project has a release candidate which can be packaged > and tested, prior to the release. In the new model there is no > obligation to release anything prior to GA, and I expect most > teams would not. You and I clearly read very different proposals then. My understanding is that this does not get rid of the period of time you're describing, just changes the tags we use in it: [Excerpt from Thierry's original post yesterday...] > > The "final" release would be marked by creating a release > > (stable) branch, and that would need to be done before a > > deadline. Like today, that deadline depends on whether that > > deliverable is a library, a client library, a release-trailing > > exception or just a regular part of the common release. > > > > The main change this proposal introduces would be to stop having > > release candidates at the end of the cycle. Instead we would > > produce a release, which would be a candidate for inclusion in > > the coordinated OpenStack release. For service projects, that "deadline" he talks about would be the start of the traditional RC period, we just wouldn't use special rc1 tags for branching at that point, we'd use actual version numbers to branch from. 
I think the proposal has probably confused some folks by saying, "stop having release candidates [...and instead have a] candidate for inclusion in the coordinated OpenStack release." It would basically still be a "release candidate" in spirit, just not in name, and not using the same tagging scheme as we have traditionally used for release candidates of service projects. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Wed Jun 10 17:14:41 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 10 Jun 2020 12:14:41 -0500 Subject: [oslo] PTG Wrapup Message-ID: I wrote up a slightly wordy summary of the Oslo PTG discussions: http://blog.nemebean.com/content/oslo-virtual-ptg-victoria Hopefully I didn't forget anything in the week+ since then, but if I did let me know. :-) -Ben From fungi at yuggoth.org Wed Jun 10 17:37:27 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jun 2020 17:37:27 +0000 Subject: [ops][telemetry] OpenInfra Labs discussion on cloud monitoring In-Reply-To: References: Message-ID: <20200610173726.wporoa3e5k6hi2sz@yuggoth.org> The OpenInfra Labs project is starting a discussion on their mailing list to try and find commonalities across open cloud monitoring approaches, beginning by assembling user stories and requirements from a diversity of organizations. If anyone has any feedback for them, I recommend following up to that ML thread (and subscribe even, their list is very low-volume). http://lists.opendev.org/pipermail/openinfralabs/2020-June/000061.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zigo at debian.org Wed Jun 10 21:56:41 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 10 Jun 2020 23:56:41 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <20200610165646.iwvckxm4axyvywsa@yuggoth.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> <46ef2c62-e7fa-f853-ff3f-90e6e69fbee1@goirand.fr> <20200610165646.iwvckxm4axyvywsa@yuggoth.org> Message-ID: <2f187d4a-c97c-07ea-6473-d0d0cb86eafb@debian.org> On 6/10/20 6:56 PM, Jeremy Stanley wrote: >> This means that we won't have tags for the pre-release. > > We will, they'll just have release-like numbers on them It doesn't make sense. >> If the issue is just cosmetic as you say, then let's keep rc1 as >> the name for the pre-release version. > > The workflow difference is primarily cosmetic (other than not > necessarily needing to re-tag the last release candidate at > coordinated release time). Is the re-tag of services THAT time/resource consuming? > The issue it solves is not cosmetic: we > currently have two primary release models, one for services and > another for libraries. This would result in following the same model > for services as we've been using to release libraries for years, > just at a different point in the cycle than when libraries are > released. When I look at my Debian Q/A page [1], I won't be able to tell whether I missed packaging a final release just by looking at the version numbers (i.e. track whether some RC version is still remaining and fix it...). I'd be for the opposite move: tagging libraries as RC before the final release would make a lot of sense, and would help everyone identify what these versions represent.
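For what it's worth, the kind of "is there still an RC in my package list?" tracking described above is easy to script as long as pre-releases carry a distinctive suffix. A quick sketch (my own illustration, with made-up package versions):

```python
import re

def is_pre_release(version):
    # OpenStack-style pre-release tags end in aN/bN/rcN, e.g. 14.0.0.0rc1
    return re.search(r"(?:a|b|rc)\d+$", version) is not None

# hypothetical "what have I packaged so far" snapshot
packaged = {"nova": "21.0.0.0rc1", "glance": "20.0.0", "cinder": "16.0.1"}
leftover_rcs = [name for name, ver in packaged.items() if is_pre_release(ver)]
print(leftover_rcs)  # ['nova']
```

With plain release numbers on both the candidate and the final tag, this check no longer distinguishes the two — which is exactly the tracking concern.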
On 6/10/20 6:33 PM, Mark Goddard wrote: > I think the issue is that currently there is a period of time in which > every project has a release candidate which can be packaged and > tested, prior to the release. In the new model there is no obligation > to release anything prior to GA, and I expect most teams would not. There's also what Mark wrote above... On 6/10/20 7:05 PM, Jeremy Stanley wrote: > You and I clearly read very different proposals then. My > understanding is that this does not get rid of the period of time > you're describing, just changes the tags we use in it: With this proposal, every project will treat the scheduled first RC as the release time itself, and move on to work on master. Even worse: since these are supposed to be just RCs, you'll see that projects will care less about being on time for it, and final versions from projects will be cut at any point in a period ranging from the start of what we used to call RC1 to the final release date. So this effectively removes the pre-release period which we used to have dedicated to debugging and stabilising. On 6/10/20 6:56 PM, Jeremy Stanley wrote: > I think the proposal has probably confused some folks > by saying, "stop having release candidates [...and instead have a] > candidate for inclusion in the coordinated OpenStack release." Jeremy, my opinion is that you are the person not understanding what this proposal implies, and what consequences it will have on how projects release final versions.
I would understand such a move: - if we declare OpenStack more mature, and needing less care for coordinated releases. - if there aren't enough people working on stable branches between RC and final releases. - if OpenStack isn't producing lots of bug-fixes after the first RCs, and they are now useless. I wouldn't understand dropping RC versions just because the numbers don't look pretty. That's IMO a wrong answer to a wrong problem. Cheers, Thomas Goirand (zigo) [1] https://qa.debian.org/developer.php?login=openstack-devel at lists.alioth.debian.org From fungi at yuggoth.org Wed Jun 10 22:25:53 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jun 2020 22:25:53 +0000 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <2f187d4a-c97c-07ea-6473-d0d0cb86eafb@debian.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> <46ef2c62-e7fa-f853-ff3f-90e6e69fbee1@goirand.fr> <20200610165646.iwvckxm4axyvywsa@yuggoth.org> Message-ID: <20200610222553.vftdp4ipbbesl2ft@yuggoth.org> On 2020-06-10 23:56:41 +0200 (+0200), Thomas Goirand wrote: [...] > Is the re-tag of services THAT time/resource consuming? [...] It means divergent tooling and process for releasing our libraries vs releasing our services, and additional cognitive load for projects to need to decide which deliverables should follow what release model. > When I'll look into my Debian Q/A page [1] I wont be able to know if I > missed packaging final release just by looking at version numbers (ie: > track if there's still some RC version remaining and fix...). You don't know that today for our libraries either, since they branch from non-rc version numbers and may produce additional point releases in their stable/$cycle branch up until near the coordinated release.
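One practical detail behind libraries branching from plain version numbers (my own illustration, not from the thread): PEP 440 consumers such as pip skip pre-release versions when resolving an ordinary requirement specifier unless pre-releases are explicitly requested, so rc-tagged library versions would be invisible to default dependency resolution. A very loose sketch of that filtering rule, with made-up version numbers:

```python
import re

def latest(versions, allow_prereleases=False):
    """Loosely mimic a PEP 440 resolver's default behaviour: rc-suffixed
    versions are ignored unless explicitly allowed. Illustration only."""
    def is_pre(v):
        return re.search(r"rc\d+$", v) is not None
    def key(v):
        release = re.sub(r"\.?rc\d+$", "", v)
        nums = tuple(int(p) for p in release.split("."))
        # pad so 8.1.0 and 8.1.0.0 compare equal; rc sorts below final
        return nums + (0,) * (4 - len(nums)), not is_pre(v)
    pool = [v for v in versions if allow_prereleases or not is_pre(v)]
    return max(pool, key=key) if pool else None

available = ["8.0.0", "8.0.1", "8.1.0.0rc1"]
print(latest(available))                          # '8.0.1'
print(latest(available, allow_prereleases=True))  # '8.1.0.0rc1'
```

That is roughly why rc tags on libraries would not mesh well with requirements lists that other projects' master branches depend on.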
> I'd be for the opposite move: tagging libraries as RC before the final > release would make a lot of sense, and help everyone identify what these > versions represent. [...] Except that they typically branch long, long before the coordinated release and need to have their version numbers included in requirements lists for other projects' master branches. It might be doable to have them use release candidates on the new stable branch until close to coordinated release and then tag them all with version numbers and update every requirements list to compensate, but that seems like it would turn into a massive scramble vs the one patch version here and there that they normally accrue late in the cycle. > With this proposal, every project will treat the scheduled first > RC as the release time itself, and move on to work on master. They already do. Right now when rc1 is tagged and the stable/$cycle branch is created from it, subsequent fixes enter master and get backported to stable/$cycle and then an rc2 (or 3, or 4...) is tagged in stable/$cycle as needed. At coordinated release time whatever the corresponding commit was for the last rcN tag was gets re-tagged as the release version, and in most cases that's identical to the rc1 tag (no additional fixes during release candidate period). The new proposal would follow the same process, just not stick rcN prefixes on the tags and increment the last number in the tag instead, then not do any re-tagging of the last tag at coordinated release time because the versions of the last tags are already normal version numbers. > Even worse: since they are supposed to be just RC, you'll see that > projects will care less to be on-time for it, and the final > version from projects will be cut in a period varying from start > of what we used to call the RC1, to the final release date. I don't see why you would expect this any more than today. 
Projects backport patches from master to the new stable/$cycle branch during this period already, the incentive to do that doesn't change based on what the tags created in that branch look like. > So this effectively, removes the pre-release period which we used to > have dedicated for debugging and stabilising. [...] You keep asserting this, but the plan is precisely intended to maintain this period, and says nothing about getting rid of it. > If that is what you want to do (ie: stop having release candidates), > because of various reasons, just explain why and move on. I would > understand such a move: > - if we declare OpenStack more mature, and needing less care for > coordinated releases. > - if there's not enough people working on stable branches between RC and > final releases. > - if OpenStack isn't producing lots of bug-fixes after the first RCs, > and they are now useless. [...] The original plan as written explains exactly why: "We [...have] a confusing collection of release models with abstract rules for each and not much flexibility. Projects are forced to choose between those models for their deliverables, with limited guidance. And much of the rationale for those models (exercise release machinery early and often, trigger external testing...) is no longer valid." We already ask far too much of our project teams, and simplifying the release models and processes they have to understand and follow should ease some of their overall burden, freeing up their time for more important remaining tasks. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From whayutin at redhat.com Wed Jun 10 22:30:25 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 10 Jun 2020 16:30:25 -0600 Subject: [tripleo] Proposing Francesco Pantano as core on TripleO/Ceph In-Reply-To: References: <9d4cc3fc-2f6e-cd74-ec5c-413ba173913a@redhat.com> Message-ID: +1 most def :) On Wed, Jun 10, 2020 at 8:23 AM John Fulton wrote: > On Wed, Jun 10, 2020 at 10:21 AM Giulio Fidente > wrote: > >> Hi all, >> >> Francesco (fmount on freenode) started working on the Ceph integration >> bits in TripleO more than a year ago now [1], contributing over time to >> all components, heat templates, validations, puppet and ansible repos. >> >> He understood the tight relationship in between TripleO and ceph-ansible >> and contributed directly to ceph-ansible as well, when necessary [2]. >> >> I think he'd be a great addition to the TripleO cores group and I hope >> for him to work more in the future even outside the Ceph integration >> efforts. >> >> I would like to propose Francesco core for the TripleO group on the Ceph >> bits. >> >> Please vote here or leave any feedback for him. >> > > +1 Been working with him for a while now. He's awesome! --John > > >> >> Thanks, >> Giulio >> >> 1. >> https://review.opendev.org/#/q/owner:fpantano%2540redhat.com+status:merged >> >> 2. https://github.com/ceph/ceph-ansible/commits?author=fmount >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Thu Jun 11 01:49:14 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 10 Jun 2020 21:49:14 -0400 Subject: [cinder] victoria virtual mid-cycle part 1 poll Message-ID: Hello Cinder team and fellow travelers, As discussed at today's Cinder meeting, we'll hold a virtual mid-cycle meeting in two two-hour sessions, at weeks R-16 and at R-8. 
Please indicate your availability to meet for the first session, which will be held during the week of June 22-26: https://doodle.com/poll/4vx4gavsewrkgbyh Please respond before 12:00 UTC on Monday 15 June. thanks, brian From marios at redhat.com Thu Jun 11 06:00:22 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 11 Jun 2020 09:00:22 +0300 Subject: [tripleo] Proposing Francesco Pantano as core on TripleO/Ceph In-Reply-To: <9d4cc3fc-2f6e-cd74-ec5c-413ba173913a@redhat.com> References: <9d4cc3fc-2f6e-cd74-ec5c-413ba173913a@redhat.com> Message-ID: +1 On Wed, Jun 10, 2020 at 5:19 PM Giulio Fidente wrote: > Hi all, > > Francesco (fmount on freenode) started working on the Ceph integration > bits in TripleO more than a year ago now [1], contributing over time to > all components, heat templates, validations, puppet and ansible repos. > > He understood the tight relationship in between TripleO and ceph-ansible > and contributed directly to ceph-ansible as well, when necessary [2]. > > I think he'd be a great addition to the TripleO cores group and I hope > for him to work more in the future even outside the Ceph integration > efforts. > > I would like to propose Francesco core for the TripleO group on the Ceph > bits. > > Please vote here or leave any feedback for him. > > Thanks, > Giulio > > 1. > https://review.opendev.org/#/q/owner:fpantano%2540redhat.com+status:merged > > 2. https://github.com/ceph/ceph-ansible/commits?author=fmount > -- > Giulio Fidente > GPG KEY: 08D733BA > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at est.tech Thu Jun 11 07:56:07 2020 From: balazs.gibizer at est.tech (Balázs Gibizer) Date: Thu, 11 Jun 2020 09:56:07 +0200 Subject: [neutron] Virtual PTG summary In-Reply-To: <20200610130747.xcipu2r7zz2g5bdh@skaplons-mac> References: <20200610130747.xcipu2r7zz2g5bdh@skaplons-mac> Message-ID: On Wed, Jun 10, 2020 at 15:07, Slawek Kaplonski wrote: > [snip] > * Second topic was about updating minimum bandwidth for the ports. We > agreed > that this should be done on Neutron side and Neutron will update > resource > allocation directly in Placement. There is no need to involve Nova > in this > process. We also decided that we will not allow updating values of > the existing, > already associated rules in the QoS Policy. The only possible way > to change > minimum bandwidth will be to update the port to associate it with a new > QoS Policy. I've filed a Neutron RFE [1] outlining the Neutron - Placement interaction needed for this work. Lajos or Bence will detail the Neutron internal part of the RFE. Cheers, gibi [1] https://bugs.launchpad.net/neutron/+bug/1882804 [snip] From mark at stackhpc.com Thu Jun 11 08:03:16 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 11 Jun 2020 09:03:16 +0100 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <20200610170532.krmc4t5y4zb5jt3m@yuggoth.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> <20200610170532.krmc4t5y4zb5jt3m@yuggoth.org> Message-ID: On Wed, 10 Jun 2020 at 18:06, Jeremy Stanley wrote: > > On 2020-06-10 17:33:55 +0100 (+0100), Mark Goddard wrote: > [...] > > I think the issue is that currently there is a period of time in > > which every project has a release candidate which can be packaged > > and tested, prior to the release.
In the new model there is no > > obligation to release anything prior to GA, and I expect most > > teams would not. > > You and I clearly read very different proposals then. Friendlier wording: we interpreted it differently. > My > understanding is that this does not get rid of the period of time > you're describing, just changes the tags we use in it: > > [Excerpt from Thierry's original post yesterday...] > > > > The "final" release would be marked by creating a release > > > (stable) branch, and that would need to be done before a > > > deadline. Like today, that deadline depends on whether that > > > deliverable is a library, a client library, a release-trailing > > > exception or just a regular part of the common release. > > > > > > The main change this proposal introduces would be to stop having > > > release candidates at the end of the cycle. Instead we would > > > produce a release, which would be a candidate for inclusion in > > > the coordinated OpenStack release. > > For service projects, that "deadline" he talks about would be the > start of the traditional RC period, we just wouldn't use special rc1 > tags for branching at that point, we'd use actual version numbers to > branch from. I think the proposal has probably confused some folks > by saying, "stop having release candidates [...and instead have a] > candidate for inclusion in the coordinated OpenStack release." It > would basically still be a "release candidate" in spirit, just not > in name, and not using the same tagging scheme as we have > traditionally used for release candidates of service projects. I think this is reading between the lines somewhat. I agree that it makes sense however, and preserves the period before release during which every deliverable to be included in GA should have a release available. 
> -- > Jeremy Stanley From skaplons at redhat.com Thu Jun 11 08:40:36 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 11 Jun 2020 10:40:36 +0200 Subject: [neutron] Virtual PTG summary In-Reply-To: References: <20200610130747.xcipu2r7zz2g5bdh@skaplons-mac> Message-ID: <20200611084036.da2yaotsv7voeikw@skaplons-mac> Hi, On Thu, Jun 11, 2020 at 09:56:07AM +0200, Balázs Gibizer wrote: > > > On Wed, Jun 10, 2020 at 15:07, Slawek Kaplonski wrote: > > > > [snip] > > > * Second topic was about updating minimum bandwidth for the ports. We > > agreed > that this should be done on Neutron side and Neutron will update > > resource > allocation directly in Placement. There is no need to involve Nova in > > this > > process. We also decided that we will not allow updating values of the > > existing, > > already associated rules in the QoS Policy. The only possible way to > > change > > minimum bandwidth will be to update the port to associate it with a new QoS > > Policy. > > I've filed a Neutron RFE [1] outlining the Neutron - Placement interaction > needed for this work. Lajos or Bence will detail the Neutron internal part > of the RFE. Thx gibi. I marked it as triaged for now. > > Cheers, > gibi > > [1] https://bugs.launchpad.net/neutron/+bug/1882804 > > [snip] > > -- Slawek Kaplonski Senior software engineer Red Hat From ruslanas at lpic.lt Thu Jun 11 08:46:41 2020 From: ruslanas at lpic.lt (Ruslanas Gžibovskis) Date: Thu, 11 Jun 2020 10:46:41 +0200 Subject: [neutron] subnet policy for ip allocation Message-ID: Hi team, I need my subnet's IP addresses not to be allocated from the beginning of the range again; instead, allocation should continue with the next IP after the previously used one: instance1 created: 1.1.1.1 Instance2 created: 1.1.1.2 instance1 deleted. instance2 deleted. Instance3 created: 1.1.1.3 (not 1.1.1.1 again) I remember having read about something like this, but I don't remember how to search for it...
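To illustrate the allocation behaviour being asked about — a freed address should not be handed out again while fresh addresses remain in the pool — here is a toy Python sketch of the desired semantics (an illustration only, not Neutron's actual IPAM code or configuration):

```python
import ipaddress

class RoundRobinPool:
    """Toy allocator: hands out addresses in ring order, so a freed
    address is only reused after the rest of the pool has been cycled
    through. Illustrates the desired semantics; not Neutron code."""
    def __init__(self, cidr):
        self.addrs = [str(h) for h in ipaddress.ip_network(cidr).hosts()]
        self.cursor = 0       # remembers where the last allocation stopped
        self.in_use = set()

    def allocate(self):
        for _ in range(len(self.addrs)):
            candidate = self.addrs[self.cursor % len(self.addrs)]
            self.cursor += 1
            if candidate not in self.in_use:
                self.in_use.add(candidate)
                return candidate
        raise RuntimeError("pool exhausted")

    def release(self, addr):
        self.in_use.discard(addr)

pool = RoundRobinPool("1.1.1.0/29")
first = pool.allocate()    # 1.1.1.1
second = pool.allocate()   # 1.1.1.2
pool.release(first)
pool.release(second)
third = pool.allocate()
print(third)               # 1.1.1.3, not 1.1.1.1 again
```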
-- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Jun 11 10:23:27 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 11 Jun 2020 12:23:27 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> <20200610170532.krmc4t5y4zb5jt3m@yuggoth.org> Message-ID: <084b2e8f-910b-9923-23be-5e97ce5c4d7c@openstack.org> Mark Goddard wrote: >> [Excerpt from Thierry's original post yesterday...] >> >>>> The "final" release would be marked by creating a release >>>> (stable) branch, and that would need to be done before a >>>> deadline. Like today, that deadline depends on whether that >>>> deliverable is a library, a client library, a release-trailing >>>> exception or just a regular part of the common release. >>>> >>>> The main change this proposal introduces would be to stop having >>>> release candidates at the end of the cycle. Instead we would >>>> produce a release, which would be a candidate for inclusion in >>>> the coordinated OpenStack release. >> >> For service projects, that "deadline" he talks about would be the >> start of the traditional RC period, we just wouldn't use special rc1 >> tags for branching at that point, we'd use actual version numbers to >> branch from. I think the proposal has probably confused some folks >> by saying, "stop having release candidates [...and instead have a] >> candidate for inclusion in the coordinated OpenStack release." It >> would basically still be a "release candidate" in spirit, just not >> in name, and not using the same tagging scheme as we have >> traditionally used for release candidates of service projects. > > I think this is reading between the lines somewhat. 
I agree that it > makes sense however, and preserves the period before release during > which every deliverable to be included in GA should have a release > available. To clarify, the intent is definitely to keep a stabilization period between the moment the branch is created and the "final" coordinated release date. We currently do it by asking projects to release an "RC1" which coincides with the branching, and then have them iterate on tagging further RCx versions (usually just one). The last available version at coordinated release date is re-tagged as a "normal" release number. With the unified model, projects would create the stabilization branch when they feel like it, then iterate on tagging versions (usually just one). The last available version at coordinated release date is considered included in the "OpenStack" release, without the need for a re-tag. Also to clarify, we are /already/ using that model for all cycle-with-intermediary deliverables like Swift or Ironic or Karbor or Vitrage or Monasca or CloudKitty or Horizon... It's not as if it was a new thing that would break packagers' workflows. They already support it. If anything, it simplifies the workflow by making everything available ahead of time, rather than artificially triggering a bunch of new versions on release day as we re-tag the final RCs into correct release numbers. -- Thierry Carrez (ttx) From sundar.nadathur at intel.com Thu Jun 11 11:04:12 2020 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Thu, 11 Jun 2020 11:04:12 +0000 Subject: [cyborg][neutron][nova] Networking support in Cyborg Message-ID: Hi all, Based on the Victoria PTG discussion [1], here's a stab at making some aspects of the Nova-Neutron-Cyborg interaction more concrete. * Background: A smart NIC may have a single 'device' that combines the accelerator and the NIC, or two (or more) components in a single PCI card, with separate accelerator and NIC components.
* What we said in the PTG: We should model the smart NIC as a single RP representing the combined accelerator/NIC for the first case. For the second case, we could have a hierarchy with separate RPs for the accelerator and the NICs, and a top-level resource-less RP which aggregates all the children RPs and combines their traits. (Correspondingly, Cyborg may represent it as a single object, which we call a Deployable, or as a hierarchy of such objects. There is already support in Cyborg for creating a tree of such objects, though it may need validation for this use case.) * Who creates these RPs? I suggest Cyborg create them in all cases, to keep it uniform. Neutron creates RPs today for the bandwidth provider. But, if different services create RPs depending on which feature is enabled and whether it is a single/multi-component device, that can get complex and problematic. So, could we discuss the possibility of Neutron not creating the RP? The admin should not configure Neutron to handle such NICs. * Ideally, the admin should be able to formulate the device profile in the same way, independent of whether it is a single-component or multi-component device. For that, the device profile must have a single resource group that includes the resources, traits and Cyborg properties for both the accelerator and NIC. The device profile for a Neutron port will presumably have only one request group. So, the device profile would look something like this:

{ "name": "my-smartnic-dp",
  "groups": [{
      "resources:FPGA": "1",
      "resources:CUSTOM_NIC_X": "1",
      "trait:CUSTOM_FPGA_REGION_ID_FOO": "required",
      "trait:CUSTOM_NIC_TRAIT_BAR": "required",
      "trait:CUSTOM_PHYSNET_VLAN3": "required",
      "accel:bitstream_id": "3AFE"
  }]
}

Having a single resource group for resources/traits for both accelerator and NIC would ensure that a single RP would provide all those resources, thus ensuring resource co-location in the same device.
(If they were separate request groups, there is no way to ensure that the resources come from a single RP, even if we set group_policy to None.) * During ARQ binding, Cyborg would still get a single RP as today. In the case of a multi-component device, Cyborg would translate that to the top-level Deployable object, and figure out what constituent components are present. For this scheme to work, it is important that the resource classes and traits for the accelerator RP and the NIC RP be totally disjoint (no overlapping resource classes or traits). * We discussed the physnet trait at the PTG. My suggestion is to keep Cyborg out of this, and out of networking in general, if possible. [1] https://etherpad.opendev.org/p/nova-victoria-ptg "Cyborg-Nova" Lines 104-164 Regards, Sundar From smooney at redhat.com Thu Jun 11 11:31:17 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 11 Jun 2020 12:31:17 +0100 Subject: [cyborg][neutron][nova] Networking support in Cyborg In-Reply-To: References: Message-ID: On Thu, 2020-06-11 at 11:04 +0000, Nadathur, Sundar wrote: > Hi all, > Based on the Victoria PTG discussion [1], here's a stab at making some aspects of the Nova-Neutron-Cyborg > interaction more concrete. > > * Background: A smart NIC may have a single 'device' that combines the accelerator and the NIC, or two (or more) > components in a single PCI card, with separate accelerator and NIC components. > > * What we said in the PTG: We should model the smart NIC as a single RP representing the combined accelerator/NIC for > the first case. For the second case, we could have a hierarchy with separate RPs for the accelerator and the NICs, and > a top-level resource-less RP which aggregates all the children RPs and combines their traits. (Correspondingly, Cyborg > may represent it as a single object, which we call a Deployable, or as a hierarchy of such objects. 
There is already > support in Cyborg for creating a tree of such objects, though it may need validation for this use case.) > > * Who creates these RPs? I suggest Cyborg create it in all cases, to keep it uniform. Neutron creates RPs today for > the bandwidth provider. But, if different services create RPs depend on which feature is enabled and whether it is a > single/multi-component device, that can get complex and problematic. So, could we discuss the possibility of Neutron > not creating the RP? The admin should not configure Neutron to handle such NICs. > > * Ideally, the admin should be able to formulate the device profile in the same way, independent of whether it is a > single-component or multi-component device. For that, the device profile must have a single resource group that > includes the resource, traits and Cyborg properties for both the accelerator and NIC. The device profile for a Neutron > port will presumably have only one request group. So, the device profile would look something like this: > > { "name": "my-smartnic-dp", > "groups": [{ > "resources:FPGA": "1", > "resources:CUSTOM_NIC_X": "1", > "trait:CUSTOM_FPGA_REGION_ID_FOO": "required", > "trait:CUSTOM_NIC_TRAIT_BAR": "required", > "trait:CUSTOM_PHYSNET_VLAN3": "required", > "accel:bitstream_id": "3AFE" > }] > } Having "trait:CUSTOM_PHYSNET_VLAN3": "required" in the device profile means you have to create a separate device profile with the same details for each physnet, and the user then needs to find the profile that matches their neutron network's physnet, which is also problematic if they use the multiprovidernet extension. So we should keep the physnet separate and have nova or neutron append it when we make the placement query. > > Having a single resource group for resources/traits for both accelerator and NIC would ensure that a single RP would > provide all those resources, thus ensuring resource co-location in the same device.
That single RP could be the top- > level RP of a hierarchy. (If they were separate request groups, there is no way to ensure that the resources come from > a single RP, even if we set group_policy to None.) > > * During ARQ binding, Cyborg would still get a single RP as today. In the case of a multi-component device, Cyborg > would translate that to the top-level Deployable object, and figure out what constituent components are present. For > this scheme to work, it is important that the resource classes and traits for the accelerator RP and the NIC RP be > totally disjoint (no overlapping resource classes or traits). > > * We discussed the physnet trait at the PTG. My suggestion is to keep Cyborg out of this, and out of networking in > general, if possible. Well, this feature is more or less the opposite of that intent, but I get that you don't want cyborg to have to configure the networking attributes of the interface. > > > [1] https://etherpad.opendev.org/p/nova-victoria-ptg "Cyborg-Nova" Lines 104-164 > > Regards, > Sundar > > From ltoscano at redhat.com Thu Jun 11 12:05:18 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 11 Jun 2020 14:05:18 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <084b2e8f-910b-9923-23be-5e97ce5c4d7c@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <084b2e8f-910b-9923-23be-5e97ce5c4d7c@openstack.org> Message-ID: <19655737.0c2gjJ1VT2@whitebase.usersys.redhat.com> On Thursday, 11 June 2020 12:23:27 CEST Thierry Carrez wrote: > Mark Goddard wrote: > >> [Excerpt from Thierry's original post yesterday...] > >> > >>>> The "final" release would be marked by creating a release > >>>> (stable) branch, and that would need to be done before a > >>>> deadline. Like today, that deadline depends on whether that > >>>> deliverable is a library, a client library, a release-trailing > >>>> exception or just a regular part of the common release.
> >>>> > >>>> The main change this proposal introduces would be to stop having > >>>> release candidates at the end of the cycle. Instead we would > >>>> produce a release, which would be a candidate for inclusion in > >>>> the coordinated OpenStack release. > >> > >> For service projects, that "deadline" he talks about would be the > >> start of the traditional RC period, we just wouldn't use special rc1 > >> tags for branching at that point, we'd use actual version numbers to > >> branch from. I think the proposal has probably confused some folks > >> by saying, "stop having release candidates [...and instead have a] > >> candidate for inclusion in the coordinated OpenStack release." It > >> would basically still be a "release candidate" in spirit, just not > >> in name, and not using the same tagging scheme as we have > >> traditionally used for release candidates of service projects. > > > > I think this is reading between the lines somewhat. I agree that it > > makes sense however, and preserves the period before release during > > which every deliverable to be included in GA should have a release > > available. > > To clarify, the intent is definitely to keep a stabilization period > between the moment the branch is created and the "final" coordinated > release date. > > We currently do it by asking projects to release a "RC1" which coincides > with the branching, and then have them iterate on tagging further RCx > versions (usually just one). The last available version at coordinated > release date is re-tagged as a "normal" release number. > > With the unified model, projects would create the stabilization branch > when they feel like it, then iterate on tagging versions (usually just > one). The last available version at coordinated release date is > considered included in the "OpenStack" release, without the need for a > re-tag. 
> > Also to clarify, we are /already/ using that model for all > cycle-with-intermediary deliverables like Swift or Ironic or Karbor or > Vitrage or Monasca or CloudKitty or Horizon... It's not as if it was a > new thing that would break packagers workflows. They already support it. Can we at least enforce a rule that when tagging the "OpenStack" release, the y number should be increased? Bonus points for having the stabilization (RC) release use y=0, and the stable one starts from y=1. I know it may not sound too important for a computer, but it's useful for a human eye to know that the first release has a final 0 somewhere. (I really dislike the apache and mysql model in this regard) Ciao -- Luigi From fungi at yuggoth.org Thu Jun 11 12:16:34 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Jun 2020 12:16:34 +0000 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <19655737.0c2gjJ1VT2@whitebase.usersys.redhat.com> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <084b2e8f-910b-9923-23be-5e97ce5c4d7c@openstack.org> <19655737.0c2gjJ1VT2@whitebase.usersys.redhat.com> Message-ID: <20200611121633.tabzk6labz372qc3@yuggoth.org> On 2020-06-11 14:05:18 +0200 (+0200), Luigi Toscano wrote: [...] > Can we at least enforce a rule that when tagging the "OpenStack" > release, the y number should be increased? Bonus points for having > the stabilization (RC) release use y=0, and the stable one starts > from y=1. > > I know it may not sound to important for a computer, but it's > useful for a human eye to know that the first release has a final > 0 somewhere? (I really dislike the apache and mysql model in this > regard) As pointed out, this already isn't the case for the many projects currently following this model, especially those which branch earlier in the cycle. It's also a bit of a puzzle...
basically every new release you tag on the stable branch could be your final release, so how do you predict in advance that you won't need additional patches? I guess you could make every new tag in stable prior to the release a semantic versioning "feature" addition (even though it's really just patch level fixes), so you branch at 14.0.0, and then decide you need some fixes and tag 14.1.0, and then realize you need more fixes so tag 14.2.0, but then after the coordinated release you only increase the patch level of the version to 14.2.1, 14.2.2 and so on. Is that basically what you're suggesting? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Jun 11 12:20:02 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 11 Jun 2020 07:20:02 -0500 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <19655737.0c2gjJ1VT2@whitebase.usersys.redhat.com> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <084b2e8f-910b-9923-23be-5e97ce5c4d7c@openstack.org> <19655737.0c2gjJ1VT2@whitebase.usersys.redhat.com> Message-ID: <79fb695c-2cf9-305c-a159-96ed048a592f@gmx.com> >> To clarify, the intent is definitely to keep a stabilization period >> between the moment the branch is created and the "final" coordinated >> release date. >> >> We currently do it by asking projects to release a "RC1" which coincides >> with the branching, and then have them iterate on tagging further RCx >> versions (usually just one). The last available version at coordinated >> release date is re-tagged as a "normal" release number. >> >> With the unified model, projects would create the stabilization branch >> when they feel like it, then iterate on tagging versions (usually just >> one). 
The last available version at coordinated release date is >> considered included in the "OpenStack" release, without the need for a >> re-tag. >> >> Also to clarify, we are /already/ using that model for all >> cycle-with-intermediary deliverables like Swift or Ironic or Karbor or >> Vitrage or Monasca or CloudKitty or Horizon... It's not as if it was a >> new thing that would break packagers workflows. They already support it. > Can we at least enforce a rule that when tagging the "OpenStack" release, the > y number should be increased? Bonus points for having the stabilization (RC) > release use y=0, and the stable one starts from y=1. > > I know it may not sound to important for a computer, but it's useful for a > human eye to know that the first release has a final 0 somewhere? (I really > dislike the apache and mysql model in this regard) > > Ciao One of the benefits of Thierry's proposal is it would eliminate the need to retag the last release in order to get the final coordinated release version. This would reintroduce the need to do that. So instead of retagging something like 12.0.0.0rc2 to be 12.0.0, we would be retagging 12.0.0 to be 12.0.1. If we think that is important, I would rather keep the RC designation on those stabilization releases to make sure it's clear what is officially ready, and what is in preparation to be deemed officially ready. One other challenge I see with getting rid of RCs would be what would be allowed to be merged. Only occasionally, but it does happen, we need to merge something during the RC period that would be considered a breaking change. We find a bug that requires reverting a patch that added a config option. Or we need to fix a bug by changing the default value for a config option. Or some other weird major change. 
If we are strictly enforcing SemVer rules, that could mean that a project could release a final end of cycle 12.0.0 release, then within a week or two need to release a 13.0.0 release because of that one breaking change. They are just numbers to convey what is included, so it's not like that is not possible to do. But I think that could cause confusion for downstream and potentially could cause issues with someone picking up that 12.0.0 major release thinking it is legitimate, not realizing that 13.0.0 is actually a fix for some issue in it. Again, minor concerns, but something we should be aware of thinking through how things would work. Sean From sundar.nadathur at intel.com Thu Jun 11 12:24:45 2020 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Thu, 11 Jun 2020 12:24:45 +0000 Subject: [cyborg][neutron][nova] Networking support in Cyborg In-Reply-To: References: Message-ID: Hi Sean, > From: Sean Mooney > Sent: Thursday, June 11, 2020 4:31 AM > On Thu, 2020-06-11 at 11:04 +0000, Nadathur, Sundar wrote: > > [...] > > * Ideally, the admin should be able to formulate the device profile in > > the same way, independent of whether it is a single-component or > > multi-component device. For that, the device profile must have a > > single resource group that includes the resource, traits and Cyborg > properties for both the accelerator and NIC. The device profile for a Neutron > port will presumably have only one request group. 
So, the device profile > would look something like this: > > > > { "name": "my-smartnic-dp", > > "groups": [{ > > "resources:FPGA": "1", > > "resources:CUSTOM_NIC_X": "1", > > "trait:CUSTOM_FPGA_REGION_ID_FOO": "required", > > "trait:CUSTOM_NIC_TRAIT_BAR": "required", > > "trait:CUSTOM_PHYSNET_VLAN3": "required", > > "accel:bitstream_id": "3AFE" > > }] > > } > having "trait:CUSTOM_PHYSNET_VLAN3": "required", in the device profile > means you have to create a seperate device profile with the same details for > each physnet and the user then need to fine the profile that matches there > neutron network's physnet which is also problematic if they use the > multiprovidernet extention. > so we shoud keep the physnet seperate and have nova or neutorn append > that when we make the placment query. True, we did discuss this at the PTG, and I agree. The physnet can be passed in from the command line during port creation. > > [...] > > * We discussed the physnet trait at the PTG. My suggestion is to keep > > Cyborg out of this, and out of networking in general, if possible. > well this feature is more or less the opisite of that intent but i get that you > dont want cyborg to have to confiure the networking atribute of the interface. The admin could apply the trait to the right RP. Or, the OpenStack installer could automate this. That's similar in spirit to having the admin configure the physnet in PCI whitelist. 
Regards, Sundar From ltoscano at redhat.com Thu Jun 11 12:44:19 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 11 Jun 2020 14:44:19 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <20200611121633.tabzk6labz372qc3@yuggoth.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <19655737.0c2gjJ1VT2@whitebase.usersys.redhat.com> <20200611121633.tabzk6labz372qc3@yuggoth.org> Message-ID: <2747553.2VHbPRQshP@whitebase.usersys.redhat.com> On Thursday, 11 June 2020 14:16:34 CEST Jeremy Stanley wrote: > On 2020-06-11 14:05:18 +0200 (+0200), Luigi Toscano wrote: > [...] > > > Can we at least enforce a rule that when tagging the "OpenStack" > > release, the y number should be increased? Bonus points for having > > the stabilization (RC) release use y=0, and the stable one starts > > from y=1. > > > > I know it may not sound to important for a computer, but it's > > useful for a human eye to know that the first release has a final > > 0 somewhere? (I really dislike the apache and mysql model in this > > regard) > > As pointed out, this already isn't the case for the many projects > currently following this model, especially those which branch > earlier in the cycle. It's also a bit of a puzzle... basically every > new release you tag on the stable branch could be your final > release, so how do you predict in advance that you won't need > additional patches? We are doing time-based releases, so we know which is the last version. We know it now (and we call it something.other.0), and the same would hold if we moved forward with this exact proposal (but it would be called something.other.somethingelse). Just make sure you bump y too and still use .0 for that specific release. Of course there will be other bugfixes, exactly like we have now. I'm just talking about still retaining a way to more easily identify the first official tagged version for a specific coordinated OpenStack release.
> I guess you could make every new tag in stable prior to the release > a semantic versioning "feature" addition (even though it's really > just patch level fixes), so you branch at 14.0.0, and then decide > you need some fixes and tag 14.1.0, and then realize you need more > fixes so tag 14.2.0, but then after the coordinated release you only > increase the patch level of the version to 14.2.1, 14.2.2 and so on. > Is that basically what you're suggesting? You may need to also increase .y after the coordinated release (or so it has happened), so I am not sure it could be done. But as I said I'm focusing on the 1st release. That said, do projects which tag early need to increase .y in the time between their first release for a certain cycle and the first official coordinated release? Is the time so long that it may happen? If it's not the case, then I believe that: 14.0.whatever -> during rc time 14.1.0 -> official release 14.y.z, y>=1 rest of lifecycle It means an extra tag if there are no changes between the tagging of 14.0.0 and the tagging of the coordinated release, but I guess that case can be automated. -- Luigi From balazs.gibizer at est.tech Thu Jun 11 13:14:07 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Thu, 11 Jun 2020 15:14:07 +0200 Subject: [cyborg][neutron][nova] Networking support in Cyborg In-Reply-To: References: Message-ID: On Thu, Jun 11, 2020 at 11:04, "Nadathur, Sundar" wrote: > Hi all, > Based on the Victoria PTG discussion [1], here's a stab at making > some aspects of the Nova-Neutron-Cyborg interaction more concrete. > > * Background: A smart NIC may have a single 'device' that combines > the accelerator and the NIC, or two (or more) components in a single > PCI card, with separate accelerator and NIC components. > [snip] > * Who creates these RPs? I suggest Cyborg create it in all cases, to > keep it uniform. Neutron creates RPs today for the bandwidth > provider.
But, if different services create RPs depend on which > feature is enabled and whether it is a single/multi-component device, > that can get complex and problematic. So, could we discuss the > possibility of Neutron not creating the RP? The admin should not > configure Neutron to handle such NICs. Neutron today creates the RP only if bandwidth is configured in [ovs]/resource_provider_bandwidths or [sriov_nic]/resource_provider_bandwidths. So as an initial step you can state that smartNIC does not support QoS minimum bandwidth policy rules and therefore require that the admin not set the above neutron configurations. However I think in the long term we would like to support QoS minimum bandwidth rules, so either we have to find a way that the neutron-created RP could coexist with the Cyborg proposal, or Cyborg needs to grow support for QoS for smartNIC on its own. [snip] Cheers, gibi From fungi at yuggoth.org Thu Jun 11 13:16:29 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 11 Jun 2020 13:16:29 +0000 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <2747553.2VHbPRQshP@whitebase.usersys.redhat.com> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <19655737.0c2gjJ1VT2@whitebase.usersys.redhat.com> <20200611121633.tabzk6labz372qc3@yuggoth.org> <2747553.2VHbPRQshP@whitebase.usersys.redhat.com> Message-ID: <20200611131628.k2bz3fjyxzlpysrp@yuggoth.org> On 2020-06-11 14:44:19 +0200 (+0200), Luigi Toscano wrote: [...] > We are doing time-based release, so we know which is the last > version. We know it now (and we call it something.other.0), it > would be in the case we moved forward with this exact proposal > (but it would be called something.other.somethingelse). How about a concrete example from the last cycle... Horizon. At the time of the Train coordinated release, the latest version of Horizon was 16.0.0 (since then Horizon has tagged 16.1.0 and 16.2.0 in their stable/train branch).
In their master branch during the Ussuri cycle, Horizon made some backward-incompatible changes and tagged 17.0.0, then made feature changes and followed that with 17.1.0, then made some more backward-incompatible changes and tagged 18.0.0, then more feature changes for 18.1.0, 18.2.0, 18.3.0... at that point the end of the cycle was nearing so they branched stable/ussuri from the 18.3.0 tag, but still had some fixes in master to backport after that for the coordinated Ussuri release which resulted in 18.3.1 and 18.3.2 tags. At the time of the Ussuri coordinated release the latest version of Horizon in stable/ussuri was 18.3.2 so that's what was announced as part of the coordinated release. > Just make sure you bump y too and still use .0 for that specific > release. So to be clear, applying your suggestion to the Horizon example above, after they branched stable/ussuri they should only have done feature-level semantic versioning increases, tagging 18.4.0 and 18.5.0 instead of 18.3.1 and 18.3.2? > Of course there will be others bugfixes, exactly like we have now. > I'm just talking about still retaining a way to more easily > identify the first offiical tagged version for a specific > coordinated OpenStack release. [...] And in your opinion, 18.5.0 looks more official than 18.3.2 as a part of the coordinated release? Keep in mind that over the course of the Ussuri cycle, Horizon tagged no fewer than 6 new versions ending in .0 so are you similarly concerned that those might look "too official" when they were just marking points in master branch development? > You may need to also increase .y after the coordinated release (or > so it has happened), so not sure it could be done. But as I said > I'm focusing on the 1st release. 
Yes, projects who are concerned with doing that (remember that those following stable branch policy should *not* merge changes which would require a feature level semantic version increase in their stable branches) can artificially increase the first version component in master to make sure it will always be greater than any feature bumps in latest stable. > That said, do projects which tag early needs to increase .y in the > time between their first release for a certain cycle and the > official first coordinate release? Is the time so long that it may > happen? If it's not the case, then I believe that: > > 14.0.whatever -> during rc time > 14.1.0 -> official release > 14.y.z, y>=1 rest of lifecycle > > It means an extra tag if there are no changes between the tagging > of 14.0.0 and the tagging of the coodinated release, but I guess > that case can be automated. It already is automated with the re-tagging of the latest rcN version to some release version in cycle-with-rc projects today, and this re-tagging dance is a big part of what we're hoping to do away with to simplify the release process. If it really is important that the coordinated release versions have a .0 on the end of them, then I'd rather just have a policy that all subsequent versions tagged in stable branches before release day must be feature level semantic version increases rather than patch level. At least then nothing needs to be re-tagged. Granted, I find this slightly obsessive, and don't understand what's wrong with the many, many versions we include in our coordinated releases already which don't end in a .0 patch level. It also makes it harder to identify actual feature changes in libraries which need requirements freeze exceptions, and so will likely mean a lot more work on the requirements team to figure out if a new lib version is actually safe at that time (bug fixes masquerading as a feature increase). 
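The version ordering this thread keeps returning to can be made concrete with a small sketch. This is plain tuple comparison, not the real release tooling (which follows PEP 440), and the rc handling is deliberately simplified: release candidates of a version sort before the final, and any later patch or feature tag sorts after it:

```python
# Minimal sketch of the version ordering discussed in this thread.
# Simplified: only handles "x.y.z" finals and "x.y.z.0rcN"-style candidates.

def parse(version):
    # "12.0.0.0rc2" -> (12, 0, 0, 2); a final release ranks above any rc
    # of the same x.y.z, so it gets an infinite rc rank.
    if "rc" in version:
        head, rc = version.split("rc")
        nums = [int(p) for p in head.strip(".").split(".")][:3]
        return tuple(nums) + (int(rc),)
    nums = [int(p) for p in version.split(".")]
    return tuple(nums) + (float("inf"),)

tags = ["12.0.0", "12.0.0.0rc2", "12.0.0.0rc1", "12.0.1"]
print(sorted(tags, key=parse))
```

Sorting those tags gives `['12.0.0.0rc1', '12.0.0.0rc2', '12.0.0', '12.0.1']`, which is why re-tagging 12.0.0.0rc2 as 12.0.0 (or, in the proposed model, treating the latest stable tag as the release) keeps the ordering consistent either way.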
Your alternative, re-tagging them, means additional churn in requirements as well, at a rather disruptive time in the release cycle. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aschultz at redhat.com Thu Jun 11 13:30:02 2020 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 11 Jun 2020 07:30:02 -0600 Subject: [tripleo] Proposing Francesco Pantano as core on TripleO/Ceph In-Reply-To: <9d4cc3fc-2f6e-cd74-ec5c-413ba173913a@redhat.com> References: <9d4cc3fc-2f6e-cd74-ec5c-413ba173913a@redhat.com> Message-ID: +1 On Wed, Jun 10, 2020 at 8:30 AM Giulio Fidente wrote: > > Hi all, > > Francesco (fmount on freenode) started working on the Ceph integration > bits in TripleO more than a year ago now [1], contributing over time to > all components, heat templates, validations, puppet and ansible repos. > > He understood the tight relationship in between TripleO and ceph-ansible > and contributed directly to ceph-ansible as well, when necessary [2]. > > I think he'd be a great addition to the TripleO cores group and I hope > for him to work more in the future even outside the Ceph integration > efforts. > > I would like to propose Francesco core for the TripleO group on the Ceph > bits. > > Please vote here or leave any feedback for him. > > Thanks, > Giulio > > 1. > https://review.opendev.org/#/q/owner:fpantano%2540redhat.com+status:merged > > 2. https://github.com/ceph/ceph-ansible/commits?author=fmount > -- > Giulio Fidente > GPG KEY: 08D733BA > > From Sathia.Nadarajah.2 at team.telstra.com Thu Jun 11 01:45:57 2020 From: Sathia.Nadarajah.2 at team.telstra.com (Nadarajah, Sathia) Date: Thu, 11 Jun 2020 01:45:57 +0000 Subject: Idempotency for multiple nics In-Reply-To: References: Message-ID: Hi Monty, Thanks for the prompt reply. Would this change be factored into openstacksdk 0.47.0 or ansible version later than 2.9.9 ? 
This is our current ansible version used. (mypy3) [root at ansnvlonls01 bin]# ansible --version ansible 2.9.9 config file = /etc/ansible/ansible.cfg configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /var/lib/awx/venv/mypy3/lib64/python3.6/site-packages/ansible executable location = /var/lib/awx/venv/mypy3/bin/ansible python version = 3.6.9 (default, Sep 11 2019, 16:40:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] Thanks. Regards, Sathia -----Original Message----- From: Monty Taylor Sent: Wednesday, 10 June 2020 10:44 PM To: Nadarajah, Sathia Cc: openstack-discuss at lists.openstack.org; kevin at cloudnull.com Subject: Re: Idempotency for multiple nics Importance: High [External Email] This email was sent from outside the organisation – be cautious, particularly with links and attachments. > On Jun 10, 2020, at 1:35 AM, Nadarajah, Sathia wrote: > > Hi All, > > We are facing the exact issue that is outlined here, > > “When a server is created with multiple nics which holds security groups, module was applying default SG to the server & remove SG on the nics.” > > https://github.com/ansible/ansible/pull/58509 > https://github.com/ansible/ansible/issues/58495 > > And we are on the following version of openstacksdk > > (ansible) [root at ansnvlonls01 bin]# source > /var/lib/awx/venv/mypy3/bin/activate > (mypy3) [root at ansnvlonls01 bin]# pip show openstacksdk > Name: openstacksdk > Version: 0.46.0 > Summary: An SDK for building applications to work with OpenStack > Home-page: https://docs.openstack.org/openstacksdk/ > Author: OpenStack > Author-email: openstack-discuss at lists.openstack.org > License: UNKNOWN > Location: /var/lib/awx/venv/mypy3/lib/python3.6/site-packages > Requires: netifaces, iso8601, keystoneauth1, os-service-types, PyYAML, > cryptography, six, requestsexceptions, decorator, pbr, jmespath, > appdirs, munch, jsonpatch, dogpile.cache > > When can we expect to have 
this fix factored into openstacksdk ? > It isn’t really an openstacksdk issue, it’s an issue in the ansible modules. I’ve submitted this: https://review.opendev.org/734810 to the ansible-collections-openstack repo. > Thanks. > > Regards, > > Sathia Nadarajah > Security and Enterprise Engineering CH2 Networks and IT, Telstra > P   0386946619 > M 0437302281 > E   Sathia.Nadarajah.2 at team.telstra.com > W www.telstra.com From zhang.lei.fly at gmail.com Thu Jun 11 09:05:46 2020 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Thu, 11 Jun 2020 17:05:46 +0800 Subject: [ironic][centos]grubaa64.efi request different filename Message-ID: Hey guys, I am testing standalone ironic on an arm64 node with CentOS 7. dnsmasq is configured as follows: ``` enable-tftp tftp-root=/var/lib/ironic/public/boot/tftp # dhcp-option=option:router,192.168.122.1 # use static dhcp-range=10.0.0.167,static,60s log-queries log-dhcp dhcp-match=set:efi-arm64,option:client-arch,11 dhcp-boot=tag:efi-arm64,grubaa64.efi ``` The grubaa64.efi file comes from `/boot/efi/EFI/centos/grubaa64.efi` on CentOS 7. But it seems grubaa64.efi tries different grub.cfg filenames like `grub.cfg-xx-xx-xx-xx` (see below), whereas ironic generates filenames like `xx:xx:xx:xx:xx.conf`. Is this a bug in ironic, or did I do something wrong?
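The log below shows the search order the reporter is hitting: grub first asks for `grub.cfg-01-<mac-with-dashes>` (the `01` is the Ethernet hardware type it prepends to the MAC), then for the hex-encoded IP progressively shortened, then plain `grub.cfg`. As a purely illustrative workaround sketch, assuming ironic really did write the config as `<mac>.conf` as described, the ironic-style name could be linked to the first name grub requests; whether that is the right fix (versus adjusting ironic's grub configuration template) is a separate question:

```python
# Illustrative only: map an ironic-style "xx:xx:xx:xx:xx:xx.conf" name to
# the "grub.cfg-01-xx-xx-xx-xx-xx-xx" name grubaa64.efi requests first
# (see the TFTP log: "01" is the Ethernet hardware type prefix).
import os

def grub_name_for(mac):
    return "grub.cfg-01-" + mac.lower().replace(":", "-")

def link_config(tftp_root, mac):
    src = mac + ".conf"       # what ironic reportedly wrote
    dst = grub_name_for(mac)  # what grub asks for
    os.symlink(src, os.path.join(tftp_root, dst))
    return dst

print(grub_name_for("52:54:00:EA:56:F2"))
```

For the node in the log this produces `grub.cfg-01-52-54-00-ea-56-f2`, matching the first filename dnsmasq-tftp reports as "not found".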
```
dnsmasq-dhcp[1]: 140425456 vendor class: PXEClient:Arch:00011:UNDI:003000
dnsmasq-dhcp[1]: 140425456 DHCPREQUEST(eth1) 10.0.0.171 52:54:00:ea:56:f2
dnsmasq-dhcp[1]: 140425456 tags: known, efi-arm64, eth1
dnsmasq-dhcp[1]: 140425456 DHCPACK(eth1) 10.0.0.171 52:54:00:ea:56:f2 pxe-uefi
dnsmasq-dhcp[1]: 140425456 requested options: 1:netmask, 2:time-offset, 3:router, 4, 5,
dnsmasq-dhcp[1]: 140425456 requested options: 6:dns-server, 12:hostname, 13:boot-file-size,
dnsmasq-dhcp[1]: 140425456 requested options: 15:domain-name, 17:root-path, 18:extension-path,
dnsmasq-dhcp[1]: 140425456 requested options: 22:max-datagram-reassembly, 23:default-ttl,
dnsmasq-dhcp[1]: 140425456 requested options: 28:broadcast, 40:nis-domain, 41:nis-server,
dnsmasq-dhcp[1]: 140425456 requested options: 42:ntp-server, 43:vendor-encap, 50:requested-address,
dnsmasq-dhcp[1]: 140425456 requested options: 51:lease-time, 54:server-identifier, 58:T1,
dnsmasq-dhcp[1]: 140425456 requested options: 59:T2, 60:vendor-class, 66:tftp-server, 67:bootfile-name,
dnsmasq-dhcp[1]: 140425456 requested options: 97:client-machine-id, 128, 129, 130, 131,
dnsmasq-dhcp[1]: 140425456 requested options: 132, 133, 134, 135
dnsmasq-dhcp[1]: 140425456 next server: 10.0.0.167
dnsmasq-dhcp[1]: 140425456 broadcast response
dnsmasq-dhcp[1]: 140425456 sent size: 1 option: 53 message-type 5
dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 54 server-identifier 10.0.0.167
dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 51 lease-time 2m
dnsmasq-dhcp[1]: 140425456 sent size: 13 option: 67 bootfile-name grubaa64.efi
dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 58 T1 1m
dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 59 T2 1m45s
dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 1 netmask 255.255.255.0
dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 28 broadcast 10.0.0.255
dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 3 router 10.0.0.167
dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 6 dns-server 10.0.0.167
dnsmasq-dhcp[1]: 140425456 sent size: 8 option: 12 hostname pxe-uefi
dnsmasq-tftp[1]: error 8 User aborted the transfer received from 10.0.0.171
dnsmasq-tftp[1]: failed sending /var/lib/ironic/public/boot/tftp/grubaa64.efi to 10.0.0.171
dnsmasq-tftp[1]: sent /var/lib/ironic/public/boot/tftp/grubaa64.efi to 10.0.0.171
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-01-52-54-00-ea-56-f2 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0000AB not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0000A not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0000 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A000 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A00 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-01-52-54-00-ea-56-f2 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0000AB not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0000A not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0000 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A000 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A00 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0 not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/command.lst not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/fs.lst not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/crypto.lst not found
dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/terminal.lst not found
```
---
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Jun 11 13:43:30 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 11 Jun 2020 14:43:30 +0100 Subject: [cyborg][neutron][nova] Networking support in Cyborg In-Reply-To: References: Message-ID: <4f31c35b3900ddae7e90c53cd4411dc8c4e5a55e.camel@redhat.com> On Thu, 2020-06-11 at 12:24 +0000, Nadathur, Sundar wrote: > Hi Sean, > > > From: Sean Mooney > > Sent: Thursday, June 11, 2020 4:31 AM > > > > On Thu, 2020-06-11 at 11:04 +0000, Nadathur, Sundar wrote: > > > [...] > > > * Ideally, the admin should be able to formulate the device profile in > > > the same way, independent of whether it is a single-component or > > > multi-component device. For that, the device profile must have a > > > single resource group that includes the resource, traits and Cyborg > > > > properties for both the accelerator and NIC. The device profile for a Neutron > > port will presumably have only one request group.
So, the device profile > > would look something like this:
> > > { "name": "my-smartnic-dp",
> > >   "groups": [{
> > >      "resources:FPGA": "1",
> > >      "resources:CUSTOM_NIC_X": "1",
> > >      "trait:CUSTOM_FPGA_REGION_ID_FOO": "required",
> > >      "trait:CUSTOM_NIC_TRAIT_BAR": "required",
> > >      "trait:CUSTOM_PHYSNET_VLAN3": "required",
> > >      "accel:bitstream_id": "3AFE"
> > >   }]
> > > }
> > having "trait:CUSTOM_PHYSNET_VLAN3": "required", in the device profile > > means you have to create a seperate device profile with the same details for > > each physnet and the user then need to fine the profile that matches there > > neutron network's physnet which is also problematic if they use the > > multiprovidernet extention. > > so we shoud keep the physnet seperate and have nova or neutorn append > > that when we make the placment query. > > True, we did discuss this at the PTG, and I agree. The physnet can be passed in from the command line during port > creation.
That is not how that works. When you create a neutron network with segmentation type vlan or flat, it is automatically assigned a segmentation_id and physnet. As an admin you can choose both, but as a tenant this is managed by neutron. Ignoring the multiprovidernet extension for a second, all vlan and flat networks have one physnet, and the port gets its physnet from the network it is created on. The multiprovidernet extension allows a single neutron provider network to have multiple physnets, but nova does not support that today. So nova can get the physnet from the port/network/segment and incorporate that in the placement request, but we can't pass it in during port creation. In general, tenants are not aware of physnets.
> > > > [...] > > > * We discussed the physnet trait at the PTG. My suggestion is to keep > > > Cyborg out of this, and out of networking in general, if possible.
> > > > well this feature is more or less the opisite of that intent but i get that you > > dont want cyborg to have to confiure the networking atribute of the interface. > > The admin could apply the trait to the right RP. Or, the OpenStack installer could automate this. That's similar in > spirit to having the admin configure the physnet in PCI whitelist.
Yes, they could. It's not a particularly good user experience, as it is quite tedious to do, but it is a viable option and likely sufficient for the initial work. An installer could automate it, but having to do it manually would not be ideal.
> > Regards, > Sundar From kaifeng.w at gmail.com Thu Jun 11 14:15:42 2020 From: kaifeng.w at gmail.com (Kaifeng Wang) Date: Thu, 11 Jun 2020 22:15:42 +0800 Subject: [ironic][centos]grubaa64.efi request different filename In-Reply-To: References: Message-ID: Hi Jeffrey, Different firmware may have different search paths. Ironic takes a simple approach: a main grub.cfg distributes a request to the correct configuration file using grub built-in variables. You can find the detailed steps here [1]. [1] https://docs.openstack.org/ironic/latest/install/configure-pxe.html#uefi-pxe-grub-setup // kaifeng On Thu, Jun 11, 2020 at 9:39 PM Jeffrey Zhang wrote: > hey guys, > > I am testing a standalone ironic node on arm64 node through centos 7. > Then dnsmasq is configured as following
>
> ```
> enable-tftp
> tftp-root=/var/lib/ironic/public/boot/tftp
>
> # dhcp-option=option:router,192.168.122.1
> # use static
> dhcp-range=10.0.0.167,static,60s
> log-queries
> log-dhcp
> dhcp-match=set:efi-arm64,option:client-arch,11
> dhcp-boot=tag:efi-arm64,grubaa64.efi
> ```
>
> the grubaa64.efi file come from `/boot/efi/EFI/centos/grubaa64.efi` on > centos 7 > > But seems grubaa64.efi file are trying different grub.cfg filename like > `grub.cfg-xx-xx-xx-xx`( check bellow) > Whereas ironic generate filename like `xx:xx:xx:xx:xx.conf`. Is this a bug > in ironic? or I made something wrong?
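(Side note for context: the "file ... not found" names in the quoted log that follows are GRUB's netboot config search order — first a MAC-keyed name, then the client IP rendered as uppercase hex and truncated one character at a time, and finally plain grub.cfg. A small illustrative sketch of that naming scheme — not ironic or GRUB code:)

```python
def grub_config_search_order(mac, ip):
    """Config file names a GRUB netboot binary tries, in order."""
    # MAC-keyed name first; the leading "01" is the Ethernet hardware type.
    names = ["grub.cfg-01-" + mac.lower().replace(":", "-")]
    # Then the client IP as uppercase hex, truncated one character at a time.
    hex_ip = "".join("%02X" % int(octet) for octet in ip.split("."))
    names += ["grub.cfg-" + hex_ip[:n] for n in range(len(hex_ip), 0, -1)]
    # Finally, plain grub.cfg.
    names.append("grub.cfg")
    return names

print(grub_config_search_order("52:54:00:ea:56:f2", "10.0.0.171")[:3])
# ['grub.cfg-01-52-54-00-ea-56-f2', 'grub.cfg-0A0000AB', 'grub.cfg-0A0000A']
```

ironic, on the other hand, generates per-node config files named after the MAC (`xx:xx:xx:xx:xx.conf`), which is why a master grub.cfg that maps one naming scheme onto the other (as described in [1]) is needed.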
> > ```
> dnsmasq-dhcp[1]: 140425456 vendor class: PXEClient:Arch:00011:UNDI:003000
> dnsmasq-dhcp[1]: 140425456 DHCPREQUEST(eth1) 10.0.0.171 52:54:00:ea:56:f2
> dnsmasq-dhcp[1]: 140425456 tags: known, efi-arm64, eth1
> dnsmasq-dhcp[1]: 140425456 DHCPACK(eth1) 10.0.0.171 52:54:00:ea:56:f2 pxe-uefi
> dnsmasq-dhcp[1]: 140425456 requested options: 1:netmask, 2:time-offset, 3:router, 4, 5,
> dnsmasq-dhcp[1]: 140425456 requested options: 6:dns-server, 12:hostname, 13:boot-file-size,
> dnsmasq-dhcp[1]: 140425456 requested options: 15:domain-name, 17:root-path, 18:extension-path,
> dnsmasq-dhcp[1]: 140425456 requested options: 22:max-datagram-reassembly, 23:default-ttl,
> dnsmasq-dhcp[1]: 140425456 requested options: 28:broadcast, 40:nis-domain, 41:nis-server,
> dnsmasq-dhcp[1]: 140425456 requested options: 42:ntp-server, 43:vendor-encap, 50:requested-address,
> dnsmasq-dhcp[1]: 140425456 requested options: 51:lease-time, 54:server-identifier, 58:T1,
> dnsmasq-dhcp[1]: 140425456 requested options: 59:T2, 60:vendor-class, 66:tftp-server, 67:bootfile-name,
> dnsmasq-dhcp[1]: 140425456 requested options: 97:client-machine-id, 128, 129, 130, 131,
> dnsmasq-dhcp[1]: 140425456 requested options: 132, 133, 134, 135
> dnsmasq-dhcp[1]: 140425456 next server: 10.0.0.167
> dnsmasq-dhcp[1]: 140425456 broadcast response
> dnsmasq-dhcp[1]: 140425456 sent size: 1 option: 53 message-type 5
> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 54 server-identifier 10.0.0.167
> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 51 lease-time 2m
> dnsmasq-dhcp[1]: 140425456 sent size: 13 option: 67 bootfile-name grubaa64.efi
> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 58 T1 1m
> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 59 T2 1m45s
> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 1 netmask 255.255.255.0
> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 28 broadcast 10.0.0.255
> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 3 router 10.0.0.167
> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 6 dns-server 10.0.0.167
> dnsmasq-dhcp[1]: 140425456 sent size: 8 option: 12 hostname pxe-uefi
> dnsmasq-tftp[1]: error 8 User aborted the transfer received from 10.0.0.171
> dnsmasq-tftp[1]: failed sending /var/lib/ironic/public/boot/tftp/grubaa64.efi to 10.0.0.171
> dnsmasq-tftp[1]: sent /var/lib/ironic/public/boot/tftp/grubaa64.efi to 10.0.0.171
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-01-52-54-00-ea-56-f2 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0000AB not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0000A not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0000 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A000 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A00 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-01-52-54-00-ea-56-f2 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0000AB not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0000A not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0000 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A000 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A00 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0 not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/command.lst not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/fs.lst not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/crypto.lst not found
> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/terminal.lst not found
> ```
> ---
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
-------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Jun 11 14:17:57 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 11 Jun 2020 19:47:57 +0530 Subject: [glance] Weekly priorities Message-ID: Hi Glance Members, We are approaching milestone 1 which is just a week away. Before milestone 1 we need to review below specs on priority. sparse image upload - https://review.opendev.org/733157 Unified limits - https://review.opendev.org/729187 Image encryption - https://review.opendev.org/609667 Cinder store multiple stores support - https://review.opendev.org/695152 Kindly review the specs. Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Jun 11 15:03:14 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 11 Jun 2020 17:03:14 +0200 Subject: Openstack user surver - questions In-Reply-To: <272339AC-F0D8-4FD7-8D1C-34F166449962@openstack.org> References: <20200531155958.y3t2en2jydrdx3gx@skaplons-mac> <272339AC-F0D8-4FD7-8D1C-34F166449962@openstack.org> Message-ID: <20200611150314.gmdqw4saztz6q5ym@skaplons-mac> Hi, Thx for the info Allison.
On Sun, May 31, 2020 at 01:40:16PM -0500, Allison Price wrote: > Hi Slawek, > > Thanks for reaching out with your questions about the user survey! > > > > On May 31, 2020, at 10:59 AM, Slawek Kaplonski wrote: > > > > Hi, > > > > First of all, sorry if I didn't look for it long enough, but I have a couple of > > questions about the user survey and I couldn't find answers for them anywhere. > > > > 1. I was looking at [1] for some Neutron related data and I found only one > > question about used backends in "Deployment Decisions". The problem is that this > > graph is a bit unreadable for me due to the many things on the x-axis which overlap > > each other. Is there any place where I can find some "raw data" to check? > > The Foundation can pull raw data for you as long as the information remains anonymized and shared. Is the used backends question the only one you want data on? Or would you also like data on the percentage of users interested in, testing, and deploying Neutron? This data is also available in the analytics dashboard [1], but can often be hard to read as well. It would be great to see such anonymized data. Info about users who are testing or interested in some features would also be great. That may give us some hints about what to focus on during the next cycles. > > > > > 2. Another question about the same chart is: is there any way to maybe change > > possible replies in the survey for next years? I'm asking about that because I > > have a feeling that e.g. responses "Open vSwitch" and "ML2 - Open vSwitch" may > > be confusing for users. > > My understanding is that "Open vSwitch" simply means the old "Open vSwitch" core > > plugin instead of the ML2 plugin, but this old plugin was removed around the "Liberty" > > cycle so I really don't think that 37% of users are still using it. > > We try to keep most questions static from survey to survey for comparison reasons. However, if you think that some responses are confusing and can propose alternative language, we can consider that and make those changes. Ok, I understand. Is there any repo or other place where I can check all those possible answers? > > > > > 3. Is there any way to propose new, more detailed questions about e.g. Neutron? > > For example, what service plugins they are using. > > We have let each PTL add 1-2 optional questions at the end of the survey for respondents who indicated they were working with a particular project. The current Neutron question is: Which of the following features in the Neutron project are you actively using, interested in using or looking forward to using in your OpenStack deployment? That Neutron question is exactly what I want, but again, where can I find the answers to that question from the last survey? > > The current user survey cycle ends in late August. That is when we will circulate the anonymized results of this question to the openstack-discuss mailing list along with other project-specific questions. At that time, PTLs can let us know if they would like to change their question. > > Let me know if you have any other questions - happy to help! I’ll also be around this week during the PTG if you would like me to jump in and clarify anything.
> > Allison Price > IRC: aprice > > > > > [1] https://www.openstack.org/analytics > > > > -- > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > > -- Slawek Kaplonski Senior software engineer Red Hat From haleyb.dev at gmail.com Thu Jun 11 19:03:41 2020 From: haleyb.dev at gmail.com (Brian Haley) Date: Thu, 11 Jun 2020 15:03:41 -0400 Subject: [neutron] subnet policy for ip allocation In-Reply-To: References: Message-ID: <2c00396b-687a-ab9c-25f0-28a082bb81f9@gmail.com> On 6/11/20 4:46 AM, Ruslanas Gžibovskis wrote: > Hi team, > > I need that my IP addresses in subnet would not be allocated from the > beginning, but it would go to the next IP compared to previous used: > > instance1 created: 1.1.1.1 > Instance2 created: 1.1.1.2 > instance1 deleted. > instance2 deleted. > Instance3 created: 1.1.1.3 (not 1.1.1.1 again) > > I remember have read about such, do not remember how to google it... Hi Ruslanas, The Neutron IP allocator does not do sequential allocation any more, it will choose randomly from a set of available IPs in the subnet. So the odds of this happening are small, although not zero. The only way to change this would be to write your own IPAM driver to do allocation in a specific way if that is required. -Brian From pramchan at yahoo.com Thu Jun 11 23:41:55 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Thu, 11 Jun 2020 23:41:55 +0000 (UTC) Subject: [all][interopWG] Following up on approval of Interop Guidelines for Ussuri References: <1097166057.3042161.1591918915900.ref@mail.yahoo.com> Message-ID: <1097166057.3042161.1591918915900@mail.yahoo.com> Hi all, This is a follow up to send feedback to all of core teams  (Nova, Neutron, Cinder, Keystone, Glance) + (Swift) + (Add-ons DNS  Designate & Orchestration Heat). This was the patch which Mark submitted to kickstart the next refstack test for https://www.openstack.org/brand/openstack-powered/ A. 
We need to decide on content for "OpenStack Powered DNS" and "OpenStack Powered Orchestration" for this site: https://review.opendev.org/#/c/735159/ Community Reporting - we should start seeing new results for the 2020.06 program, and please let us know when you have something to share for Ussuri: https://refstack.openstack.org/#/community_results B. We need Marketplace vendors to validate their OpenStack Powered offers for Ussuri and send emails, which we will discuss at tomorrow's meeting. C. The meeting will be on IRC on Friday as usual (weekly, #openstack-interopwg at 17:00 UTC Friday / 10 AM PDT). We plan on using a Zoom call after the IRC logins if needed. The agenda will cover what was discussed at the Board meeting, follow-ups to close out Ussuri, and planning for committee formation for the future of interop. Thanks, Prakash / Mark / Ghanshyam, for the Interop WG -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Jun 12 03:29:51 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 11 Jun 2020 22:29:51 -0500 Subject: [nova][doc] nova pdf building failing on new sphinx 3.1.0 Message-ID: <172a6939d46.c7977846169208.342239994777984515@ghanshyammann.com> Hello Everyone, With the new release of Sphinx 3.1.0, nova pdf-docs building started failing due to a TeX memory issue (maybe this one - https://github.com/sphinx-doc/sphinx/issues/3099). I could not find the related change in 3.1.0 which could cause this issue. The issue is including the giant sample files (policy, for now) in the pdf doc. I am sure this was also discussed when we started pdf building in the Train cycle, but I do not know what the solution was. Luckily neutron is not facing this, even though their config sample files are as large as nova's. I have logged a bug[1] and provided a workaround[2] to unblock the nova gate. The workaround is to include a link to the sample file instead of its full content. Any thoughts on how to fix this?
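(For illustration, the "link instead of full content" workaround amounts to a small change in the policy document — a sketch only; the exact paths and file names here are assumptions, and the real change is the one in the workaround review:)

```rst
.. Before: inlining the whole generated sample, which blows up TeX memory:

.. literalinclude:: /_static/nova.policy.yaml.sample

.. After: carrying only a link to the sample file:

See the sample policy file: :download:`nova.policy.yaml.sample </_static/nova.policy.yaml.sample>`
```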
[1] https://bugs.launchpad.net/nova/+bug/1883200 [2] https://review.opendev.org/#/c/735279/ -gmann From thierry at openstack.org Fri Jun 12 09:16:11 2020 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 12 Jun 2020 11:16:11 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> Message-ID: <61c28590-8bd8-23b7-a532-7f4079f23c71@openstack.org> Thierry Carrez wrote: > As you know[1] I'm trying to push toward simplification of OpenStack > processes, to make them easier to navigate for new members of our > community and generally remove weight. A good example of that is release > models. [...] After having discussed this here and in several IRC discussions, there appears to still be enough cases warranting keeping two cycle-tied models (one with RCs and a round version number, the other strictly following semver). The simplification gains may not be worth disrupting long-established habits and tweaking all our validation toolchain. Instead, I'll work on improving documentation to guide new deliverables in this choice, and reduce corner cases and exceptions. Thanks for entertaining the idea and reaching out. Periodically reconsidering why we do things the way we do them is healthy, and avoids cargo-culting processes forever. -- Thierry Carrez (ttx) From xin-ran.wang at intel.com Fri Jun 12 10:23:47 2020 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Fri, 12 Jun 2020 10:23:47 +0000 Subject: [cyborg][neutron][nova] Networking support in Cyborg In-Reply-To: <4f31c35b3900ddae7e90c53cd4411dc8c4e5a55e.camel@redhat.com> References: <4f31c35b3900ddae7e90c53cd4411dc8c4e5a55e.camel@redhat.com> Message-ID: Hi all, I prefer that physnet related stuff is still managed by neutron, because it is a notion of neutron.
If we let Cyborg update these traits to Placement, what will we do if Neutron enables the bandwidth feature, and how do we know whether this feature is enabled or not? Can we just let Neutron always report the physnet traits? I am not very familiar with neutron; is there any gap? Otherwise, if Cyborg does need to report this to placement, my proposal is: Neutron will provide an interface that allows Cyborg to get the physnet trait/RP. If this feature is not configured, it will return 404, so Cyborg will know that neutron did not configure the bandwidth feature, and Cyborg can report everything by itself. If Neutron returns something meaningful, Cyborg should use the same RP and update the other traits on this RP. In this way, Cyborg and Neutron will use the same RP and keep consistency. Thanks, Xin-Ran -----Original Message----- From: Sean Mooney Sent: Thursday, June 11, 2020 9:44 PM To: Nadathur, Sundar ; openstack-discuss Subject: Re: [cyborg][neutron][nova] Networking support in Cyborg On Thu, 2020-06-11 at 12:24 +0000, Nadathur, Sundar wrote: > Hi Sean, > > > From: Sean Mooney > > Sent: Thursday, June 11, 2020 4:31 AM > > > > On Thu, 2020-06-11 at 11:04 +0000, Nadathur, Sundar wrote: > > > [...] > > > * Ideally, the admin should be able to formulate the device > > > profile in the same way, independent of whether it is a > > > single-component or multi-component device. For that, the device > > > profile must have a single resource group that includes the > > > resource, traits and Cyborg > > > > properties for both the accelerator and NIC. The device profile for > > a Neutron port will presumably have only one request group.
So, the > > device profile would look something like this: > > > > > > { "name": "my-smartnic-dp", > > > "groups": [{ > > > "resources:FPGA": "1", > > > "resources:CUSTOM_NIC_X": "1", > > > "trait:CUSTOM_FPGA_REGION_ID_FOO": "required", > > > "trait:CUSTOM_NIC_TRAIT_BAR": "required", > > > "trait:CUSTOM_PHYSNET_VLAN3": "required", > > > "accel:bitstream_id": "3AFE" > > > }] > > > } > > > > having "trait:CUSTOM_PHYSNET_VLAN3": "required", in the device > > profile means you have to create a seperate device profile with the > > same details for each physnet and the user then need to fine the > > profile that matches there neutron network's physnet which is also > > problematic if they use the multiprovidernet extention. > > so we shoud keep the physnet seperate and have nova or neutorn > > append that when we make the placment query. > > True, we did discuss this at the PTG, and I agree. The physnet can be > passed in from the command line during port creation. that is not how that works. when you create a neutron network with segmenation type vlan or flat it is automatically assigned a segmeantion_id and phsynet. As an admin you can chose both but as a tenant this is managed by neutron ignoring the multiprovidernet for a second all vlan and flat network have 1 phyesnet and the port get a phsynet form the network it is created on. the multiprovidernet extension allow a singlel neutron provider network to have multiple physnets but nova does not support that today. so nova can get the physnet from the port/network/segment and incorporate that in the placment request but we cant pass it in during port creation. in general tenants are not aware of physnets. > > > > [...] > > > * We discussed the physnet trait at the PTG. My suggestion is to > > > keep Cyborg out of this, and out of networking in general, if possible. 
> > > > well this feature is more or less the opisite of that intent but i > > get that you dont want cyborg to have to confiure the networking atribute of the interface. > > The admin could apply the trait to the right RP. Or, the OpenStack > installer could automate this. That's similar in spirit to having the admin configure the physnet in PCI whitelist. yes they could its not a partially good user experience as it quite tedious to do but yes it a viable option and likely sufficnet for the initial work. installer could automate it but having to do it manually would not be ideal. > > Regards, > Sundar From sean.mcginnis at gmx.com Fri Jun 12 12:01:35 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 12 Jun 2020 07:01:35 -0500 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <61c28590-8bd8-23b7-a532-7f4079f23c71@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <61c28590-8bd8-23b7-a532-7f4079f23c71@openstack.org> Message-ID: > After having discussed this here and in several IRC discussions, there > appears to still be enough cases warranting keeping two cycle-tied > models (one with RCs and a round version number, the other strictly > following semver). The simplification gains may not be worth > disrupting long-established habits and tweaking all our validation > toolchain. > > Instead, I'll work on improving documentation to guide new > deliverables in this choice, and reduce corner cases and exceptions. > > Thanks for entertaining the idea and reaching out. Periodically > reconsidering why we do things the way we do them is healthy, and > avoids cargo-culting processes forever. > Thanks for bringing up the idea Thierry. I agree, it's worth looking at what we're doing and why occasionally to make sure we're not doing things just because "that's what we do." 
I think some good feedback came out of all of this at least, so maybe we can still simplify some things, even if we can't fully collapse our release models. Sean From sean.mcginnis at gmx.com Fri Jun 12 13:19:44 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 12 Jun 2020 08:19:44 -0500 Subject: [release] Release countdown for week R-17, Jun 15 - Jun 19 Message-ID: <20200612131944.GA735753@sm-workstation> Development Focus ----------------- The victoria-1 milestone is next week, on June 18! Project team plans for the Victoria cycle should now be solidified. General Information ------------------- If you planned to change the release model for any of your deliverables this cycle, please remember to do so ASAP, before milestone-1. Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library which had changes but has not been otherwise released since the Ussuri release. PTL's or release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well, by posting a -1. If we do not hear anything at all by the end of the week, we will assume things are OK to proceed. NB: If one of your libraries is still releasing 0.x versions, start thinking about when it will be appropriate to do a 1.0 version. The version number does signal the state, real or perceived, of the library, so we strongly encourage going to a full major version once things are in a good and usable state. 
Upcoming Deadlines & Dates -------------------------- Victoria-1 milestone: June 18 Victoria-2 milestone: July 30 From mark at stackhpc.com Fri Jun 12 16:20:54 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 12 Jun 2020 17:20:54 +0100 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <61c28590-8bd8-23b7-a532-7f4079f23c71@openstack.org> Message-ID: On Fri, 12 Jun 2020 at 13:02, Sean McGinnis wrote: > > > > After having discussed this here and in several IRC discussions, there > > appears to still be enough cases warranting keeping two cycle-tied > > models (one with RCs and a round version number, the other strictly > > following semver). The simplification gains may not be worth > > disrupting long-established habits and tweaking all our validation > > toolchain. > > > > Instead, I'll work on improving documentation to guide new > > deliverables in this choice, and reduce corner cases and exceptions. > > > > Thanks for entertaining the idea and reaching out. Periodically > > reconsidering why we do things the way we do them is healthy, and > > avoids cargo-culting processes forever. > > > Thanks for bringing up the idea Thierry. I agree, it's worth looking at > what we're doing and why occasionally to make sure we're not doing > things just because "that's what we do." > > I think some good feedback came out of all of this at least, so maybe we > can still simplify some things, even if we can't fully collapse our > release models. I would be interested in a relaxation of the requirement for RC1 and stable branch cut to coincide, if possible. This would simplify the kolla release process. 
> > Sean > > From pramchan at yahoo.com Fri Jun 12 18:08:29 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 12 Jun 2020 18:08:29 +0000 (UTC) Subject: [Interop_WG] Today Fiday 6th June 10 AM PDT call In-Reply-To: <172a3937132.cddcd633148001.6957812506478624345@ghanshyammann.com> References: <1023999942.3265728.1591369582416.ref@mail.yahoo.com> <1023999942.3265728.1591369582416@mail.yahoo.com> <369587381.1529633.1591384677970@mail.yahoo.com> <4E313131-9595-48AE-9E45-29B6373B5F66@vmware.com> <1447261624.2544824.1591841233132@mail.yahoo.com> <172a3937132.cddcd633148001.6957812506478624345@ghanshyammann.com> Message-ID: <1570818328.333307.1591985309704@mail.yahoo.com> Hi all, Can you help me on irc, should we be using #openstack-interop or #openstack-interopwgWHich of this we will be able to use the bit to record meetings besides the call? #topic 1 - Review of content for "OpenStack Powered DNS" and "OpenStack Powered Orchestration"#info We need to update the site https://www.openstack.org/brand/openstack-powered/ #topic 2 - Marketplace Vendors to validate their OpenStack Powered offers for Ussuri This can be tested on https://refstack.openstack.org/#/community_results I don't see any tests being made by any one on 11th or 12 th June, any reasons, is there a transition goin on or there is some issue with refstack-client? #action Prakash to follow up with Mark & Ghanshyam on both web site update for Logo add-on programs and refstack-client working wrt Refstack portal #topic 3 - Follow ups to close for Usuri actions items #action Seek help from Staff to update the formality to make "one or more co-chairs" in B192#link https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=doc/source/process/2017A.rst;h=a8... 
#info updated the googledocs for interops on CinderV2 apis being removed , correction it should have appeared as deprecated in 2019.11 , no issues as we have now dropped it out in 2020.11 approval#link https://docs.google.com/document/d/1vf8rxKxuFoXmZCxfMIIfHm4TzK1kuANktXyUHiurMcE/edit?userstoinvite=s... topic 4 - Formation new committee for Future Interop work - we discuss this before OpenDev June 29 and help get approvals next Board meeting #info consider Conformance vs Compliance to keep off legal part for new certifications or Logo programs Q1. Data points from MarketPlace?Please educate us how we collect Marketplace data - How customers use the Marketplace? Can we get Data points from Marketplace? If so How? Any answers from TC community or Users Community folks? Q2. Does Tempest testing Infra provider/Vendors offerings occur in real time or you run a batch and send report to interop and who receives this test results? Appreciate all the help from especially Mark, Ghanshyam and we like to have all support for community for interop continuation. ThanksPrakash On Thursday, June 11, 2020, 06:31:21 AM PDT, Ghanshyam Mann wrote: Thanks Prakash for all your effort and energy. I can definitely help with guidelines and their testing part from Tempest point of view or you can count that as help from the community side. My concern is almost the same as Mark, I am involved in multiple areas so if interop requires more time then it might be an issue. As of now, I am ok to help to handle some code side part or as 2nd vice-chair hoping that would not take much bandwidth (i think we do not need 2nd vice-chair things as such :) but no strong opinion ). But the main issue for interop (i mentioned earlier also) is the maintainer for refstack and other repo. We really need some developers for that. 
-gmann ---- On Wed, 10 Jun 2020 21:07:13 -0500 prakash RAMCHANDRAN wrote ---- > Mark, > OK let me handle it as chair, but let's nominate Ghanshyam too as second vice-chair, and if we need to vote on it, let's do that. > I have submitted the change of IRC channel to #openstack-interop as tested last week, and will try to pile all logs therein with startmeeting and endmeeting weekly, to debate and arrive at how we proceed on this after tomorrow's presentation, which I would like both of you to attend. > https://review.opendev.org/#/c/735034/1 > If we need anything to merge the above, go ahead. > Arkady, as usual, is with me on the Board and is there to help us all, so we do have two from the Board and two from the TC and community. > So let me answer your latest email, and revise the presentation and web link for the Board accordingly. > Thanks, Prakash > > > > On Friday, June 5, 2020, 12:32:07 PM PDT, Mark Voelker wrote: > > > So let us wait and check with OpenStack staff how they can help us, if they can fix the web IRC/time links; if not, I will try to fix them over the weekend. > Well, that’s sort of the point of having the meeting info housed in a public repository: community members actually don’t need to go through the Foundation staff for things like this, they can just fix them with a few lines of text submitted via git!  =)  > If you’re not sure how to do that, let me know…it’ll only take a moment to submit a patch so I’m happy to help. > At Your Service, > Mark T. Voelker > > > On Jun 5, 2020, at 3:17 PM, prakash RAMCHANDRAN wrote: > Mark, > I see that using IRC does help take notes for weekly meetings, especially if there are frequent meetings (weekly, alternate weeks, etc.). > So let us wait and check with OpenStack staff how they can help us, if they can fix the web IRC/time links; if not, I will try to fix them over the weekend.
> Thanks, Prakash > On Friday, June 5, 2020, 11:49:22 AM PDT, Mark Voelker wrote: > > Prakash, > Quick suggestion: since it looks like you’re running these meetings on Zoom going forward, you may want to add the Zoom information to the upstream meetings calendar or alternately at least remove the old IRC meeting entry.  See the "IRC meetings schedule” section here for information: > http://eavesdrop.openstack.org/ > The file that you’d want to submit a patch for is: > https://opendev.org/opendev/irc-meetings/src/branch/master/meetings/interop-wg-meeting.yaml > (Note that the times presented in these files are in UTC rather than a local timezone.) > At Your Service, > Mark T. Voelker > > > On Jun 5, 2020, at 11:06 AM, prakash RAMCHANDRAN wrote: > Hi all, > Prakash Ramchandran is inviting you to a scheduled Zoom meeting. To keep a reliable Zoom link I have updated the weekly call to this one; see if you can join in spite of our ongoing PTG at 10 AM PDT. > Join from a browser: https://Dell.zoom.com/wc/join/99829924353 > Password: 101045 > Trying to consolidate what we got from PTG interactions to close the Draft updates. > https://etherpad.opendev.org/p/victoria-ptg-interop-wg > > > Closure next week for the Interop Guideline? > > What is the suggestion from the team for Prakash's Board presentation on June 11th? > Anything you'd like to add on the call? We can keep it short and try to close after the PTG next week. > Thanks, Prakash > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Jun 12 19:06:08 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 12 Jun 2020 12:06:08 -0700 Subject: [TC] [PTG] Victoria vPTG Summary of Conversations and Action Items Message-ID: Hello Everyone! I hope you all had a productive and enjoyable PTG! While it’s still reasonably fresh, I wanted to take a moment to summarize discussions and actions that came out of TC discussions.
If there is a particular action item you are interested in taking, please reply on this thread! For the long version, check out the etherpad from the PTG[1]. Tuesday ====== Ussuri Retrospective ---------------------------- As usual we accomplished a lot. Some of the things we accomplished were around enumerating operating systems per release (again), removing python2 support, and adding the ideas repository. Towards the end of the release, we had a lot of discussions around what to do with leaderless projects, the role of PTLs, and what to do with projects that were missing PTL candidates for the next release. We discussed office hours, their history and reason for existence, and clarified how we can strengthen communication amongst ourselves, the projects, and the larger community. TC Onboarding -------------------- It was brought up that those elected most recently (and even new members the election before) felt like there wasn’t enough onboarding into the TC. The outcome of the discussion about what we can do to better support returning members is to better document the daily, weekly and monthly tasks TC members are supposed to be doing. Kendall Nelson has already proposed a patch to start adding more detail to a guide for TC members[2]. It was also proposed that more experienced TC members run a sort of mentorship or shadow program for people interested in joining the TC or for new TC members. The discussion about the shadow/mentorship program is to be continued. TC/UC Merge ------------------ Thierry gave an update on the merge of the committees. The simplified version is that the current proposal is that UC members are picked from TC members, the UC operates within the TC, and that we are already set up for this given the number of TC members that have AUC status. None of this requires a by-laws change. One next step that has already begun is the merging of the openstack-users ML into the openstack-discuss ML.
Other next steps are to decide when to do the actual transition (disbanding the separate UC, probably at the next election?) and when to set up AUCs to be defined as extra-ATCs to be included in the electorate for elections. For more detail, check out the openstack-discuss ML thread[3]. Wednesday ========= Help Wanted List ----------------------- We settled on a format for the job postings and have several on the list. We talked about how often we want to look through, update or add to it. The proposal is to do this yearly. We need to continue pushing on the board to dedicate contributors at their companies to work on these items, and get them to understand that it's an investment that will take longer than a year in a lot of cases; interns are great, but not enough. TC Position on Foundation Member Community Contributions ---------------------------------------------------------------------------------- The discussion started with a state of things today - the expectations of platinum members, the benefits the members get being on the board and why they should donate contributor resources for these benefits, etc. A variety of proposals were made: either enforce or remove the minimum contribution level, give gold members the chance to have increased visibility (perhaps giving them some of the platinum member advantages) if they supplement their monetary contributions with contributor contributions, etc. The #ACTION that was decided was for Mohammed to take these ideas to the board and see what they think. OpenStack User-facing APIs -------------------------------------- Users are confused about the state of the user-facing APIs; they’ve been told to use the OpenStackClient (OSC) but upon use, they discover that there are features missing that exist in the python-*clients. Partial implementation in the OSC is worse than if the service only used its specific CLI.
Members of the OpenStackSDK joined discussions and explained that many of the barriers that projects used to have behind implementing certain commands have been resolved. The proposal is to create a pop up team and that they start with fully migrating Nova, documenting the process and collecting any other unresolved blocking issues, with the hope that one day we can set the migration of the remaining projects as a community goal. Supplementally, a new idea was proposed: enforce that new functionality for services is only added to the SDK (and optionally the OSC), and not the project’s specific CLI, to stop increasing the disparity between the two. The #ACTION here is to start the pop up team; if you are interested, please reply! Additionally, if you disagree with this kind of enforcement, please contact the TC as soon as possible and explain your concerns. PTL Role in OpenStack today & Leaderless Projects --------------------------------------------------------------------- This was a veeeeeeeerrrry long conversation that went in circles a few times. The very short version is that we, the TC, are willing to let project teams decide for themselves if they want to have a more deconstructed kind of PTL role by breaking it into someone responsible for releases and someone responsible for security issues. This new format also comes with setting the expectation that for things like project updates and signing up for PTG time, if someone on the team doesn’t actively take that on, the default assumption is that the project won’t participate. The #ACTION we need someone to take on is to write a resolution about how this will work and how it can be done. Ideally, this would be done before the next technical election, so that teams can choose it at that point. If you are interested in taking on the writing of this resolution, please speak up!
Cross Project Work ------------------------- -Pop Up Teams- The two teams we have right now are Encryption and Secure Consistent Policy Groups. Both are making slow progress and will continue. -Reducing Community Goals Per Cycle- Historically we have had two goals per cycle, but for smaller teams this can be a HUGE lift. The #ACTION is to clearly outline the documentation for the goal proposal and selection process to clarify that selecting only one goal is fine. No one has claimed this action item yet. -Victoria Goal Finalization- Currently, we have three proposals and one accepted goal. If we are going to select a second goal, it needs to be done ASAP as Victoria development has already begun. All TC members should review the last proposal requesting selection[4]. -Wallaby Cycle Goal Discussion Kick Off- Firstly, there is an #ACTION that one or two TC members are needed to guide the W goal selection. If you are interested, please reply to this thread! There were a few proposed goals for Victoria that didn’t make it that could be the starting point for W discussions, in particular, the rootwrap goal which would be good for operators. The OpenStackCLI might be another goal to propose for Wallaby.
The other aspect that still needs to be decided is where the health checks will be recorded: in a wiki? In a meeting and meeting logs? That decision is still to be continued. The #ACTION, currently unassigned, is that we need to assign liaisons for the Victoria cycle and decide when to do the first health check. Friday ===== Reducing Systems and Friction to Drive Change ---------------------------------------------------------------- This was another conversation that went in circles a bit before we realized that we should make a list of the more specific problems we want to address and then brainstorm solutions for them. The list we created (including things already being worked on) is as follows: - TC separate from UC (solution in progress) - Stable releases being approved by a separate team (solution in progress) - Making repository creation faster (especially for established project teams) - Create a process blueprint for project team mergers - Requirements Team being one person - Stable Team - Consolidate the agent experience - Figure out how to improve project <--> openstack client/sdk interaction. If you feel compelled to pick one of these things up and start proposing solutions or add to the list, please do! Monitoring in OpenStack (Ceilometer + Telemetry + Gnocchi State) ----------------------------------------------------------------------------------------- This conversation is also ongoing, but essentially we talked about the state of things right now: largely they are not well maintained and there is added complexity with Ceilometer being partially dependent on Gnocchi. There are a couple of ideas to look into, like using oslo.metrics for the interface between all the tools or using Ceilometer without Gnocchi if we can clean up those dependencies. No specific action items here, just please share your thoughts if you have them.
Ideas Repo Next Steps ------------------------------- Out of the Ussuri retrospective, it was brought up that we probably needed to talk a little more about what we wanted for this repo. Essentially we just want it to be a place to collect ideas into without worrying about the how. It should be a place to document ideas we have had (old and new) and keep all the discussion in one place as opposed to historic email threads, meeting logs, other IRC logs, etc. We decided it would be good to periodically go through this repo, likely as a forum session at a summit, to see if there is any updating that could happen or promotion of ideas to community goals, etc. ‘tc:approved-release’ Tag --------------------------------- This topic was proposed by the Manila team from a discussion they had earlier in the week. We talked about the history of the tag and how usage of tags has evolved. At this point, the proposal is to remove the tag as anything in the releases repo is essentially tc-approved. Ghanshyam has volunteered to document this and do the removal. The board also needs to be notified of this and to look at projects.yaml in the governance repo as the source of truth for TC approved projects. The unassigned #ACTION item is to review remaining tags and see if there are others that need to be modified/removed/added to drive common behavior across OpenStack components. Board Proposals ---------------------- This was a pretty quick summary of all discussions we had that had any impact on the board, and largely decided who would mention them. Session Feedback ------------------------ This was also a pretty quick topic compared to many of the others; we talked about how things went across all our discussions (largely we called the PTG a success) logistically. We tried to make good use of the raising hands feature, which mostly worked, but it lacks context and it’s possible that the conversation has moved on by the time it’s your turn (if you even remember what you want to say).
OpenStack 2.0: k8s Native ----------------------------------- This topic was brought up at the end of our time so we didn’t have time to discuss it really. Basically Mohammed wanted to start the conversation about adding k8s as a base service[5] and what we would do if a proposed project required k8s. Adding services that work with k8s could open a door to new innovation in OpenStack. Obviously this topic will need to be discussed further as we barely got started before we had to wrap things up. So. The tldr; Here are the #ACTION items we need owners for: - Start the User Facing API Pop Up Team - Write a resolution about how the deconstructed PTL roles will work - Update Goal Selection docs to explain that one or more goals is fine; it doesn’t have to be more than one - Two volunteers to start the W goal selection process - Assign two TC liaisons per project - Review Tags to make sure they are still good for driving common behavior across all openstack projects Here are the things EVERYONE needs to do: - Review the last goal proposal so that we can decide to accept or reject it for the V release[4] - Add systems that are barriers to progress in openstack to the Reducing Systems and Friction list - Continue conversations you find important Thanks everyone for your hard work and great conversations :) Enjoy the attached (photoshopped) team photo :) -Kendall (diablo_rojo) [1] TC PTG Etherpad: https://etherpad.opendev.org/p/tc-victoria-ptg [2] TC Guide Patch: https://review.opendev.org/#/c/732983/ [3] UC TC Merge Thread: http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014736.html [4] Proposed V Goal: https://review.opendev.org/#/c/731213/ [5] Base Service Description: https://governance.openstack.org/tc/reference/base-services.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: TC_v3.png Type: image/png Size: 3911635 bytes Desc: not available URL: From pramchan at yahoo.com Fri Jun 12 19:21:31 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 12 Jun 2020 19:21:31 +0000 (UTC) Subject: [Interop_WG] Following up on Interop Ussuri cycle actions to close - need help In-Reply-To: <1570818328.333307.1591985309704@mail.yahoo.com> References: <1023999942.3265728.1591369582416.ref@mail.yahoo.com> <1023999942.3265728.1591369582416@mail.yahoo.com> <369587381.1529633.1591384677970@mail.yahoo.com> <4E313131-9595-48AE-9E45-29B6373B5F66@vmware.com> <1447261624.2544824.1591841233132@mail.yahoo.com> <172a3937132.cddcd633148001.6957812506478624345@ghanshyammann.com> <1570818328.333307.1591985309704@mail.yahoo.com> Message-ID: <728523816.49914.1591989691085@mail.yahoo.com> Hi all, Help interop with: 1 - Review of content for "OpenStack Powered DNS" and "OpenStack Powered Orchestration". We need to update the site - https://www.openstack.org/brand/interop/ Add the following to the site tables? Volunteers? - first come, first served!!! OpenStack Powered with Orchestration Must include all compute-specific and orchestration heat-specific code and pass all compute-specific and heat-specific capabilities tests. Qualifying products may use the OpenStack Powered logo and use the phrase "OpenStack Powered with Orchestration" in their product name.
OpenStack Powered with DNS Must include all compute-specific and DNS designate-specific code and pass all compute-specific and designate-specific capabilities tests. Qualifying products may use the OpenStack Powered logo and use the phrase "OpenStack Powered with DNS" in their product name. 2 - Marketplace Vendors to validate their OpenStack Powered offers for Ussuri. This can be tested on https://refstack.openstack.org/#/community_results Has anyone tested their distros for Ussuri? Yes indeed: VMware Integrated OpenStack (7.0). Can you make it official with a rerun please, as I could not see it in the Marketplace? Congratulations to Mark and VMware for stepping up. Please let us know who all can earn the incentive by clearing before OpenDev, June 29 - please do it ASAP; we plan to declare incentives for those products that clear before the end of June. 3. Can anyone from Staff help to update the formality to make "one or more co-chairs" in B192 #link https://review.opendev.org/gitweb?p=openstack/interop.git;a=blob;f=doc/source/process/2017A.rst;h=a8... 4. #info Error fixed: updated the Google Docs for interop on Cinder v2 APIs being removed; correction: it should have appeared as deprecated in 2019.11; no issues as we have now removed it in the 2020.06 approval #link https://docs.google.com/document/d/1vf8rxKxuFoXmZCxfMIIfHm4TzK1kuANktXyUHiurMcE/edit?userstoinvite=s... 5. Formation of a new committee for future Interop work - we will discuss this before OpenDev June 29 and help get approvals at the next Board meeting. Some suggestions - add yours here below, if any: a) consider Conformance vs Compliance to keep the legal part out of new certifications or Logo programs; b) Powered Programs for Object Storage have no dependency on compute, so can it just be made Swift API dependent? This would mean any Object Storage that is currently tested could also be extended to vendor backends, as long as they pass current Swift API testing for Object Storage?... 6. Data points from Marketplace?
Please comment on how we collect Marketplace data - how do customers use the Marketplace? Can we get data points from the Marketplace? If so, how? Any answers from TC community or Users Community folks? 7. Does Tempest testing of infra providers'/vendors' offerings occur in real time, or do you run a batch and send a report to interop, and who receives these test results? Appreciate all the help, especially from Mark and Ghanshyam, and we would like to have the community's full support for interop continuation. Thanks, Prakash -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Jun 12 19:35:52 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 12 Jun 2020 20:35:52 +0100 Subject: [cyborg][kolla] OPAE & CentOS 8 Message-ID: Hi, I asked in another thread, but didn't get a response. In the kolla cyborg-agent image we install OPAE. There is a comment claiming this is a required dependency. It doesn't seem to be available for CentOS 8. If that is the case, we will need to drop support for the cyborg-agent image from Ussuri. Could someone confirm if OPAE is a hard dependency or not? Thanks, Mark From gmann at ghanshyammann.com Fri Jun 12 19:47:07 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 12 Jun 2020 14:47:07 -0500 Subject: [nova][doc] nova pdf building failing on new sphinx 3.1.0 In-Reply-To: <172a6939d46.c7977846169208.342239994777984515@ghanshyammann.com> References: <172a6939d46.c7977846169208.342239994777984515@ghanshyammann.com> Message-ID: <172aa12554a.c4026823209983.7113952599249125019@ghanshyammann.com> ---- On Thu, 11 Jun 2020 22:29:51 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > With the new release of Sphinx 3.1.0, nova pdf-docs building started failing due to > a TeX memory issue (maybe this one - https://github.com/sphinx-doc/sphinx/issues/3099). > I could not find the related changes in 3.1.0 that could cause this issue. > > The issue is the inclusion of the giant sample files (policy, for now) in the PDF doc.
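For readers hitting the same TeX memory problem, the link-instead-of-include workaround described further down in this thread can be illustrated with Sphinx's standard only:: and literalinclude directives. This is only a hedged sketch: the file path and URL below are placeholders, not the exact ones used in the actual nova fix.

```rst
.. Hypothetical sketch: render the full sample only in HTML builds and a
   plain link in the LaTeX/PDF build, keeping the giant file out of TeX.

.. only:: html

   .. literalinclude:: /_static/policy.yaml.sample
      :language: yaml

.. only:: latex

   The full sample policy file is published at
   `policy.yaml.sample <https://docs.openstack.org/nova/latest/_static/nova.policy.yaml.sample>`_.
```

With this structure the HTML output keeps the inline sample while the PDF build only emits a short link paragraph.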
I am sure that was discussed previously also > when we started pdf building in the Train cycle, but I do not know what the solution was. > > Luckily neutron is not facing this, even though their config sample files are as large as nova's. > > I have logged bug[1] and provided the workaround[2] to unblock the nova gate. The workaround is to > include a link to the sample file instead of its full content. > > Any thoughts on how to fix this? I found the issue. In one place in the admin configuration doc, a sample file was included in the PDF and that caused this issue. Currently, the nova master gate is blocked; please hold rechecks until the below fix is merged. Fix: https://review.opendev.org/#/c/735279/ -gmann > > [1] https://bugs.launchpad.net/nova/+bug/1883200 > [2] https://review.opendev.org/#/c/735279/ > > -gmann From dtantsur at redhat.com Sun Jun 14 10:15:49 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Sun, 14 Jun 2020 12:15:49 +0200 Subject: [ironic] advanced partitioning discussion In-Reply-To: References: Message-ID: Hi everyone, Thank you for voting. The meeting will take place on Wednesday, 17th July at 2pm UTC. The bluejeans link is https://bluejeans.com/179667546. Dmitry On Tue, Jun 9, 2020 at 1:06 PM Dmitry Tantsur wrote: > Hi folks, > > As a follow up to the PTG discussion, I'd like to schedule a call about > advanced partitioning in ironic. Please vote for the date and time next > week: https://doodle.com/poll/5yg93gv7casu3ate > > Dmitry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sun Jun 14 14:02:55 2020 From: zigo at debian.org (Thomas Goirand) Date: Sun, 14 Jun 2020 16:02:55 +0200 Subject: [announce] [debian] New deb-status page and extrepo configuration available for OpenStack on Debian Message-ID: <42f74f47-176d-3270-6c5f-84ddc8598190@debian.org> Hi! We've just set up online a new deb-status system [2] to compare what's been released in OpenStack upstream and what has been packaged in Debian.
This is an effort to keep each stable branch up-to-date in Debian. This has been possible thanks to Michal Arbet from ultimum.io who wrote os-version-checker [1]. So, a big up to him and ultimum technologies. Note that the Rocky page may not reflect reality, since most of the components are available directly in Debian Buster. Browsing this page, you will notice that both Train and Ussuri are pretty much in an up-to-date shape! :) Note that this page is updated every hour. Also, the Debian packages are now available using extrepo, to make it easier to install. Extrepo is an initiative from Wouter Verhelst to have something comparable to PPA, at least in the way to install the repositories (ie: super easy now...). For example, to install Ussuri in Buster, one can do: apt-get install -t buster-backports extrepo, and then: extrepo enable openstack_ussuri This will install the repository as: /etc/apt/sources.list.d/extrepo_openstack_ussuri.sources then you can start using Ussuri using apt as usual. Last, we're searching for volunteers to mirror the osbpo.debian.net Buster backport repositories (to make sure it isn't only in the hands of me and my employer). The only thing that needs to be done is to set up an rsync mirroring of the repositories. Best would be to have a mirror in North America and in Asia. If you're volunteering for this, especially if you are administering a CDN, please get in touch.
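For reference, the .sources file that the extrepo step above writes uses APT's deb822 format. A hypothetical entry might look roughly like the following — the URI, suite, and key path are illustrative guesses, not the exact values extrepo generates:

```
Types: deb
URIs: http://osbpo.debian.net/debian
Suites: buster-ussuri-backports
Components: main
Signed-By: /var/lib/extrepo/keys/openstack_ussuri.asc
```

Because the repository metadata and signing key are managed by extrepo, updating or removing the entry is done with extrepo rather than by editing this file by hand.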
Hoping this will improve the OpenStack on Debian experience, Cheers, Thomas Goirand (zigo) [1] https://salsa.debian.org/openstack-team/debian/os-version-checker [2] http://osbpo.debian.net/deb-status/ From zigo at debian.org Sun Jun 14 14:35:07 2020 From: zigo at debian.org (Thomas Goirand) Date: Sun, 14 Jun 2020 16:35:07 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <61c28590-8bd8-23b7-a532-7f4079f23c71@openstack.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <61c28590-8bd8-23b7-a532-7f4079f23c71@openstack.org> Message-ID: On 6/12/20 11:16 AM, Thierry Carrez wrote: > Thierry Carrez wrote: >> As you know[1] I'm trying to push toward simplification of OpenStack >> processes, to make them easier to navigate for new members of our >> community and generally remove weight. A good example of that is >> release models. [...] > > After having discussed this here and in several IRC discussions, there > appears to still be enough cases warranting keeping two cycle-tied > models (one with RCs and a round version number, the other strictly > following semver). The simplification gains may not be worth disrupting > long-established habits and tweaking all our validation toolchain. > > Instead, I'll work on improving documentation to guide new deliverables > in this choice, and reduce corner cases and exceptions. > > Thanks for entertaining the idea and reaching out. Periodically > reconsidering why we do things the way we do them is healthy, and avoids > cargo-culting processes forever. > Thanks for the idea, and considering opinions of others before moving forward. Much appreciated. 
Cheers, Thomas Goirand (zigo) From iwienand at redhat.com Mon Jun 15 03:07:22 2020 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 15 Jun 2020 13:07:22 +1000 Subject: [all][infra] Upcoming removal of preinstalled pip and virtualenv from base images In-Reply-To: <20200530004314.GA1770592@fedora19.localdomain> References: <20200530004314.GA1770592@fedora19.localdomain> Message-ID: <20200615030722.GA2531935@fedora19.localdomain> On Sat, May 30, 2020 at 10:43:14AM +1000, Ian Wienand wrote: > This is to notify the community of the planned upcoming removal of the > "pip-and-virtualenv" element from our infra image builds. Just a note that [1] has merged and new images should start appearing soon. Just to re-iterate: if virtualenv goes missing, you probably just want to use the "ensure-virtualenv" role. However, in the Python 3 world, you should probably consider if you actually *need* virtualenv or can use "venv" which comes with Python 3. The "ensure-pip" role exports "ensure_pip_virtualenv_command" which you can use with the "pip:" module from Ansible. An example is [1]. If you don't need the full virtualenv experience, longer term it's one less thing likely to break in your tests. Please pop into #opendev or reply by mail and we can work through any issues. Thanks, -i [1] https://review.opendev.org/#/c/735267/ From xin-ran.wang at intel.com Mon Jun 15 03:15:03 2020 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Mon, 15 Jun 2020 03:15:03 +0000 Subject: [cyborg][kolla] OPAE & CentOS 8 In-Reply-To: References: Message-ID: Hi Mark, According to the discussion at the PTG, the Cyborg community has an agreement that all driver dependencies should be removed from kolla and devstack.
In the kolla cyborg-agent image we install OPAE. There is a comment claiming this is a required dependency. It doesn't seem to be available for CentOS 8. If that is the case, we will need to drop support for the cyborg-agent image from Ussuri. Could someone confirm if OPAE is a hard dependency or not? Thanks, Mark From akekane at redhat.com Mon Jun 15 05:11:38 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 15 Jun 2020 10:41:38 +0530 Subject: [glance] Victoria Virtual PTG Message-ID: Hi All, We had our first virtual summit last week from 1st June to 5th June 2020. Using meetpad we had lots of discussion around different topics for glance and glance + cinder. I have created etherpad [1] with Notes from session and which also includes the recordings of each discussion. Below is the list of what we discussed during PTG. 1. Image encryption (glance side work) 2. Unified Limits Integration with Glance 3. Calculate virtual_size during image creation 4. Remove usage of six library 5. Remove single store configuration from glance_store 6. Cluster awareness 7. Cache-API 8. Improve performance of ceph/rbd store of glance (multi-threaded rbd driver) 9. Duplicate downloads 10. Code cleanup, enhancements 11. Cinder/Glance creating image from volume with Ceph 12. Make a cinder store of glance compatible with multiple stores 13. Gate job for glance cinder store 14. Support for sparse image upload to ceph 15. Victoria priorities Kindly let me know if you have any questions about the same. [1] https://etherpad.opendev.org/p/glance-victoria-ptg Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Mon Jun 15 07:42:00 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=c5=82aw_Piliszek?=) Date: Mon, 15 Jun 2020 09:42:00 +0200 Subject: [cyborg][kolla] OPAE & CentOS 8 In-Reply-To: References: Message-ID: <0a065c68-db79-5de2-2fe3-f5166fccd167@gmail.com> Hi Xin-Ran, thank you for the info. We will then proceed with OPAE removal to unblock cyborg images on all supported platforms. -yoctozepto On 2020-06-15 05:15, Wang, Xin-ran wrote: > Hi Mark, > > According to the discussion in PTG, Cyborg community has an agreement that all driver dependency should be removed in kolla and devstack. > > Please refer to https://etherpad.opendev.org/p/cyborg-victoria-goals L261. > > Thanks, > Xin-Ran > > -----Original Message----- > From: Mark Goddard > Sent: Saturday, June 13, 2020 3:36 AM > To: openstack-discuss > Subject: [cyborg][kolla] OPAE & CentOS 8 > > Hi, > > I asked in another thread, but didn't get a response. In the kolla cyborg-agent image we install OPAE. There is a comment claiming this is a required dependency. It doesn't seem to be available for CentOS 8. If that is the case, we will need to drop support for the cyborg-agent image from Ussuri. > > Could someone confirm if OPAE is a hard dependency or not? > > Thanks, > Mark > From dtantsur at redhat.com Mon Jun 15 07:53:41 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 15 Jun 2020 09:53:41 +0200 Subject: [ironic] advanced partitioning discussion In-Reply-To: References: Message-ID: My apologies, of course I meant JUNE 17th, not July, i.e. this Wednesday. On Sun, Jun 14, 2020 at 12:15 PM Dmitry Tantsur wrote: > Hi everyone, > > Thank you for voting. The meeting will take place on Wednesday, 17th July > at 2pm UTC. The bluejeans link is https://bluejeans.com/179667546. 
> > Dmitry > > On Tue, Jun 9, 2020 at 1:06 PM Dmitry Tantsur wrote: > >> Hi folks, >> >> As a follow up to the PTG discussion, I'd like to schedule a call about >> advanced partitioning in ironic. Please vote for the date and time next >> week: https://doodle.com/poll/5yg93gv7casu3ate >> >> Dmitry >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Jun 15 08:04:07 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 15 Jun 2020 09:04:07 +0100 Subject: [cyborg][kolla] OPAE & CentOS 8 In-Reply-To: References: Message-ID: On Mon, 15 Jun 2020 at 04:15, Wang, Xin-ran wrote: > > Hi Mark, > > According to the discussion in PTG, Cyborg community has an agreement that all driver dependency should be removed in kolla and devstack. > > Please refer to https://etherpad.opendev.org/p/cyborg-victoria-goals L261. Thanks for your response. I can see that this should allow us to remove the dependency in Victoria. Will it be safe to do so in Ussuri and Train also? > > Thanks, > Xin-Ran > > -----Original Message----- > From: Mark Goddard > Sent: Saturday, June 13, 2020 3:36 AM > To: openstack-discuss > Subject: [cyborg][kolla] OPAE & CentOS 8 > > Hi, > > I asked in another thread, but didn't get a response. In the kolla cyborg-agent image we install OPAE. There is a comment claiming this is a required dependency. It doesn't seem to be available for CentOS 8. If that is the case, we will need to drop support for the cyborg-agent image from Ussuri. > > Could someone confirm if OPAE is a hard dependency or not? 
> > Thanks, > Mark > From thierry at openstack.org Mon Jun 15 08:05:19 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 15 Jun 2020 10:05:19 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <61c28590-8bd8-23b7-a532-7f4079f23c71@openstack.org> Message-ID: Mark Goddard wrote: > On Fri, 12 Jun 2020 at 13:02, Sean McGinnis wrote: >> I think some good feedback came out of all of this at least, so maybe we >> can still simplify some things, even if we can't fully collapse our >> release models. > > I would be interested in a relaxation of the requirement for RC1 and > stable branch cut to coincide, if possible. This would simplify the > kolla release process. We looked into that, and it appears release note generation (through reno) might rely on tags at the start of stable branches in order to properly compute which changes apply where. So for the moment we'll keep the requirement, until that limitation is lifted. -- Thierry Carrez (ttx) From aj at suse.com Mon Jun 15 08:55:29 2020 From: aj at suse.com (Andreas Jaeger) Date: Mon, 15 Jun 2020 10:55:29 +0200 Subject: [all][qa] uWSGI release broke devstack jobs Message-ID: <0d329ad2-3e2b-cc13-c0ad-7289068608ca@suse.com> The new uWSGI 2.0.19 release changed packaging (filename and content) and thus broke devstack. The QA team is currently fixing this; the changes need backporting to fix grenade as well. On master, change [1] was merged to use distro packages for Ubuntu and Fedora instead of installing uWSGI from source. CentOS and openSUSE installation is not fixed yet ([2] proposed for openSUSE). Thus, right now this should work again: * the devstack jobs on master for Fedora and Ubuntu. Other distributions, other branches, and grenade are still broken. Please do NOT recheck until all fixes are in. If you want to help, best reach out on #openstack-qa. Thanks especially to Jens Harbott for driving this!
Andreas [1] https://review.opendev.org/577955 [2] https://review.opendev.org/735519 -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From xin-ran.wang at intel.com Mon Jun 15 09:31:50 2020 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Mon, 15 Jun 2020 09:31:50 +0000 Subject: [cyborg][kolla] OPAE & CentOS 8 In-Reply-To: References: Message-ID: Yes, it can be removed safely in previous releases. The best-known method (BKM) is to let admins install all Cyborg driver dependencies themselves, not only OPAE, but also other drivers if needed. In this way we can avoid such incompatibilities between driver versions and OS versions. Thanks, Xin-Ran -----Original Message----- From: Mark Goddard Sent: Monday, June 15, 2020 4:04 PM To: Wang, Xin-ran Cc: openstack-discuss Subject: Re: [cyborg][kolla] OPAE & CentOS 8 On Mon, 15 Jun 2020 at 04:15, Wang, Xin-ran wrote: > > Hi Mark, > > According to the discussion in PTG, Cyborg community has an agreement that all driver dependency should be removed in kolla and devstack. > > Please refer to https://etherpad.opendev.org/p/cyborg-victoria-goals L261. Thanks for your response. I can see that this should allow us to remove the dependency in Victoria. Will it be safe to do so in Ussuri and Train also? > > Thanks, > Xin-Ran > > -----Original Message----- > From: Mark Goddard > Sent: Saturday, June 13, 2020 3:36 AM > To: openstack-discuss > Subject: [cyborg][kolla] OPAE & CentOS 8 > > Hi, > > I asked in another thread, but didn't get a response. In the kolla cyborg-agent image we install OPAE. There is a comment claiming this is a required dependency. It doesn't seem to be available for CentOS 8. If that is the case, we will need to drop support for the cyborg-agent image from Ussuri. > > Could someone confirm if OPAE is a hard dependency or not?
> > Thanks, > Mark > From ignaziocassano at gmail.com Mon Jun 15 09:53:32 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 15 Jun 2020 11:53:32 +0200 Subject: [openstackl][cinder][nova] attacched volume retype error Message-ID: Hello All, I am facing an issue when I retype a volume attached to a VM from NetApp NFS to Ceph on Stein. It works from NFS to iSCSI but not from NFS to Ceph. It seems to be a Nova issue. I got this error in the nova-compute log: Failed to swap volume 26add23f-d643-4020-9935-9e856a0e9d93 for 1a2edc3a-cc3e-4a92-9488-6e6fabd01825: NotImplementedError: Swap only supports host devices I read there is an old blueprint for this, but I do not know if it has been implemented. Any help, please? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Mon Jun 15 10:29:18 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 15 Jun 2020 12:29:18 +0200 Subject: [openstackl][cinder][nova] attacched volume retype error In-Reply-To: References: Message-ID: On Mon, Jun 15, 2020 at 11:53, Ignazio Cassano wrote: > Hello All, > I am facing an issue when I retype a volume attached to a vm from > netapp nfs to ceph on stein. > It work fron nfs to iscsi but not from nfs to ceph. > It seems a nova issue. > I got an error in nova compute log: > > Failed to swap volume 26add23f-d643-4020-9935-9e856a0e9d93 for > 1a2edc3a-cc3e-4a92-9488-6e6fabd01825: NotImplementedError: Swap only > supports host devices > > I read there is an old blueprint for this but I do know if it has > been implemented. I think this bug [1] and the fix [2] for it are related to your problem. The fix is included in the Ussuri release, but it also potentially needs fresh libvirt and qemu versions to work. Cheers, gibi [1] https://bugs.launchpad.net/nova/+bug/1868996 [2] https://review.opendev.org/#/c/696834/ > > Any help, please?
> Ignazio From ignaziocassano at gmail.com Mon Jun 15 10:42:32 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 15 Jun 2020 12:42:32 +0200 Subject: [openstackl][cinder][nova] attacched volume retype error In-Reply-To: References: Message-ID: Many thanks, I'll try it. Ignazio On Mon, 15 Jun 2020 at 12:29, Balázs Gibizer wrote: > > > On Mon, Jun 15, 2020 at 11:53, Ignazio Cassano > > wrote: > > Hello All, > > I am facing an issue when I retype a volume attached to a vm from > > netapp nfs to ceph on stein. > > It work fron nfs to iscsi but not from nfs to ceph. > > It seems a nova issue. > > I got an error in nova compute log: > > > > Failed to swap volume 26add23f-d643-4020-9935-9e856a0e9d93 for > > 1a2edc3a-cc3e-4a92-9488-6e6fabd01825: NotImplementedError: Swap only > > supports host devices > > > > I read there is an old blueprint for this but I do know if it has > > been implemented. > > I think you this bug[1] and the fix[2] for it is related to your > problem. The fix is included in the Ussuri release but it also > potentially needs fresh libvirt and qemu versions to work. > > Cheers, > gibi > > [1] https://bugs.launchpad.net/nova/+bug/1868996 > [2] https://review.opendev.org/#/c/696834/ > > > > > Any help, please? > > Ignazio > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Mon Jun 15 12:05:03 2020 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 15 Jun 2020 14:05:03 +0200 Subject: [cyborg][neutron][nova] Networking support in Cyborg In-Reply-To: References: <4f31c35b3900ddae7e90c53cd4411dc8c4e5a55e.camel@redhat.com> Message-ID: Hi, You don't need Neutron to get resource provider information; you can fetch everything from placement. If you list RPs, everything created by Neutron will sit under a compute node (as the root provider) and will be named something like this for neutron agents: *:Open vSwitch agent* or *:NIC Switch agent*.
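As a rough illustration of that naming and nesting (the provider names, UUIDs, and sample data below are made up for the sketch, not real placement output), the parent/child structure can be recovered from a `GET /resource_providers` listing like so:

```python
# Made-up sample of a placement "GET /resource_providers" listing;
# only the uuid/name/parent fields used here are kept.
providers = [
    {"uuid": "c0de", "name": "compute-0", "parent_provider_uuid": None},
    {"uuid": "a9e7", "name": "compute-0:Open vSwitch agent",
     "parent_provider_uuid": "c0de"},
    {"uuid": "b41d", "name": "compute-0:Open vSwitch agent:br-physnet1",
     "parent_provider_uuid": "a9e7"},
]

def children_of(parent_uuid):
    """Names of the RPs directly nested under the given provider."""
    return [p["name"] for p in providers
            if p["parent_provider_uuid"] == parent_uuid]

# The compute node is the root; the agent RP nests under it, and a
# bandwidth RP per configured bridge nests under the agent.
for p in providers:
    print(p["name"], "->", children_of(p["uuid"]))
```

The same walk works for the real API response, since each provider there also carries a `parent_provider_uuid` field.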
If the agent has bandwidth configured, then it has a leaf like this: *:Open vSwitch agent:* or *:NIC Switch agent:*, as configured in the relevant Neutron config on the given host. Neutron can't give you up-to-date information on the RPs (like the generation), as it is placement that holds all of this. The same is true for traits; to get the traits for RPs, placement is your service: https://docs.openstack.org/api-ref/placement/?expanded=list-resource-provider-traits-detail#list-resource-provider-traits regards Lajos Katona (lajoskatona) Wang, Xin-ran wrote (on Fri, 12 Jun 2020, 12:31): > Hi all, > > I prefer that physnet-related stuff stays managed by Neutron, because it > is a notion of Neutron. If we let Cyborg update these traits to Placement, > what will we do if Neutron enables the bandwidth feature, and how will we know > whether this feature is enabled or not? > > Can we just let Neutron always report physnet traits? I am not very > familiar with Neutron; is there any gap? > > Otherwise, if Cyborg does need to report this to placement, my proposal is: > > Neutron will provide an interface which allows Cyborg to get the physnet > trait/RP; if this feature is not configured, it will return 404, and then > Cyborg will know that Neutron has not configured the bandwidth feature, and > Cyborg can report everything by itself. If Neutron returns something meaningful, > Cyborg should use the same RP and update other traits on this RP. > > In this way, Cyborg and Neutron will use the same RP and keep > consistency.
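A toy sketch of one option discussed in this thread: keeping the physnet out of the stored device profile and appending it as a required trait only when the placement query is built. The function name and trait format here are illustrative assumptions, not an existing Cyborg or Nova API:

```python
def with_physnet_trait(request_group, physnet):
    """Return a copy of a device-profile request group with the port's
    physnet added as a required trait at query-build time."""
    group = dict(request_group)  # leave the stored profile untouched
    group["trait:CUSTOM_PHYSNET_%s" % physnet.upper()] = "required"
    return group

stored = {"resources:FPGA": "1",
          "trait:CUSTOM_FPGA_REGION_ID_FOO": "required"}
query = with_physnet_trait(stored, "vlan3")
# "stored" still carries no physnet trait, while "query" gained
# trait:CUSTOM_PHYSNET_VLAN3, so one profile can serve every physnet.
```

The design point is that the stored profile stays physnet-agnostic; whichever service knows the port's network adds the trait just before querying placement.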
> > Thanks, > Xin-Ran > > -----Original Message----- > From: Sean Mooney > Sent: Thursday, June 11, 2020 9:44 PM > To: Nadathur, Sundar ; openstack-discuss < > openstack-discuss at lists.openstack.org> > Subject: Re: [cyborg][neutron][nova] Networking support in Cyborg > > On Thu, 2020-06-11 at 12:24 +0000, Nadathur, Sundar wrote: > > Hi Sean, > > > > > From: Sean Mooney > > > Sent: Thursday, June 11, 2020 4:31 AM > > > > > > > On Thu, 2020-06-11 at 11:04 +0000, Nadathur, Sundar wrote: > > > > [...] > > > > * Ideally, the admin should be able to formulate the device > > > > profile in the same way, independent of whether it is a > > > > single-component or multi-component device. For that, the device > > > > profile must have a single resource group that includes the > > > > resource, traits and Cyborg > > > > > > properties for both the accelerator and NIC. The device profile for > > > a Neutron port will presumably have only one request group. So, the > > > device profile would look something like this: > > > > > > > > { "name": "my-smartnic-dp", > > > > "groups": [{ > > > > "resources:FPGA": "1", > > > > "resources:CUSTOM_NIC_X": "1", > > > > "trait:CUSTOM_FPGA_REGION_ID_FOO": "required", > > > > "trait:CUSTOM_NIC_TRAIT_BAR": "required", > > > > "trait:CUSTOM_PHYSNET_VLAN3": "required", > > > > "accel:bitstream_id": "3AFE" > > > > }] > > > > } > > > > > > having "trait:CUSTOM_PHYSNET_VLAN3": "required", in the device > > > profile means you have to create a separate device profile with the > > > same details for each physnet, and the user then needs to find the > > > profile that matches their neutron network's physnet, which is also > > > problematic if they use the multiprovidernet extension. > > > so we should keep the physnet separate and have nova or neutron > > > append that when we make the placement query. > > > > True, we did discuss this at the PTG, and I agree. The physnet can be > > passed in from the command line during port creation.
> that is not how that works. > > when you create a neutron network with segmentation type vlan or flat it is automatically assigned a segmentation_id and physnet. > As an admin you can choose both, but as a tenant this is managed by neutron > > ignoring the multiprovidernet for a second, all vlan and flat networks have 1 physnet and the port gets a physnet from the network it is created on. > > the multiprovidernet extension allows a single neutron provider network to have multiple physnets, but nova does not support that today. > > so nova can get the physnet from the port/network/segment and incorporate that in the placement request, but we can't pass it in during port creation. > > in general tenants are not aware of physnets. > > > > > > [...] > > > > * We discussed the physnet trait at the PTG. My suggestion is to > > > > keep Cyborg out of this, and out of networking in general, if possible. > > > > > > well this feature is more or less the opposite of that intent, but i > > > get that you don't want cyborg to have to configure the networking attributes of the interface. > > > > The admin could apply the trait to the right RP. Or, the OpenStack > > installer could automate this. That's similar in spirit to having the admin configure the physnet in the PCI whitelist. > yes they could. it's not a particularly good user experience, as it's quite > tedious to do, but yes it's a viable option and likely sufficient for the > initial work. an installer could automate it, but having to do it manually > would not be ideal. > > > Regards, > > Sundar > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zhangbailin at inspur.com Mon Jun 15 12:08:06 2020 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Mon, 15 Jun 2020 12:08:06 +0000 Subject: =?utf-8?B?562U5aSNOiBbY3lib3JnXVtrb2xsYV0gT1BBRSAmIENlbnRPUyA4?= In-Reply-To: References: <22e85711bcfd6fce13bf583e35a51c72@sslemail.net> Message-ID: <0beb3101f4c640b2b0fb545e5c321d89@inspur.com> We have handed the choice of OPAE driver version and OS version over to users; Cyborg will no longer maintain a supported hardware version. The removal patch is https://review.opendev.org/#/c/735526/2; after this patch merges in the V release, it will be cherry-picked to Ussuri, so it is safe. -----Original Message----- From: Wang, Xin-ran [mailto:xin-ran.wang at intel.com] Sent: 15 June 2020 17:32 To: Mark Goddard Cc: openstack-discuss Subject: [sent via lists.openstack.org] RE: [cyborg][kolla] OPAE & CentOS 8 Yes, it can be removed safely in previous release. The BKM is let admin install all cyborg driver dependency by themselves, not only OPAE, but also other drivers if needed. In this way, we can avoid such incompatibility between driver version and OS version. Thanks, Xin-Ran -----Original Message----- From: Mark Goddard Sent: Monday, June 15, 2020 4:04 PM To: Wang, Xin-ran Cc: openstack-discuss Subject: Re: [cyborg][kolla] OPAE & CentOS 8 On Mon, 15 Jun 2020 at 04:15, Wang, Xin-ran wrote: > > Hi Mark, > > According to the discussion in PTG, Cyborg community has an agreement that all driver dependency should be removed in kolla and devstack. > > Please refer to https://etherpad.opendev.org/p/cyborg-victoria-goals L261. Thanks for your response. I can see that this should allow us to remove the dependency in Victoria. Will it be safe to do so in Ussuri and Train also? > > Thanks, > Xin-Ran > > -----Original Message----- > From: Mark Goddard > Sent: Saturday, June 13, 2020 3:36 AM > To: openstack-discuss > Subject: [cyborg][kolla] OPAE & CentOS 8 > > Hi, > > I asked in another thread, but didn't get a response.
In the kolla cyborg-agent image we install OPAE. There is a comment claiming this is a required dependency. It doesn't seem to be available for CentOS 8. If that is the case, we will need to drop support for the cyborg-agent image from Ussuri. > > Could someone confirm if OPAE is a hard dependency or not? > > Thanks, > Mark > From noonedeadpunk at ya.ru Mon Jun 15 13:14:17 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Mon, 15 Jun 2020 16:14:17 +0300 Subject: [openstack-ansible] Upstream contributions Message-ID: <1485431592225947@mail.yandex.ru> Hi there! We happened to notice that you have forked some of the openstack-ansible repositories and have some fixes and commits in the forked versions. Additionally, you have an adjutant role, which we don't have but have been talking about creating one day. We would really love to get your patches merged upstream and work together on making OSA better, so that it satisfies your requirements as well. It's really better to work together on fixing things or implementing features than to do the exact same things on our own; I think all parties would benefit from it. As a plus, upstream we have CI resources, which means we can test all patches; that should keep them working for you as well in case of updates or changes. So we'd really love to work with you on getting the changes and features from your cloned repos implemented upstream. You can join us in the #openstack-ansible IRC channel on Freenode for further conversation and cooperation. -- Kind regards, Dmitriy Rabotyagov From mdulko at redhat.com Mon Jun 15 14:08:33 2020 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Mon, 15 Jun 2020 16:08:33 +0200 Subject: [kuryr][kuryr-libnetwork] Message-ID: <4bfc158cff65127acd989800490cf3a702a0ccbb.camel@redhat.com> Hi, Due to a shortage of kuryr-libnetwork core reviewers, Hongbin proposed that we soften the rules a bit and allow merging patches with just one +2.
I'm totally fine with that, especially as the volume of changes showing up in the project is pretty small. If nobody has anything against the idea, I'll just go ahead and update the docs later this week. Thanks, Michał From Zion.Alfia at teoco.com Mon Jun 15 08:18:40 2020 From: Zion.Alfia at teoco.com (Alfia, Zion) Date: Mon, 15 Jun 2020 08:18:40 +0000 Subject: Upstream Openstack Stein + Octavia - why is the installation procedure so different than other projects ? Message-ID: <04fdb8f56a814f9b98e98f16d21b9de4@teoco.com> Hello, As opposed to other OpenStack Stein projects' installation procedure documentation, Octavia's is no less than ... vague. https://docs.openstack.org/octavia/stein/contributor/guides/dev-quick-start.html#running-octavia-in-production Why is that? Kind Regards, Zion ________________________________ PRIVILEGED AND CONFIDENTIAL PLEASE NOTE: The information contained in this message is privileged and confidential, and is intended only for the use of the individual to whom it is addressed and others who have been specifically authorized to receive it. If you are not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, or if any problems occur with transmission, please contact sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jlibosva at redhat.com Mon Jun 15 08:31:36 2020 From: jlibosva at redhat.com (Jakub Libosvar) Date: Mon, 15 Jun 2020 10:31:36 +0200 Subject: [Neutron] Neutron Bug Deputy report Jun 8-15 Message-ID: <342971ba-c242-81d2-41f0-07a8b679aa61@redhat.com> Hi, I was bug deputy for the week of June 8; there are no critical bugs, which is good 🙂 Here is the summary:
* Needs further triage
  * Improper routing between private networks
  * https://bugs.launchpad.net/neutron/+bug/1883288
  * Needs more investigation
  * after FIP is assigned vm lost network connection
  * https://bugs.launchpad.net/neutron/+bug/1882860
  * may be related to https://github.com/ovn-org/ovn/commit/fda9a1dd3c995f25cad9e828e701f8b41d347bbb
  * or DVR + VLAN provider network
  * internal server error on updating no-gateway on the dhcpv6 subnet
  * https://bugs.launchpad.net/neutron/+bug/1882873
  * needs more info about reproducer
* High
  * Fix flows l2 population related on br-tun being cleaned after RabbitMQ cluster has experienced a network partition
  * https://bugs.launchpad.net/neutron/+bug/1883071
  * Triaged as High as it has an impact on tenant traffic, because sometimes the flows related to tunnels in L2 pop get cleaned
  * bagpipe: bagpipe-bgp does not start with EVPN and OVS driver
  * https://bugs.launchpad.net/neutron/+bug/1883102
  * More options need to be passed to OVS in EVPN
  * Neutron OpenvSwitch DVR - connection problem
  * https://bugs.launchpad.net/neutron/+bug/1883321
  * traffic of one VM on a compute node influences traffic of another VM, only on DVR ml2/ovs
* Medium
  * [L3] floating IP failed to bind due to no agent gateway port (fip-ns)
  * https://bugs.launchpad.net/neutron/+bug/1883089
  * Marked as Medium by reporter
  * Has a fix on review https://review.opendev.org/#/c/735432/
* Low
  * [neutron-tempest-plugin] "wait_for_interface_status" only retrieves the Nova server information, not the VM network
  * https://bugs.launchpad.net/neutron/+bug/1883095
  * Fix proposed: https://review.opendev.org/#/c/735117/
*
RFE
  * RFE: allow replacing the QoS policy of bound port
  * https://bugs.launchpad.net/neutron/+bug/1882804
  * needs to be triaged by the drivers team
From zhang.lei.fly at gmail.com Mon Jun 15 09:55:17 2020 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 15 Jun 2020 17:55:17 +0800 Subject: [ironic][centos]grubaa64.efi request different filename In-Reply-To: References: Message-ID: Yeah, I have set up an ironic node based on the doc, thanks a lot. --- Regards, Jeffrey Zhang Blog: http://xcodest.me On Thu, Jun 11, 2020 at 10:16 PM Kaifeng Wang wrote: > Hi Jeffrey, > > Different firmware may have different search paths; ironic takes a simple path: it has a main grub.cfg that distributes a request to the correct configuration file using grub built-in variables. You can get the detailed steps here [1]. > > [1] > https://docs.openstack.org/ironic/latest/install/configure-pxe.html#uefi-pxe-grub-setup > > // kaifeng > > On Thu, Jun 11, 2020 at 9:39 PM Jeffrey Zhang > wrote: >> hey guys, >> >> I am testing a standalone ironic node on an arm64 node with CentOS 7. >> dnsmasq is configured as follows >> >> ``` >> enable-tftp >> tftp-root=/var/lib/ironic/public/boot/tftp >> >> # dhcp-option=option:router,192.168.122.1 >> # use static >> dhcp-range=10.0.0.167,static,60s >> log-queries >> log-dhcp >> dhcp-match=set:efi-arm64,option:client-arch,11 >> dhcp-boot=tag:efi-arm64,grubaa64.efi >> ``` >> >> The grubaa64.efi file comes from `/boot/efi/EFI/centos/grubaa64.efi` on >> CentOS 7. >> >> But it seems the grubaa64.efi file tries different grub.cfg filenames like >> `grub.cfg-xx-xx-xx-xx` (check below), >> whereas ironic generates filenames like `xx:xx:xx:xx:xx.conf`. Is this a >> bug in ironic, or did I do something wrong?
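For reference, the fallback names in the tftp log below come from grub's standard netboot config search: a MAC-based name is tried first, then the hex-encoded IPv4 address truncated one character at a time, then plain grub.cfg. A minimal sketch (not ironic or grub code) that reproduces that order:

```python
def grub_config_search(mac, ipv4):
    """Config names a UEFI grub netboot client tries, in order:
    MAC-based first, then the hex-encoded IP shortened step by step."""
    names = ["grub.cfg-01-" + mac.lower().replace(":", "-")]
    hex_ip = "%02X%02X%02X%02X" % tuple(int(o) for o in ipv4.split("."))
    for length in range(len(hex_ip), 0, -1):
        names.append("grub.cfg-" + hex_ip[:length])
    names.append("grub.cfg")
    return names

# 10.0.0.171 encodes to 0A0000AB, matching the dnsmasq tftp log entries.
print(grub_config_search("52:54:00:ea:56:f2", "10.0.0.171"))
```

This is why ironic's `xx:xx:xx:xx:xx.conf` naming is never requested by the firmware; the documented ironic UEFI grub setup bridges the gap with a main grub.cfg that redirects using grub variables.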
>> >> ``` >> dnsmasq-dhcp[1]: 140425456 vendor class: PXEClient:Arch:00011:UNDI:003000 >> dnsmasq-dhcp[1]: 140425456 DHCPREQUEST(eth1) 10.0.0.171 52:54:00:ea:56:f2 >> dnsmasq-dhcp[1]: 140425456 tags: known, efi-arm64, eth1 >> dnsmasq-dhcp[1]: 140425456 DHCPACK(eth1) 10.0.0.171 52:54:00:ea:56:f2 >> pxe-uefi >> dnsmasq-dhcp[1]: 140425456 requested options: 1:netmask, 2:time-offset, >> 3:router, 4, 5, >> dnsmasq-dhcp[1]: 140425456 requested options: 6:dns-server, 12:hostname, >> 13:boot-file-size, >> dnsmasq-dhcp[1]: 140425456 requested options: 15:domain-name, >> 17:root-path, 18:extension-path, >> dnsmasq-dhcp[1]: 140425456 requested options: 22:max-datagram-reassembly, >> 23:default-ttl, >> dnsmasq-dhcp[1]: 140425456 requested options: 28:broadcast, >> 40:nis-domain, 41:nis-server, >> dnsmasq-dhcp[1]: 140425456 requested options: 42:ntp-server, >> 43:vendor-encap, 50:requested-address, >> dnsmasq-dhcp[1]: 140425456 requested options: 51:lease-time, >> 54:server-identifier, 58:T1, >> dnsmasq-dhcp[1]: 140425456 requested options: 59:T2, 60:vendor-class, >> 66:tftp-server, 67:bootfile-name, >> dnsmasq-dhcp[1]: 140425456 requested options: 97:client-machine-id, 128, >> 129, 130, 131, >> dnsmasq-dhcp[1]: 140425456 requested options: 132, 133, 134, 135 >> dnsmasq-dhcp[1]: 140425456 next server: 10.0.0.167 >> dnsmasq-dhcp[1]: 140425456 broadcast response >> dnsmasq-dhcp[1]: 140425456 sent size: 1 option: 53 message-type 5 >> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 54 server-identifier >> 10.0.0.167 >> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 51 lease-time 2m >> dnsmasq-dhcp[1]: 140425456 sent size: 13 option: 67 bootfile-name >> grubaa64.efi >> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 58 T1 1m >> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 59 T2 1m45s >> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 1 netmask 255.255.255.0 >> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 28 broadcast 10.0.0.255 >> dnsmasq-dhcp[1]: 140425456 sent size: 4 
option: 3 router 10.0.0.167 >> dnsmasq-dhcp[1]: 140425456 sent size: 4 option: 6 dns-server 10.0.0.167 >> dnsmasq-dhcp[1]: 140425456 sent size: 8 option: 12 hostname pxe-uefi >> dnsmasq-tftp[1]: error 8 User aborted the transfer received from >> 10.0.0.171 >> dnsmasq-tftp[1]: failed sending >> /var/lib/ironic/public/boot/tftp/grubaa64.efi to 10.0.0.171 >> dnsmasq-tftp[1]: sent /var/lib/ironic/public/boot/tftp/grubaa64.efi to >> 10.0.0.171 >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/grub.cfg-01-52-54-00-ea-56-f2 not found >> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0000AB >> not found >> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0000A >> not found >> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0000 >> not found >> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A000 not >> found >> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A00 not >> found >> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A0 not >> found >> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0A not >> found >> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg-0 not >> found >> dnsmasq-tftp[1]: file /var/lib/ironic/public/boot/tftp/grub.cfg not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-01-52-54-00-ea-56-f2 >> not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0000AB not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0000A not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0000 not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A000 not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A00 not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A0 not 
found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0A not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg-0 not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/grub.cfg not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/command.lst not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/fs.lst not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/crypto.lst not found >> dnsmasq-tftp[1]: file >> /var/lib/ironic/public/boot/tftp/EFI/centos/arm64-efi/terminal.lst not found >> ``` >> --- >> Regards, >> Jeffrey Zhang >> Blog: http://xcodest.me >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nazancengiz at havelsan.com.tr Mon Jun 15 13:10:53 2020 From: nazancengiz at havelsan.com.tr (=?utf-8?B?TmF6YW4gQ0VOR8SwWg==?=) Date: Mon, 15 Jun 2020 13:10:53 +0000 Subject: nova compute rbd backend error In-Reply-To: <15b20698a0bd41d090f3b79fda2f0dee@havelsan.com.tr> References: <15b20698a0bd41d090f3b79fda2f0dee@havelsan.com.tr> Message-ID: I've wanted to use ceph storage as a backend for nova, glance and cinder. I've succeded to use ceph for cinder and glance but I couldn't at nova. I have saw this bug for "Stein" version ---> https://bugs.launchpad.net/nova/stein/+bug/1860990 After that, I've entered the nova-compute pod and I've run below commands; apt install python-rbd apt install python3-rbd But it didn't work. I've gotten same error. A solution must be on the K8s side. Do you have any of ideas about that? Nova compute image; [cid:3132c647-854a-4414-b04c-d40d6637cfc3] Best Regards, Nazan [cid:image65cb84.PNG at b431e523.428f4649] [cid:imagea0344c.JPG at 58751da3.4a9c82d4] Nazan CENGİZ AR-GE MÜHENDİSİ Mustafa Kemal Mahallesi 2120 Cad. 
No:39 06510 Çankaya Ankara TÜRKİYE [cid:imageee2c6d.PNG at 1c63e1af.47be051b] +90 312 219 57 87 [cid:image8666e0.PNG at d175d91d.47bf87e0] +90 312 219 57 97 [cid:image84a401.JPG at 9630949c.47a4f234] YASAL UYARI: Bu elektronik posta işbu linki kullanarak ulaşabileceğiniz Koşul ve Şartlar dokümanına tabidir. LEGAL NOTICE: This e-mail is subject to the Terms and Conditions document which can be accessed with this link. Lütfen gerekmedikçe bu sayfa ve eklerini yazdırmayınız / Please consider the environment before printing this email -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pastedImage.png Type: image/png Size: 30332 bytes Desc: pastedImage.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image65cb84.PNG Type: image/png Size: 4676 bytes Desc: image65cb84.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagea0344c.JPG Type: image/jpeg Size: 305 bytes Desc: imagea0344c.JPG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imageee2c6d.PNG Type: image/png Size: 360 bytes Desc: imageee2c6d.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image8666e0.PNG Type: image/png Size: 313 bytes Desc: image8666e0.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image84a401.JPG Type: image/jpeg Size: 28166 bytes Desc: image84a401.JPG URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: novalog.txt URL: From nazancengiz at havelsan.com.tr Mon Jun 15 13:31:45 2020 From: nazancengiz at havelsan.com.tr (=?utf-8?B?TmF6YW4gQ0VOR8SwWg==?=) Date: Mon, 15 Jun 2020 13:31:45 +0000 Subject: Fw: nova compute rbd backend error In-Reply-To: References: <15b20698a0bd41d090f3b79fda2f0dee@havelsan.com.tr>, Message-ID: Hi all, I've wanted to use ceph storage as a backend for nova, glance and cinder. I've succeded to use ceph for cinder and glance but I couldn't at nova. I have saw this bug for "Stein" version ---> https://bugs.launchpad.net/nova/stein/+bug/1860990 After that, I've entered the nova-compute pod and I've run below commands; apt install python-rbd apt install python3-rbd But it didn't work. I've gotten same error. A solution must be on the K8s side. Do you have any of ideas about that? Nova compute image; [cid:3132c647-854a-4414-b04c-d40d6637cfc3] Best Regards, Nazan [cid:image230683.PNG at 774a6097.409a32b2] [cid:image58bbef.JPG at ee4dc158.49a5b52b] Nazan CENGİZ AR-GE MÜHENDİSİ Mustafa Kemal Mahallesi 2120 Cad. No:39 06510 Çankaya Ankara TÜRKİYE [cid:image242e04.PNG at 1693005c.449cd239] +90 312 219 57 87 [cid:imagefe23b4.PNG at 1d0020c6.40b01938] +90 312 219 57 97 [cid:image6dc52d.JPG at d0d24803.4b93baf8] YASAL UYARI: Bu elektronik posta işbu linki kullanarak ulaşabileceğiniz Koşul ve Şartlar dokümanına tabidir. LEGAL NOTICE: This e-mail is subject to the Terms and Conditions document which can be accessed with this link. Lütfen gerekmedikçe bu sayfa ve eklerini yazdırmayınız / Please consider the environment before printing this email -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pastedImage.png Type: image/png Size: 30332 bytes Desc: pastedImage.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image230683.PNG
Type: image/png
Size: 4676 bytes
Desc: image230683.PNG
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: novalog.txt
URL: 
From nazancengiz at havelsan.com.tr Mon Jun 15 13:33:35 2020
From: nazancengiz at havelsan.com.tr (=?utf-8?B?TmF6YW4gQ0VOR8SwWg==?=)
Date: Mon, 15 Jun 2020 13:33:35 +0000
Subject: Fw: nova compute rbd backend error
In-Reply-To: 
References: <15b20698a0bd41d090f3b79fda2f0dee@havelsan.com.tr>, , 
Message-ID: 

Hi all,

I want to use Ceph storage as a backend for Nova, Glance and Cinder. I've succeeded in using Ceph for Cinder and Glance, but I couldn't for Nova.

I have seen this bug for the "Stein" version ---> https://bugs.launchpad.net/nova/stein/+bug/1860990

After that, I entered the nova-compute pod and ran the commands below;

apt install python-rbd
apt install python3-rbd

But it didn't work; I got the same error. A solution must be on the K8s side. Do you have any ideas about that?
Nova compute image; [cid:3132c647-854a-4414-b04c-d40d6637cfc3]

The error is below;

2020-06-15 12:32:23.008 21543 ERROR nova.compute.manager
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager [req-eb6c1020-ad0d-43a5-981f-24e55e9179aa - - - - -] Error updating resources for node telco1-srv-16.local.: RuntimeError: rbd python libraries not found
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager Traceback (most recent call last):
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/compute/manager.py", line 8256, in _update_available_resource_for_node
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager     startup=startup)
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/compute/resource_tracker.py", line 732, in update_available_resource
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7084, in get_available_resource
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager     disk_info_dict = self._get_local_gb_info()
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 5787, in _get_local_gb_info
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager     info = LibvirtDriver._get_rbd_driver().get_pool_info()
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1216, in _get_rbd_driver
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager     rbd_user=CONF.libvirt.rbd_user)
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 128, in __init__
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager     raise RuntimeError(_('rbd python libraries not found'))
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager RuntimeError: rbd python libraries not found
2020-06-15 12:33:22.613 21543 ERROR nova.compute.manager
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager [req-eb6c1020-ad0d-43a5-981f-24e55e9179aa - - - - -] Error updating resources for node telco1-srv-16.local.: RuntimeError: rbd python libraries not found
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager Traceback (most recent call last):
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/compute/manager.py", line 8256, in _update_available_resource_for_node
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager     startup=startup)
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/compute/resource_tracker.py", line 732, in update_available_resource
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 7084, in get_available_resource
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager     disk_info_dict = self._get_local_gb_info()
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 5787, in _get_local_gb_info
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager     info = LibvirtDriver._get_rbd_driver().get_pool_info()
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1216, in _get_rbd_driver
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager     rbd_user=CONF.libvirt.rbd_user)
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 128, in __init__
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager     raise RuntimeError(_('rbd python libraries not found'))
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager RuntimeError: rbd python libraries not found

Best Regards,
Nazan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
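[Editorial note for readers hitting the same traceback: `apt install python3-rbd` inside the pod puts the Ceph bindings into the system interpreter's dist-packages, while the paths in the log above suggest nova runs from a virtualenv under /var/lib/openstack, which by default does not search system site-packages. A minimal, hypothetical check — only the venv path comes from the traceback; everything else here is illustrative:]

```python
# Sketch: test whether THIS interpreter can see the Ceph "rbd" bindings.
# A venv such as /var/lib/openstack only searches its own site-packages,
# so a deb installed into /usr/lib/python3/dist-packages stays invisible
# to it, which matches the "rbd python libraries not found" failure above.
import importlib.util
import sys


def rbd_available() -> bool:
    """Return True when the running interpreter can import 'rbd'."""
    return importlib.util.find_spec("rbd") is not None


if __name__ == "__main__":
    print("interpreter:", sys.executable)
    print("rbd importable:", rbd_available())
```

[Running the snippet with the venv's interpreter (e.g. /var/lib/openstack/bin/python) versus /usr/bin/python3 shows where the package actually landed; the usual remedies are rebuilding the nova image with the rbd bindings included, or creating the venv with access to system site-packages.]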
Name: image8c11a6.JPG
Type: image/jpeg
Size: 28166 bytes
Desc: image8c11a6.JPG
URL: 
From cgoncalves at redhat.com Mon Jun 15 15:12:51 2020
From: cgoncalves at redhat.com (Carlos Goncalves)
Date: Mon, 15 Jun 2020 17:12:51 +0200
Subject: Upstream Openstack Stein + Octavia - why is the installation procedure so different than other projects ?
In-Reply-To: <04fdb8f56a814f9b98e98f16d21b9de4@teoco.com>
References: <04fdb8f56a814f9b98e98f16d21b9de4@teoco.com>
Message-ID: 

A new "Install and configure" section was created during the Ussuri release, and it should be applicable to a Stein cloud. Please refer to https://docs.openstack.org/octavia/latest/install/install.html

On Mon, Jun 15, 2020 at 5:07 PM Alfia, Zion wrote:

> Hello,
>
> Opposed to other Openstack Stein projects’ installation procedure
> documentation – Octavia is no less than ... vague.
>
> https://docs.openstack.org/octavia/stein/contributor/guides/dev-quick-start.html#running-octavia-in-production
>
> Why is that ?
>
> Kind Regards,
>
> Zion
>
> ------------------------------
>
> PRIVILEGED AND CONFIDENTIAL
> PLEASE NOTE: The information contained in this message is privileged and
> confidential, and is intended only for the use of the individual to whom it
> is addressed and others who have been specifically authorized to receive
> it. If you are not the intended recipient, you are hereby notified that any
> dissemination, distribution or copying of this communication is strictly
> prohibited. If you have received this communication in error, or if any
> problems occur with transmission, please contact sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From nazancengiz at havelsan.com.tr Mon Jun 15 15:13:56 2020
From: nazancengiz at havelsan.com.tr (=?utf-8?B?TmF6YW4gQ0VOR8SwWg==?=)
Date: Mon, 15 Jun 2020 15:13:56 +0000
Subject: Fw: 
In-Reply-To: <531ea54ca9184317afcfb3e802872bc1@havelsan.com.tr>
References: , <8562e9f842ca41d99bfe04c6bc4b1d94@havelsan.com.tr>, , <9d1a3b8f89e64148a7f8c172b3d6e7af@havelsan.com.tr>, <531ea54ca9184317afcfb3e802872bc1@havelsan.com.tr>
Message-ID: 

Hi,

I want to use Ceph storage as a backend for Nova, Glance and Cinder. I've succeeded in using Ceph for Cinder and Glance, but I couldn't for Nova. I have seen this bug for the "Stein" version ---> https://bugs.launchpad.net/nova/stein/+bug/1860990

After that, I entered the nova-compute pod and ran the commands below;

apt install python-rbd or python3-rbd

I got the same error. Do you have any ideas about that?

error;

ERROR nova.compute.manager   File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1216, in _get_rbd_driver
ERROR nova.compute.manager     raise RuntimeError(_('rbd python libraries not found'))
nova.compute.manager RuntimeError: rbd python libraries not found

Nova compute image; openstackhelm/nova:stein-ubuntu_bionic

thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From Zion.Alfia at teoco.com Mon Jun 15 15:56:11 2020
From: Zion.Alfia at teoco.com (Alfia, Zion)
Date: Mon, 15 Jun 2020 15:56:11 +0000
Subject: Upstream Openstack Stein + Octavia - why is the installation procedure so different than other projects ?
In-Reply-To: 
References: <04fdb8f56a814f9b98e98f16d21b9de4@teoco.com>, 
Message-ID: 

Thank you very much for the reply. The link seems to address only Ubuntu-based installations. I've set up my upstream Stein installation on CentOS. When is the CentOS Octavia installation procedure due?

Kind Regards,
Zion

On Jun 15, 2020 18:13, Carlos Goncalves wrote:

A new "Install and configure" section was created during the Ussuri release, and it should be applicable to a Stein cloud. Please refer to https://docs.openstack.org/octavia/latest/install/install.html

On Mon, Jun 15, 2020 at 5:07 PM Alfia, Zion wrote:

Hello,

Opposed to other Openstack Stein projects’ installation procedure documentation – Octavia is no less than ... vague.

https://docs.openstack.org/octavia/stein/contributor/guides/dev-quick-start.html#running-octavia-in-production

Why is that ?
Kind Regards,

Zion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From juliaashleykreger at gmail.com Mon Jun 15 16:16:52 2020
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Mon, 15 Jun 2020 09:16:52 -0700
Subject: [ironic] code review jam - Thursday @ 4 PM UTC
Message-ID: 

Greetings everyone,

Ironic has a bit of a review backlog to work through, and in order to help facilitate getting through the backlog we're going to hold a review jam on Thursday of this week at 4 PM UTC. Everyone is welcome, but we're likely going to focus on the items needing review or quick discussion that are inside our review queue. For this, we will use meetpad [0].

Thanks everyone!
-Julia 0: https://meetpad.opendev.org/ironic From mark at stackhpc.com Mon Jun 15 16:54:13 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 15 Jun 2020 17:54:13 +0100 Subject: [kolla] Ussuri release Message-ID: Hi, I think it's time we made our Ussuri release. I've been through open bugs and patches to try to work out what are our release blockers. Remember that we can and will follow up with fixes, let's keep it to actual release blockers and release some code for people to use. I've started a list on the whiteboard [1] line 105. Please add items there that you think are release critical. [1] https://etherpad.opendev.org/p/KollaWhiteBoard Thanks, Mark From nazancengiz at havelsan.com.tr Mon Jun 15 17:36:37 2020 From: nazancengiz at havelsan.com.tr (=?utf-8?B?TmF6YW4gQ0VOR8SwWg==?=) Date: Mon, 15 Jun 2020 17:36:37 +0000 Subject: Fw: In-Reply-To: References: , <8562e9f842ca41d99bfe04c6bc4b1d94@havelsan.com.tr>, , <9d1a3b8f89e64148a7f8c172b3d6e7af@havelsan.com.tr>, <531ea54ca9184317afcfb3e802872bc1@havelsan.com.tr>, Message-ID: <174eaa340de04988bdb177a8b2f76df7@havelsan.com.tr> Hi I've wanted to use ceph storage as a backend for nova, glance and cinder. I've succeded to use ceph for cinder and glance but I couldn't at nova. I have saw this bug for "Stein" version ---> https://bugs.launchpad.net/nova/stein/+bug/1860990 on nova-compute pod and I've run below commands; apt install python-rbd or pyhon3-rbd I've gotten same error. error; ERROR nova.compute.manager File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1216, in _get_rbd_driver ERROR nova.compute.manager raise RuntimeError(_('rbd python libraries not found')) nova.compute.manager RuntimeError: rbd python libraries not found Nova compute image;openstackhelm/nova:stein-ubuntu_bionic thanks. [cid:image4f4a19.PNG at 2e7b221e.449de25f] [cid:imageab11f5.JPG at af81021d.42bd551f] Nazan CENGİZ AR-GE MÜHENDİSİ Mustafa Kemal Mahallesi 2120 Cad. 
No:39 06510 Çankaya Ankara TÜRKİYE +90 312 219 57 87 +90 312 219 57 97 LEGAL NOTICE: This e-mail is subject to the Terms and Conditions document which can be accessed with this link. Please consider the environment before printing this email -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image4f4a19.PNG Type: image/png Size: 4676 bytes Desc: image4f4a19.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imageab11f5.JPG Type: image/jpeg Size: 305 bytes Desc: imageab11f5.JPG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image77e50e.PNG Type: image/png Size: 360 bytes Desc: image77e50e.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image0166d5.PNG Type: image/png Size: 313 bytes Desc: image0166d5.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image948e01.JPG Type: image/jpeg Size: 28166 bytes Desc: image948e01.JPG URL: From ewan.hamilton at managed.co.uk Mon Jun 15 17:43:15 2020 From: ewan.hamilton at managed.co.uk (Ewan Hamilton) Date: Mon, 15 Jun 2020 17:43:15 +0000 Subject: python-dracclient Message-ID: Hi guys, Your documentation for python-dracclient begins here: https://docs.openstack.org/python-dracclient/latest/usage.html with Usage Create a client object by providing the connection details of the DRAC card: client = wsmanclient.client.DRACClient('1.2.3.4', 'username', 's3cr3t') There is no import statement - and when I have searched google and found "import dracclient" because the assumed "import python-dracclient" doesn't work due to a hyphen (why would you name your module with a hyphen in the first place?!), it doesn't recognise "wsmanclient" in the editor still. Can you see just how frustrating this is for someone who expects documentation that actually works and explains how to actually use the module? This is a pathetic job here. Ewan Hamilton 3rd Line Networks Analyst t: 0800 033 4800 ext. 2212 | m: 07772 001 625 ​Technology House, 151 Silbury Boulevard, Milton Keynes, MK9 1LH managed. is a trading name of Managed247 Ltd ​Managed247 Ltd is a company registered in England and Wales under number 7019261 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image799935.png Type: image/png Size: 6379 bytes Desc: image799935.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image682018.png Type: image/png Size: 833 bytes Desc: image682018.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image056713.png Type: image/png Size: 959 bytes Desc: image056713.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image193517.png Type: image/png Size: 1196 bytes Desc: image193517.png URL: From gmann at ghanshyammann.com Mon Jun 15 17:57:25 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 15 Jun 2020 12:57:25 -0500 Subject: [tc][uc][all] Starting community-wide goals ideas for V series In-Reply-To: <171e602423a.de63edb72241.7523160282331774635@ghanshyammann.com> References: <17016a63ba1.dc0cafe2322988.5181705946513725916@ghanshyammann.com> <69323bb6-c236-9634-14dd-e93736428795@debian.org> <171e602423a.de63edb72241.7523160282331774635@ghanshyammann.com> Message-ID: <172b920f947.10748ffe6284814.1941384486790959504@ghanshyammann.com> Hello Everyone, Sending the final updates on Victoria cycle goal selection (on top of email for easy read). Community-wide goals for the Victoria cycle are finalized: - https://governance.openstack.org/tc/goals/selected/victoria/index.html Selected goals for Victoria cycle: ---------------------------------------- 1. Switch legacy Zuul jobs to native - https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html 2. Migrate CI/CD jobs to new Ubuntu LTS Focal - https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html Please start planning to work on those goals if not yet done. TC will be starting the W cycle goals process soon. -gmann & njohnston ---- On Tue, 05 May 2020 13:03:59 -0500 Ghanshyam Mann wrote ---- > ---- On Thu, 30 Apr 2020 06:20:29 -0500 Thomas Goirand wrote ---- > > On 2/5/20 7:39 PM, Ghanshyam Mann wrote: > > > Hello everyone, > > > > > > We are in R14 week of Ussuri cycle which means it's time to start the > > > discussions about community-wide goals ideas for the V series. > > > > > > Community-wide goals are important in terms of solving and improving a technical > > > area across OpenStack as a whole. It has a lot more benefits to be considered from > > > users as well as from a developer's perspective.
See [1] for more details about > > > community-wide goals and process. > > > > > > We have the Zuulv3 migration goal already accepted and pre-selected for v cycle. > > > If you are interested in proposing a goal, please write down the idea on this etherpad[2] > > > - https://etherpad.openstack.org/p/YVR-v-series-goals > > > > > > Accordingly, we will start the separate ML discussion over each goal idea. > > > > > > Also, you can refer to the backlogs of community-wide goals from this[3] and ussuri > > > cycle goals[4]. > > > > > > NOTE: TC has defined the goal process schedule[5] to streamline the process and be > > > ready with goals for projects to plan/implement at the start of the cycle. I am > > > hoping to start that schedule for W cycle goals. > > > > > > [1] https://governance.openstack.org/tc/goals/index.html > > > [2] https://etherpad.openstack.org/p/YVR-v-series-goals > > > [3] https://etherpad.openstack.org/p/community-goals > > > [4] https://etherpad.openstack.org/p/PVG-u-series-goals > > > [5] https://governance.openstack.org/tc/goals/#goal-selection-schedule > > > > > > -gmann > > > > I've added 3 major pain points to the etherpad which I think are very > > important for operators: > > > > 8. Get all services to systemd-notify > > > > 9. Make it possible to reload service configurations dynamically without > > restarting daemons > > > > 10. All APIs to provide a /healthcheck URL (like the Keystone one...). > > > > I don't have the time to implement all of this, but those are still super > > useful things to have. Does anyone have the time to work on this? > > #10 looks interesting to me and useful from the user's point of view. I can help with this. > The key thing will be whether we need more generic backends than the file-existence one, for example, DB checks > or service-based backends. > > But we can discuss all those details in separate threads, thanks for bringing this.
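To make the backend question concrete, here is a toy sketch of a /healthcheck with pluggable check backends. This is pure illustration under assumed names; it is not the oslo.middleware healthcheck API, and the disable-file path is a placeholder:

```python
import os

def file_exists_check(path="/etc/myservice/healthcheck_disable"):
    # Healthy unless an operator has dropped a "disable" file in place
    # (the same idea as the existing disable-by-file style backend).
    return not os.path.exists(path)

def db_check(connect=lambda: True):
    # Placeholder for a real connectivity probe (e.g. running "SELECT 1").
    try:
        return bool(connect())
    except Exception:
        return False

def healthcheck(checks):
    """Run every backend; report healthy only if all of them pass."""
    results = {name: check() for name, check in checks.items()}
    status = 200 if all(results.values()) else 503
    return status, results

status, results = healthcheck({"disable_by_file": file_exists_check, "db": db_check})
print(status, results)   # -> 200 {'disable_by_file': True, 'db': True}

bad_status, _ = healthcheck({"db": lambda: False})
print(bad_status)        # -> 503
```

The point of the dict-of-callables shape is that adding a DB or service-based backend is just one more entry, which is the generality question raised above.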
> > -gmann > > > > > Cheers, > > > > Thomas Goirand (zigo) > > > > > > From stephenfin at redhat.com Mon Jun 15 20:12:22 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Mon, 15 Jun 2020 21:12:22 +0100 Subject: python-dracclient In-Reply-To: References: Message-ID: <19d9a83b48c7aa2a63f2d70405f5aca428f577d5.camel@redhat.com> On Mon, 2020-06-15 at 17:43 +0000, Ewan Hamilton wrote: > Hi guys, > > Your documentation for python-dracclient begins here: > https://docs.openstack.org/python-dracclient/latest/usage.html with > > > Usage > Create a client object by providing the connection details of the > DRAC card: > > client = wsmanclient.client.DRACClient('1.2.3.4', 'username', > 's3cr3t') > > > There is no import statement – and when I have searched google and > found “import dracclient” because the assumed “import python- > dracclient” doesn’t work due to a hyphen (why would you name your > module with a hyphen in the first place?!), > it doesn’t recognise “wsmanclient” in the editor still. > > Can you see just how frustrating this is for someone who expects > documentation that actually works and explains how to actually use > the module? > > This is a pathetic job here. This isn't helpful. There's a code of conduct and I suggest you read it before requesting others volunteer their time assisting you. Cheers, Stephen > > > Ewan Hamilton3rd Line Networks Analyst t: 0800 033 4800 > ext. 2212 | > m: 07772 001 625Technology House, 151 Silbury Boulevard, Milton Keyn > es, MK9 1LH > managed. is a trading name of Managed247 Ltd > Managed247 Ltd is a company registered in England and Wales under > number 7019261 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image193517.png Type: image/png Size: 1196 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image056713.png Type: image/png Size: 959 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image682018.png Type: image/png Size: 833 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image799935.png Type: image/png Size: 6379 bytes Desc: not available URL: From juliaashleykreger at gmail.com Mon Jun 15 20:17:25 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 15 Jun 2020 13:17:25 -0700 Subject: python-dracclient In-Reply-To: References: Message-ID: Greetings, Wow! Thanks for the heads up on this. It is kind of shockingly sparse... and I think the docs are being incorrectly published for that library as well. Anyway, I _think_ what the original author meant was for "dracclient" to be imported as "wsmanclient". The structure seems to match up with that. Looking at the existing code, it states "dracclient.client.DRACClient". That changed over four years ago so I think what you're viewing is a very old copy of the documentation that was once published in the openstack project namespace. As for where, I don't think it is presently being published, which makes this problem ultimately worse. I'm going to reach out to the maintainers and inquire if they can fix the larger issue of their docs publishing. Again, Thanks for the heads up. -Julia p.s. 
https://review.opendev.org/735674 On Mon, Jun 15, 2020 at 11:04 AM Ewan Hamilton wrote: > Hi guys, > > > > Your documentation for python-dracclient begins here: > https://docs.openstack.org/python-dracclient/latest/usage.html with > > > > > > Usage > > Create a client object by providing the connection details of the DRAC > card: > > > > client = wsmanclient.client.DRACClient('1.2.3.4', 'username', 's3cr3t') > > > > > > There is no import statement – and when I have searched google and found > “import dracclient” because the assumed “import python-dracclient” doesn’t > work due to a hyphen (why would you name your module with a hyphen in the > first place?!), it doesn’t recognise “wsmanclient” in the editor still. > > > > Can you see just how frustrating this is for someone who expects > documentation that actually works and explains how to actually use the > module? > > > > This is a pathetic job here. > > > > > Ewan Hamilton​ > 3rd Line Networks Analyst > t: 0800 033 4800 > ext. *2212* <2212> | > m: 07772 001 625 > [image: LinkedIn] > [image: Twitter] [image: Website] > > ​Technology House, 151 Silbury Boulevard, Milton Keynes, MK9 1LH > managed. is a trading name of Managed247 Ltd > ​Managed247 Ltd is a company registered in England and Wales under number > 7019261 > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image799935.png Type: image/png Size: 6379 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image682018.png Type: image/png Size: 833 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image056713.png Type: image/png Size: 959 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image193517.png Type: image/png Size: 1196 bytes Desc: not available URL: From Christopher.Dearborn at dell.com Mon Jun 15 20:51:33 2020 From: Christopher.Dearborn at dell.com (Christopher.Dearborn at dell.com) Date: Mon, 15 Jun 2020 20:51:33 +0000 Subject: python-dracclient In-Reply-To: References: Message-ID: Hey Ewan, Good to meet you! We’re thrilled that you are looking at using python-dracclient to work with your Dell EMC servers. Python-dracclient was originally developed for use by the iDRAC driver in the Ironic project. The iDRAC driver documentation needed quite a bit of work as well, and I’m happy to say that we did overhaul it in a recent release. Since python-dracclient has been a library primarily used by Ironic, the documentation for it has been overlooked up to this point. We instead have focused on adding new features and functionality, as well as fixing the occasional bug in both python-dracclient and the Ironic iDRAC driver. As a result of your email, we’re looking at updating the documentation for python-dracclient in our upcoming release. In the meantime, there are some great opensource examples that use python-dracclient, which will hopefully be helpful to you as examples: * This is a simple utility that checks to see if the iDRAC is ready to receive commands: * https://github.com/dsp-jetpack/JetPack/blob/master/src/pilot/is_idrac_ready.py * Note that this “is ready?” check is built into python-dracclient, so it’s not something you will need to do in your code. You could replace the call to is_idrac_ready() in the example with another call though. * This is a utility that resets the iDRAC, clears the job queue, then configures the boot mode, boot device, iDRAC settings, and it will even optionally change the iDRAC password. 
Note that this script works in a tripleo environment, so it will need some tweaking if you want to use it stand-alone: * https://github.com/dsp-jetpack/JetPack/blob/master/src/pilot/config_idrac.py * This is a utility that discovers iDRACs in an IP range that you provide, including discovering the service tag and server model: * https://github.com/dsp-jetpack/JetPack/blob/master/src/pilot/discover_nodes/discover_nodes.py * And finally, the Ironic iDRAC driver makes extensive use of python-dracclient, but it is also probably the most complicated example: * https://github.com/openstack/ironic/tree/master/ironic/drivers/modules/drac As far as how python-dracclient originally got its name, I'm not really sure about that as we inherited the original repo from other developers who no longer work on the project. I suspect it was picked because it followed the naming conventions of at least some other library repos in OpenStack at that time. The WSManClient class is defined in https://github.com/openstack/python-dracclient/blob/master/dracclient/client.py, and it is a class that is only used internally to python-dracclient. To use python-dracclient, you should only have to: from dracclient import client Then, you can call any method in the DRACClient class here: https://github.com/openstack/python-dracclient/blob/master/dracclient/client.py#L41 If you want to view the python-dracclient code in a development environment editor, then you would need to modify the PYTHONPATH or equivalent in the dev environment to include the path to the directory containing python-dracclient/dracclient/client.py, and then it should be able to resolve everything. Feel free to reply on this list if you need a hand, or you can always email me directly. Thanks and happy hacking!
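Putting the import together with the pointers above, a minimal sketch might look like the following. The iDRAC address and credentials are placeholders, and the import is guarded so the snippet degrades gracefully on machines where python-dracclient is not installed:

```python
# Guarded import: python-dracclient may not be installed everywhere.
try:
    from dracclient import client
except ImportError:
    client = None

def make_drac_client(host, username, password):
    """Return a DRACClient for the given iDRAC, or None if the library is absent."""
    if client is None:
        return None
    # DRACClient takes the iDRAC address and credentials positionally.
    return client.DRACClient(host, username, password)

if __name__ == "__main__":
    drac = make_drac_client("192.0.2.1", "root", "calvin")  # placeholder values
    if drac is None:
        print("python-dracclient is not installed")
    else:
        # Any DRACClient method can be called from here, e.g. the built-in
        # readiness check mentioned above:
        print(drac.is_idrac_ready())
```

From there, the JetPack and Ironic links above are the best references for which DRACClient methods to call for real workflows.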
Chris Dearborn Software Sr Principal Engr Dell EMC | Service Provider Engineering Christopher.Dearborn at Dell.com From: Ewan Hamilton Sent: Monday, June 15, 2020 1:43 PM To: openstack-discuss at lists.openstack.org Subject: python-dracclient [EXTERNAL EMAIL] Hi guys, Your documentation for python-dracclient begins here: https://docs.openstack.org/python-dracclient/latest/usage.html with Usage Create a client object by providing the connection details of the DRAC card: client = wsmanclient.client.DRACClient('1.2.3.4', 'username', 's3cr3t') There is no import statement – and when I have searched google and found “import dracclient” because the assumed “import python-dracclient” doesn’t work due to a hyphen (why would you name your module with a hyphen in the first place?!), it doesn’t recognise “wsmanclient” in the editor still. Can you see just how frustrating this is for someone who expects documentation that actually works and explains how to actually use the module? This is a pathetic job here. [cid:image001.png at 01D6432D.E183F840] Ewan Hamilton​ 3rd Line Networks Analyst t: 0800 033 4800 ext. 2212 | m: 07772 001 625 [LinkedIn] [Twitter] [Website] ​Technology House, 151 Silbury Boulevard, Milton Keynes, MK9 1LH managed. is a trading name of Managed247 Ltd ​Managed247 Ltd is a company registered in England and Wales under number 7019261 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 6379 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 833 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 959 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image004.png Type: image/png Size: 1196 bytes Desc: image004.png URL: From ewan.hamilton at managed.co.uk Mon Jun 15 21:22:01 2020 From: ewan.hamilton at managed.co.uk (Ewan Hamilton) Date: Mon, 15 Jun 2020 21:22:01 +0000 Subject: python-dracclient In-Reply-To: References: Message-ID: Hi guys, sorry for my tone in the post – I was extremely frustrated at how a simple thing can be missed out. I write documentation for staff at my work and I know just how basic you have to be in order for them to understand it. I was able to get this working using: from dracclient import client service_instance = connect.SmartConnectNoSSL(host="x", user="x", pwd="x") Looking into it further, the author has another project called python-wsmanclient so I think he just got confused writing the documentation examples. My apologies again for my tone – I’m happy again now that it’s working! Ewan Ewan Hamilton 3rd Line Networks Analyst t: 0800 033 4800 ext. 2212 | m: 07772 001 625 ​Technology House, 151 Silbury Boulevard, Milton Keynes, MK9 1LH managed. is a trading name of Managed247 Ltd ​Managed247 Ltd is a company registered in England and Wales under number 7019261 From: Julia Kreger Sent: 15 June 2020 21:17 To: Ewan Hamilton Cc: openstack-discuss at lists.openstack.org Subject: Re: python-dracclient THIS MESSAGE ORIGINATED OUTSIDE YOUR ORGANISATION ________________________________ Greetings, Wow! Thanks for the heads up on this. It is kind of shockingly sparse... and I think the docs are being incorrectly published for that library as well. Anyway, I _think_ what the original author meant was for "dracclient" to be imported as "wsmanclient". The structure seems to match up with that. Looking at the existing code, it states "dracclient.client.DRACClient". That changed over four years ago so I think what you're viewing is a very old copy of the documentation that was once published in the openstack project namespace. 
As for where, I don't think it is presently being published, which makes this problem ultimately worse. I'm going to reach out to the maintainers and inquire if they can fix the larger issue of their docs publishing. Again, Thanks for the heads up. -Julia p.s. https://review.opendev.org/735674 On Mon, Jun 15, 2020 at 11:04 AM Ewan Hamilton > wrote: Hi guys, Your documentation for python-dracclient begins here: https://docs.openstack.org/python-dracclient/latest/usage.html with Usage Create a client object by providing the connection details of the DRAC card: client = wsmanclient.client.DRACClient('1.2.3.4', 'username', 's3cr3t') There is no import statement – and when I have searched google and found “import dracclient” because the assumed “import python-dracclient” doesn’t work due to a hyphen (why would you name your module with a hyphen in the first place?!), it doesn’t recognise “wsmanclient” in the editor still. Can you see just how frustrating this is for someone who expects documentation that actually works and explains how to actually use the module? This is a pathetic job here. [cid:image001.png at 01D64363.2BCCBE70] Ewan Hamilton​ 3rd Line Networks Analyst t: 0800 033 4800 ext. 2212 | m: 07772 001 625 [LinkedIn] [Twitter] [Website] ​Technology House, 151 Silbury Boulevard, Milton Keynes, MK9 1LH managed. is a trading name of Managed247 Ltd ​Managed247 Ltd is a company registered in England and Wales under number 7019261 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 6379 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 833 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.png Type: image/png Size: 959 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 1196 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image672279.png Type: image/png Size: 6379 bytes Desc: image672279.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image340246.png Type: image/png Size: 833 bytes Desc: image340246.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image818723.png Type: image/png Size: 959 bytes Desc: image818723.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image841784.png Type: image/png Size: 1196 bytes Desc: image841784.png URL: From sorrison at gmail.com Tue Jun 16 01:21:12 2020 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 16 Jun 2020 11:21:12 +1000 Subject: [neutron] Tap-as-a-service releases on pypi behind Message-ID: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> It looks like the tap-as-a-service project isn’t getting its latest releases pushed to pypi and this is breaking networking-midonet in the train release. Source [1] has tags for releases 4.0.0, 5.0.0 and 6.0.0 but these aren’t in pypi [2] I need a fix [3] to upper-constraints to get this to work but just realised pypi is behind Can someone help me please. Thanks, Sam [1] https://opendev.org/x/tap-as-a-service/ [2] https://pypi.org/project/tap-as-a-service/ [3] https://review.opendev.org/#/c/735754/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From katonalala at gmail.com Tue Jun 16 04:04:47 2020 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 16 Jun 2020 06:04:47 +0200 Subject: [neutron] Tap-as-a-service releases on pypi behind In-Reply-To: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> References: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> Message-ID: Hi, I can check it. Regards Lajos Sam Morrison ezt írta (időpont: 2020. jún. 16., K, 3:25): > It looks like tap-as-a-service project isn’t getting it’s latest releases > pushed to pypi and this is breaking networking-midonet in train release. > > Source [1] has tags for releases 4.0.0, 5.0.0 and 6.0.0 but these aren’t > in pypi [2] > > I need a fix [3] to upper-constraints to get this to work but just > realised pypi is behind > > Can someone help me please. > > > Thanks, > Sam > > > [1] https://opendev.org/x/tap-as-a-service/ > [2] https://pypi.org/project/tap-as-a-service/ > [3] https://review.opendev.org/#/c/735754/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Tue Jun 16 04:19:09 2020 From: hongbin034 at gmail.com (Hongbin Lu) Date: Tue, 16 Jun 2020 00:19:09 -0400 Subject: [kuryr][kuryr-libnetwork] In-Reply-To: <4bfc158cff65127acd989800490cf3a702a0ccbb.camel@redhat.com> References: <4bfc158cff65127acd989800490cf3a702a0ccbb.camel@redhat.com> Message-ID: Michal, Thanks for sending this email. +1 from me of course. Best regards, Hongbin On Mon, Jun 15, 2020 at 10:08 AM Michał Dulko wrote: > Hi, > > Due to shortage of kuryr-libnetwork core reviewers Hongbin proposed > that we should soften the rules a bit and allow merging patches with > just one +2. I'm totally fine with that, especially as the volume of > changes showing up in the project is pretty small. > > If nobody will have anything against the idea, I'll just go on and > update the docs later this week. > > Thanks, > Michał > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From katonalala at gmail.com Tue Jun 16 04:59:56 2020 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 16 Jun 2020 06:59:56 +0200 Subject: [neutron] Tap-as-a-service releases on pypi behind In-Reply-To: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> References: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> Message-ID: Hi Sam, It seems that as taas is not under OpenStack governance, Yamamoto is the only maintainer on pypi, so only he has the right to upload new releases. Regards Lajos Sam Morrison ezt írta (időpont: 2020. jún. 16., K, 3:25): > It looks like tap-as-a-service project isn’t getting its latest releases > pushed to pypi and this is breaking networking-midonet in train release. > > Source [1] has tags for releases 4.0.0, 5.0.0 and 6.0.0 but these aren’t > in pypi [2] > > I need a fix [3] to upper-constraints to get this to work but just > realised pypi is behind > > Can someone help me please. > > > Thanks, > Sam > > > [1] https://opendev.org/x/tap-as-a-service/ > [2] https://pypi.org/project/tap-as-a-service/ > [3] https://review.opendev.org/#/c/735754/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Tue Jun 16 05:09:34 2020 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Tue, 16 Jun 2020 14:09:34 +0900 Subject: [neutron] Tap-as-a-service releases on pypi behind In-Reply-To: References: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> Message-ID: i can add you as a maintainer. please tell me your pypi account. On Tue, Jun 16, 2020 at 2:00 PM Lajos Katona wrote: > > Hi Sam, > It seems that as taas is not under OpenStack governance, Yamamoto is the only maintainer on pypi, so only he has the right to upload new releases. > > Regards > Lajos > > Sam Morrison ezt írta (időpont: 2020. jún. 16., K, 3:25): >> >> It looks like tap-as-a-service project isn’t getting its latest releases pushed to pypi and this is breaking networking-midonet in train release.
>> >> Source [1] has tags for releases 4.0.0, 5.0.0 and 6.0.0 but these aren’t in pypi [2] >> >> I need a fix [3] to upper-constraints to get this to work but just realised pypi is behind >> >> Can someone help me please. >> >> >> Thanks, >> Sam >> >> >> [1] https://opendev.org/x/tap-as-a-service/ >> [2] https://pypi.org/project/tap-as-a-service/ >> [3] https://review.opendev.org/#/c/735754/ >> From katonalala at gmail.com Tue Jun 16 07:04:49 2020 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 16 Jun 2020 09:04:49 +0200 Subject: [neutron] Tap-as-a-service releases on pypi behind In-Reply-To: References: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> Message-ID: Hi Takashi, Thanks, my pypi account name is: lajoskatona https://pypi.org/user/lajoskatona/ Regards Lajos Takashi Yamamoto ezt írta (időpont: 2020. jún. 16., K, 7:09): > i can add you as a maintainer. > please tell me your pypi account. > > On Tue, Jun 16, 2020 at 2:00 PM Lajos Katona wrote: > > > > Hi Sam, > > It seems that as taas is not under openstack governance Yamamo is the > only maintainer on pypi, so he has only right to upload new release. > > > > Regards > > Lajos > > > > Sam Morrison ezt írta (időpont: 2020. jún. 16., K, > 3:25): > >> > >> It looks like tap-as-a-service project isn’t getting it’s latest > releases pushed to pypi and this is breaking networking-midonet in train > release. > >> > >> Source [1] has tags for releases 4.0.0, 5.0.0 and 6.0.0 but these > aren’t in pypi [2] > >> > >> I need a fix [3] to upper-constraints to get this to work but just > realised pypi is behind > >> > >> Can someone help me please. > >> > >> > >> Thanks, > >> Sam > >> > >> > >> [1] https://opendev.org/x/tap-as-a-service/ > >> [2] https://pypi.org/project/tap-as-a-service/ > >> [3] https://review.opendev.org/#/c/735754/ > >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From neil at tigera.io Tue Jun 16 08:58:04 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 16 Jun 2020 09:58:04 +0100 Subject: [neutron] Failed to create a duplicate DefaultSecurityGroup Message-ID: With Ussuri I'm hitting this in the neutron server:

Failed to create a duplicate DefaultSecurityGroup: for attribute(s) ['PRIMARY'] with value(s) 11447be9beda4bf78dab27cdb75058e2
pymysql.err.IntegrityError: (1062, "Duplicate entry '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'")
oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'")
[SQL: INSERT INTO default_security_group (project_id, security_group_id) VALUES (%(project_id)s, %(security_group_id)s)]
[parameters: {'project_id': '11447be9beda4bf78dab27cdb75058e2', 'security_group_id': '9f3a473c-b08a-4cf2-8327-10ecc8b87301'}]
neutron_lib.objects.exceptions.NeutronDbObjectDuplicateEntry: Failed to create a duplicate DefaultSecurityGroup: for attribute(s) ['PRIMARY'] with value(s) 11447be9beda4bf78dab27cdb75058e2

(Those are all, I believe, reports of the same problem, at different levels of the stack.) IIUC, this is triggered by my Neutron driver calling

    rules = self.db.get_security_group_rules(
        context, filters={'security_group_id': sgids}
    )

where the context has project_id 11447be9beda4bf78dab27cdb75058e2. Deep down inside that call, Neutron tries to ensure that there is a default security group for that project, and somehow that hits the reported exception. Here's the code in securitygroups_db.py:

    def _ensure_default_security_group(self, context, tenant_id):
        """Create a default security group if one doesn't exist.

        :returns: the default security group id for given tenant.
        """
        default_group_id = self._get_default_sg_id(context, tenant_id)
        if default_group_id:
            return default_group_id

        security_group = {
            'security_group':
                {'name': 'default',
                 'tenant_id': tenant_id,
                 'description': _('Default security group')}
        }
        return self.create_security_group(context, security_group,
                                          default_sg=True)['id']

Obviously it checks first if the default SG already exists for the project, before creating it if not. So why would that code hit the duplicate exception as shown above? Any ideas welcome! Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Jun 16 10:16:03 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 16 Jun 2020 11:16:03 +0100 Subject: [kolla] Kolla Klub meeting Message-ID: Hi, The next Kolla klub meeting is scheduled for Thursday at 15:00 UTC. We'll have a summary of the recent PTG discussions and some open discussion. Please bring ideas for discussion topics! https://docs.google.com/document/d/1EwQs2GXF-EvJZamEx9vQAOSDB5tCjsDCJyHQN5_4_Sw/edit# Thanks, Mark From skaplons at redhat.com Tue Jun 16 10:46:06 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 16 Jun 2020 12:46:06 +0200 Subject: [neutron][fwaas] Removal of neutron-fwaas projects from the neutron stadium Message-ID: <20200616104606.mndzm2bgkzchq645@skaplons-mac> Hi, In Shanghai PTG we agreed that due to lack of maintainers of the neutron-fwaas project we are going to deprecate it in the neutron stadium in the Ussuri cycle. Since then we asked a couple of times for volunteers who would like to maintain this project but unfortunately there is still a lack of such maintainers. So now, as we are already in the Victoria cycle, I just proposed a series of patches [1] to remove the master branch of neutron-fwaas and neutron-fwaas-dashboard from the neutron stadium.
Stable branches will still be there and can be maintained, but there will be no code in the master branch and no new releases of those 2 projects in Victoria. If You are using this project and want to maintain it, You can respin it in the x/ namespace if needed. Feel free to ping me on IRC (slaweq) or by email if You have any questions about that. [1] https://review.opendev.org/#/q/topic:retire-neutron-fwaas+(status:open+OR+status:merged) -- Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Tue Jun 16 12:12:01 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 16 Jun 2020 14:12:01 +0200 Subject: [neutron] Failed to create a duplicate DefaultSecurityGroup In-Reply-To: References: Message-ID: <20200616121201.bkpdknqyaitumy4n@skaplons-mac> Hi, Can You report a LP bug for that and attach full stack traces from the neutron server? On Tue, Jun 16, 2020 at 09:58:04AM +0100, Neil Jerram wrote: > With Ussuri I'm hitting this in the neutron server: > > Failed to create a duplicate DefaultSecurityGroup: for attribute(s) > ['PRIMARY'] with value(s) 11447be9beda4bf78dab27cdb75058e2 > pymysql.err.IntegrityError: (1062, "Duplicate entry > '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'") > oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, > "Duplicate entry '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'") > [SQL: INSERT INTO default_security_group (project_id, security_group_id) > VALUES (%(project_id)s, %(security_group_id)s)] > [parameters: {'project_id': '11447be9beda4bf78dab27cdb75058e2', > 'security_group_id': '9f3a473c-b08a-4cf2-8327-10ecc8b87301'}] > neutron_lib.objects.exceptions.NeutronDbObjectDuplicateEntry: Failed to > create a duplicate DefaultSecurityGroup: for attribute(s) ['PRIMARY'] with > value(s) 11447be9beda4bf78dab27cdb75058e2 > > (Those are all, I believe, reports of the same problem, at different levels > of the stack.)
> > IIUC, this is triggered by my Neutron driver calling > > rules = self.db.get_security_group_rules( > context, filters={'security_group_id': sgids} > ) > > where the context has project_id 11447be9beda4bf78dab27cdb75058e2. Deep > down inside that call, Neutron tries to ensure that there is a default > security group for that project, and somehow that hits the reported > exception. > > Here's the code in securitygroups_db.py: > > def _ensure_default_security_group(self, context, tenant_id): > """Create a default security group if one doesn't exist. > > :returns: the default security group id for given tenant. > """ > default_group_id = self._get_default_sg_id(context, tenant_id) > if default_group_id: > return default_group_id > > security_group = { > 'security_group': > {'name': 'default', > 'tenant_id': tenant_id, > 'description': _('Default security group')} > } > return self.create_security_group(context, security_group, > default_sg=True)['id'] > > Obviously it checks first if the default SG already exists for the project, > before creating it if not. So why would that code hit the duplicate > exception as shown above? > > Any ideas welcome! > > Best wishes, > Neil -- Slawek Kaplonski Senior software engineer Red Hat From neil at tigera.io Tue Jun 16 12:44:58 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 16 Jun 2020 13:44:58 +0100 Subject: [neutron] Failed to create a duplicate DefaultSecurityGroup In-Reply-To: <20200616121201.bkpdknqyaitumy4n@skaplons-mac> References: <20200616121201.bkpdknqyaitumy4n@skaplons-mac> Message-ID: Thanks Slawek. I'm happy to do that, but I thought I should write here first in case it is some kind of user error, and not really a bug in the Neutron code. On Tue, Jun 16, 2020 at 1:12 PM Slawek Kaplonski wrote: > Hi, > > Can You report a LP bug for that and attach full stack traces from the > neutron > server? 
> > On Tue, Jun 16, 2020 at 09:58:04AM +0100, Neil Jerram wrote: > > With Ussuri I'm hitting this in the neutron server: > > > > Failed to create a duplicate DefaultSecurityGroup: for attribute(s) > > ['PRIMARY'] with value(s) 11447be9beda4bf78dab27cdb75058e2 > > pymysql.err.IntegrityError: (1062, "Duplicate entry > > '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'") > > oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, > > "Duplicate entry '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'") > > [SQL: INSERT INTO default_security_group (project_id, security_group_id) > > VALUES (%(project_id)s, %(security_group_id)s)] > > [parameters: {'project_id': '11447be9beda4bf78dab27cdb75058e2', > > 'security_group_id': '9f3a473c-b08a-4cf2-8327-10ecc8b87301'}] > > neutron_lib.objects.exceptions.NeutronDbObjectDuplicateEntry: Failed to > > create a duplicate DefaultSecurityGroup: for attribute(s) ['PRIMARY'] > with > > value(s) 11447be9beda4bf78dab27cdb75058e2 > > > > (Those are all, I believe, reports of the same problem, at different > levels > > of the stack.) > > > > IIUC, this is triggered by my Neutron driver calling > > > > rules = self.db.get_security_group_rules( > > context, filters={'security_group_id': sgids} > > ) > > > > where the context has project_id 11447be9beda4bf78dab27cdb75058e2. Deep > > down inside that call, Neutron tries to ensure that there is a default > > security group for that project, and somehow that hits the reported > > exception. > > > > Here's the code in securitygroups_db.py: > > > > def _ensure_default_security_group(self, context, tenant_id): > > """Create a default security group if one doesn't exist. > > > > :returns: the default security group id for given tenant. 
> > """ > > default_group_id = self._get_default_sg_id(context, tenant_id) > > if default_group_id: > > return default_group_id > > > > security_group = { > > 'security_group': > > {'name': 'default', > > 'tenant_id': tenant_id, > > 'description': _('Default security group')} > > } > > return self.create_security_group(context, security_group, > > default_sg=True)['id'] > > > > Obviously it checks first if the default SG already exists for the > project, > > before creating it if not. So why would that code hit the duplicate > > exception as shown above? > > > > Any ideas welcome! > > > > Best wishes, > > Neil > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Tue Jun 16 13:10:08 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 16 Jun 2020 14:10:08 +0100 Subject: [neutron][qos] Meeting cancelled Message-ID: Hello: Due to the lack of agenda, I'll cancel today's meeting. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Christopher.Dearborn at dell.com Tue Jun 16 13:46:53 2020 From: Christopher.Dearborn at dell.com (Christopher.Dearborn at dell.com) Date: Tue, 16 Jun 2020 13:46:53 +0000 Subject: python-dracclient In-Reply-To: References: Message-ID: My apologies if this email is a duplicate. Resending because I had some issues with my subscription and can’t tell if the original was actually delivered to the list or not. Thanks, Chris From: Dearborn, Chris Sent: Monday, June 15, 2020 4:52 PM To: 'Ewan Hamilton' Cc: 'openstack-discuss at lists.openstack.org' Subject: RE: python-dracclient Hey Ewan, Good to meet you! We’re thrilled that you are looking at using python-dracclient to work with your Dell EMC servers. Python-dracclient was originally developed for use by the iDRAC driver in the Ironic project. 
The iDRAC driver documentation needed quite a bit of work as well, and I'm happy to say that we did overhaul it in a recent release. Since python-dracclient has been a library primarily used by Ironic, the documentation for it has been overlooked up to this point. We have instead focused on adding new features and functionality, as well as fixing the occasional bug in both python-dracclient and the Ironic iDRAC driver. As a result of your email, we're looking at updating the documentation for python-dracclient in our upcoming release. In the meantime, there are some great open-source examples that use python-dracclient, which will hopefully be helpful to you:

* This is a simple utility that checks to see if the iDRAC is ready to receive commands:
  * https://github.com/dsp-jetpack/JetPack/blob/master/src/pilot/is_idrac_ready.py
  * Note that this "is ready?" check is built into python-dracclient, so it's not something you will need to do in your code. You could replace the call to is_idrac_ready() in the example with another call though.
* This is a utility that resets the iDRAC, clears the job queue, then configures the boot mode, boot device, iDRAC settings, and it will even optionally change the iDRAC password. Note that this script works in a tripleo environment, so it will need some tweaking if you want to use it stand-alone:
  * https://github.com/dsp-jetpack/JetPack/blob/master/src/pilot/config_idrac.py
* This is a utility that discovers iDRACs in an IP range that you provide, including discovering the service tag and server model:
  * https://github.com/dsp-jetpack/JetPack/blob/master/src/pilot/discover_nodes/discover_nodes.py
* And finally, the Ironic iDRAC driver makes extensive use of python-dracclient, but it is also probably the most complicated example:
  * https://github.com/openstack/ironic/tree/master/ironic/drivers/modules/drac

As far as how python-dracclient originally got its name, I'm not really sure about that, as we inherited the original repo from other developers who no longer work on the project. I suspect it was picked because it followed the naming conventions of at least some other library repos in OpenStack at that time. The WSManClient class is defined in https://github.com/openstack/python-dracclient/blob/master/dracclient/client.py, and it is a class that is only used internally to python-dracclient. To use python-dracclient, you should only have to:

    from dracclient import client

Then, you can call any method in the DRACClient class here: https://github.com/openstack/python-dracclient/blob/master/dracclient/client.py#L41 If you want to view the python-dracclient code in a development environment editor, then you would need to modify the PYTHONPATH or equivalent in the dev environment to include the path to the directory containing python-dracclient/dracclient/client.py, and then it should be able to resolve everything. Feel free to reply on this list if you need a hand, or you can always email me directly. Thanks and happy hacking!
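Putting Chris's pointers together, here is a minimal usage sketch. The import path is the one described above (`from dracclient import client`, not `wsmanclient`); the constructor arguments follow the snippet in the docs quoted in this thread. The `get_power_state()` call is an assumption about the DRACClient API made for illustration, and the helper gracefully handles python-dracclient not being installed:

```python
def make_drac_client(host, username, password):
    """Return a DRACClient for the given iDRAC, or None if the
    python-dracclient package is not installed."""
    try:
        # Note the module is "dracclient", not "wsmanclient" as the
        # docs snippet suggests.
        from dracclient import client
    except ImportError:
        return None
    return client.DRACClient(host, username, password)

drac = make_drac_client('1.2.3.4', 'username', 's3cr3t')
if drac is None:
    print("python-dracclient is not installed")
else:
    try:
        # Illustrative call; any DRACClient method could go here.
        print("power state:", drac.get_power_state())
    except Exception as exc:  # unreachable iDRAC, bad credentials, etc.
        print("could not query iDRAC:", exc)
```

Constructing the client does not require network access; only the method calls talk to the iDRAC, so the helper can be built once and reused across operations.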
Chris Dearborn Software Sr Principal Engr Dell EMC | Service Provider Engineering Christopher.Dearborn at Dell.com From: Ewan Hamilton Sent: Monday, June 15, 2020 1:43 PM To: openstack-discuss at lists.openstack.org Subject: python-dracclient [EXTERNAL EMAIL] Hi guys, Your documentation for python-dracclient begins here: https://docs.openstack.org/python-dracclient/latest/usage.html with

    Usage
    Create a client object by providing the connection details of the DRAC card:
    client = wsmanclient.client.DRACClient('1.2.3.4', 'username', 's3cr3t')

There is no import statement – and when I have searched google and found “import dracclient” because the assumed “import python-dracclient” doesn’t work due to a hyphen (why would you name your module with a hyphen in the first place?!), it doesn’t recognise “wsmanclient” in the editor still. Can you see just how frustrating this is for someone who expects documentation that actually works and explains how to actually use the module? -------------- next part -------------- An HTML attachment was scrubbed... URL: From maslan at havelsan.com.tr Tue Jun 16 06:16:45 2020 From: maslan at havelsan.com.tr (Merve ASLAN) Date: Tue, 16 Jun 2020 06:16:45 +0000 Subject: Ceph Integration on Openstack Platform Message-ID: <3a868f5583c140edb68e850b26d68078@havelsan.com.tr> Hi, I've wanted to use ceph storage as a backend for nova, glance and cinder. I've succeeded in using ceph for cinder and glance, but I couldn't for nova. I have seen this bug for the "Stein" version ---> https://bugs.launchpad.net/nova/stein/+bug/1860990 Bug #1860990 “RBD image backend tries to flatten images even if ...” : Series stein : Bugs : OpenStack Compute (nova) bugs.launchpad.net When [DEFAULT]show_multiple_locations option is not set in glance, and both glance and nova use ceph as their backend, with properly configured accesses, nova will fail with the following exception:

2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [req-8021fd76-d5ab-4a9b-bd17-f5eb4d4faf62 0e96a04f360644818632b7e46fe8d3e7 ac01daacc7424a40b8b464a163902dcb - default default] [instance: fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] Instance failed to spawn: rbd.InvalidArgument: [errno 22] error f...

After that, I entered the nova-compute pod and ran the below commands:

    apt install python-rbd
    apt install python3-rbd

But it didn't work; I've gotten the same error. A solution must be on the K8s side. The error:

2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1216, in _get_rbd_driver
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager rbd_user=CONF.libvirt.rbd_user)
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 128, in __init__
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager raise RuntimeError(_('rbd python libraries not found'))
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager RuntimeError: rbd python libraries not found

Nova compute image: openstackhelm/nova:stein-ubuntu_bionic

Do you have any ideas about that? Best Regards, Merve ASLAN System Engineer Mustafa Kemal Mahallesi 2120 Cad.
No:39 06510 Çankaya Ankara TÜRKİYE +90 312 219 57 87 / +90 312 219 57 97 LEGAL NOTICE: This e-mail is subject to the Terms and Conditions document which can be accessed with this link. Please consider the environment before printing this email -------------- next part -------------- An HTML attachment was scrubbed... URL: (signature image attachments scrubbed) From nazancengiz at havelsan.com.tr Tue Jun 16 06:20:11 2020 From: nazancengiz at havelsan.com.tr (Nazan CENGİZ) Date: Tue, 16 Jun 2020 06:20:11 +0000 Subject: Fw: Ceph Integration on Openstack Platform In-Reply-To: <3a868f5583c140edb68e850b26d68078@havelsan.com.tr> References: <3a868f5583c140edb68e850b26d68078@havelsan.com.tr> Message-ID: Hi, I've wanted to use ceph storage as a backend for nova, glance and cinder. I've succeeded in using ceph for cinder and glance, but I couldn't for nova.
I have seen this bug for the "Stein" version ---> https://bugs.launchpad.net/nova/stein/+bug/1860990 Bug #1860990 “RBD image backend tries to flatten images even if ...” : Series stein : Bugs : OpenStack Compute (nova) bugs.launchpad.net When [DEFAULT]show_multiple_locations option is not set in glance, and both glance and nova use ceph as their backend, with properly configured accesses, nova will fail with the following exception:

2020-01-23 14:36:43.617 8647 ERROR nova.compute.manager [req-8021fd76-d5ab-4a9b-bd17-f5eb4d4faf62 0e96a04f360644818632b7e46fe8d3e7 ac01daacc7424a40b8b464a163902dcb - default default] [instance: fa9e4118-1bb1-4d52-a2e1-9f61b0e20dc6] Instance failed to spawn: rbd.InvalidArgument: [errno 22] error f...

After that, I entered the nova-compute pod and ran the below commands:

    apt install python-rbd
    apt install python3-rbd

But it didn't work; I've gotten the same error. A solution must be on the K8s side. The error:

2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1216, in _get_rbd_driver
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager rbd_user=CONF.libvirt.rbd_user)
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager File "/var/lib/openstack/lib/python3.6/site-packages/nova/virt/libvirt/storage/rbd_utils.py", line 128, in __init__
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager raise RuntimeError(_('rbd python libraries not found'))
2020-06-15 12:34:23.617 21543 ERROR nova.compute.manager RuntimeError: rbd python libraries not found

Nova compute image: openstackhelm/nova:stein-ubuntu_bionic

Do you have any ideas about that? Best Regards, Merve ASLAN System Engineer Mustafa Kemal Mahallesi 2120 Cad.
No:39 06510 Çankaya Ankara TÜRKİYE +90 312 219 57 87 / +90 312 219 57 97 LEGAL NOTICE: This e-mail is subject to the Terms and Conditions document which can be accessed with this link. Please consider the environment before printing this email Nazan CENGİZ R&D Engineer Mustafa Kemal Mahallesi 2120 Cad. No:39 06510 Çankaya Ankara TÜRKİYE +90 312 219 57 87 / +90 312 219 57 97 LEGAL NOTICE: This e-mail is subject to the Terms and Conditions document which can be accessed with this link. Please consider the environment before printing this email -------------- next part -------------- An HTML attachment was scrubbed... URL: (signature image attachments scrubbed) From neil at tigera.io Tue Jun 16 15:34:26 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 16 Jun 2020 16:34:26 +0100 Subject: [neutron] Failed to create a duplicate DefaultSecurityGroup In-Reply-To: References: <20200616121201.bkpdknqyaitumy4n@skaplons-mac> Message-ID: https://bugs.launchpad.net/neutron/+bug/1883730 On Tue, Jun 16, 2020 at 1:44 PM Neil Jerram wrote: > Thanks Slawek. I'm happy to do that, but I thought I should write here > first in case it is some kind of user error, and not really a bug in the > Neutron code. > > > On Tue, Jun 16, 2020 at 1:12 PM Slawek Kaplonski > wrote: > >> Hi, >> >> Can You report a LP bug for that and attach full stack traces from the >> neutron >> server?
>> >> On Tue, Jun 16, 2020 at 09:58:04AM +0100, Neil Jerram wrote: >> > With Ussuri I'm hitting this in the neutron server: >> > >> > Failed to create a duplicate DefaultSecurityGroup: for attribute(s) >> > ['PRIMARY'] with value(s) 11447be9beda4bf78dab27cdb75058e2 >> > pymysql.err.IntegrityError: (1062, "Duplicate entry >> > '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'") >> > oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, >> > "Duplicate entry '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'") >> > [SQL: INSERT INTO default_security_group (project_id, security_group_id) >> > VALUES (%(project_id)s, %(security_group_id)s)] >> > [parameters: {'project_id': '11447be9beda4bf78dab27cdb75058e2', >> > 'security_group_id': '9f3a473c-b08a-4cf2-8327-10ecc8b87301'}] >> > neutron_lib.objects.exceptions.NeutronDbObjectDuplicateEntry: Failed to >> > create a duplicate DefaultSecurityGroup: for attribute(s) ['PRIMARY'] >> with >> > value(s) 11447be9beda4bf78dab27cdb75058e2 >> > >> > (Those are all, I believe, reports of the same problem, at different >> levels >> > of the stack.) >> > >> > IIUC, this is triggered by my Neutron driver calling >> > >> > rules = self.db.get_security_group_rules( >> > context, filters={'security_group_id': sgids} >> > ) >> > >> > where the context has project_id 11447be9beda4bf78dab27cdb75058e2. Deep >> > down inside that call, Neutron tries to ensure that there is a default >> > security group for that project, and somehow that hits the reported >> > exception. >> > >> > Here's the code in securitygroups_db.py: >> > >> > def _ensure_default_security_group(self, context, tenant_id): >> > """Create a default security group if one doesn't exist. >> > >> > :returns: the default security group id for given tenant. 
>> > """ >> > default_group_id = self._get_default_sg_id(context, tenant_id) >> > if default_group_id: >> > return default_group_id >> > >> > security_group = { >> > 'security_group': >> > {'name': 'default', >> > 'tenant_id': tenant_id, >> > 'description': _('Default security group')} >> > } >> > return self.create_security_group(context, security_group, >> > default_sg=True)['id'] >> > >> > Obviously it checks first if the default SG already exists for the >> project, >> > before creating it if not. So why would that code hit the duplicate >> > exception as shown above? >> > >> > Any ideas welcome! >> > >> > Best wishes, >> > Neil >> >> -- >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Jun 16 19:44:37 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 16 Jun 2020 21:44:37 +0200 Subject: [neutron][neutron-dynamic-routing] Call for maintainers Message-ID: <20200616194437.ejoe73t7ahsbzxhe@skaplons-mac> Hi, During last, virtual PTG we discussed about health of neutron stadium projects (again). And it seems that neutron-dynamic-routing project is slowly going to be in similar state as neutron-fwaas in Ussuri. So there is basically no active maintainers of this project in our community. In Ussuri cycle there was Ryan Tidwell from SUSE who was taking care of this project but AFAIK he is not able to do that anymore in Victoria cycle. So, if You are using this project or are interested in it, please contact me by email or on IRC that You want to take care of it. Usually this don't means like a lot of work every day. But we need someone who we can ask for help e.g. when gate is broken or when there is some new bug reported and there is need to triage it. 
-- Slawek Kaplonski Senior software engineer Red Hat From openstack at nemebean.com Tue Jun 16 20:04:17 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 16 Jun 2020 15:04:17 -0500 Subject: [all] oslotest removing mock as a dependency Message-ID: Hi all, In keeping with the move from the third-party mock library to the built-in unittest.mock module, oslotest is removing mock as a dependency (again). We tried this at the end of the last cycle, but it turned out a number of projects were relying on oslotest to pull in mock for them so in order to not hold up the release we reverted the change. We've un-reverted it and the release request is now merged, so if you were previously relying on oslotest to install mock in your tests you will now need to do one of the following: 1) Migrate to unittest.mock (preferred) or 2) Explicitly depend on mock in your test-requirements.txt. If you have any questions about this, let us know. Thanks. -Ben From allison at openstack.org Tue Jun 16 20:17:30 2020 From: allison at openstack.org (Allison Price) Date: Tue, 16 Jun 2020 15:17:30 -0500 Subject: Openstack user surver - questions In-Reply-To: <20200611150314.gmdqw4saztz6q5ym@skaplons-mac> References: <20200531155958.y3t2en2jydrdx3gx@skaplons-mac> <272339AC-F0D8-4FD7-8D1C-34F166449962@openstack.org> <20200611150314.gmdqw4saztz6q5ym@skaplons-mac> Message-ID: Hi Slawek - sorry for the delay in response. > On Jun 11, 2020, at 10:03 AM, Slawek Kaplonski wrote: > > Hi, > > Thx for the info Allison. > > On Sun, May 31, 2020 at 01:40:16PM -0500, Allison Price wrote: >> Hi Slawek, >> >> Thanks for reaching out with your questions about the user survey! >> >> >>> On May 31, 2020, at 10:59 AM, Slawek Kaplonski wrote: >>> >>> Hi, >>> >>> First of all sorry if I didn't look for it long enough but I have couple of >>> questions about user surver and I couldn't find answer for them anywhere. >>> >>> 1. 
I was looking at [1] for some Neutron related data and I found only one >>> questions about used backends in "Deployment Decisions". Problem is that this >>> graph is for me a bit unreadable due to many things on x-axis which overlaps >>> each other. Is there any place where I can find some "raw data" to check? >> >> The Foundation can pull raw data for you as long as the information remains anonymized and share. Is the used backends question the only one you want data on? Or would you also like data o the percentage of users interested in, testing, and deploying Neutron? This data is also available in the analytics dashboard [1], but can often be hard to read as well. > > That would be great to see such data anonymized data. Info about users who are > testing or interested in some features would be also great. That may give us > some hints about what to focus on during next cycles. Ok - so we are planning to close the 2020 user survey on August 20, and I can circulate the raw, anonymized data at that time. If you would like it sooner (even though it won’t be a complete picture), let me know. > >> >>> >>> 2. Another question about the same chart is: is there any way to maybe change >>> possible replies in the surver for next years? I'm asking about that becasue I >>> have a feeling that e.g. responses "Open vSwitch" and "ML2 - Open vSwitch" may >>> be confusing for users. >>> My understanding is that "Open vSwitch" means simply old "Open vSwitch" core >>> plugin instead of ML2 plugin but this old plugin was removed around "Libery" >>> cycle so I really don't think that still 37% of users are using it. >> >> We try to keep most questions static from survey to survey for comparison reasoning. However, if you think that some responses are confusing and can propose alternative language, we can consider that and make those changes. > > Ok, I understand. Is there any repo or other place when I can check all those > possible answers? 
There isn’t a repo for the User Survey, but if you want, you can go through the existing survey to see all of the active responses. If there are any changes for the 2021 version (that will go live on August 21), please let us know by the end of July. > >> >>> >>> 3. Is there any way to propose new, more detailed questions about e.g Neutron? >>> For example what service plugins they are using. >> >> We have let each PTL add 1-2 optional questions at the end of the survey for respondents who indicated they were working with a particular project. The current Neutron question is: Which of the following features in the Neutron project are you actively using, interested in using or looking forward to using in your OpenStack deployment? > > That Neutron question is exactly what I want but again, where I can find answers > for that question from last survey? From me :) The answers to the 2019 survey are attached. We keep the answers private since it includes so much identifiable information (contact name, organization name, and deployment details) that respondents have requested to remain private. Let me know if there is anything else I can provide. Allison > >> >> The current user survey cycle ends in late August. That is when we will circulate the anonymized results to this question with the openstack-discuss mailing list along with other project-specific questions. At that time, PTLs can let us know if they would like to change their question. >> >> Let me know if you have any other questions - happy to help! I’ll also be around this week during the PTG if you would like me to jump in and clarify anything. >> >> Allison Price >> IRC: aprice >> >>> >>> [1] https://www.openstack.org/analytics >>> >>> -- >>> Slawek Kaplonski >>> Senior software engineer >>> Red Hat >>> >>> >> > > -- > Slawek Kaplonski > Senior software engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenStackNeutron_2019.csv Type: text/csv Size: 22925 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig at stackhpc.com Tue Jun 16 22:29:18 2020 From: stig at stackhpc.com (Stig Telfer) Date: Tue, 16 Jun 2020 23:29:18 +0100 Subject: [scientific-sig] IRC Meeting Wednesday 1100UTC - Supporting COVID workloads on OpenStack Message-ID: <9C397130-2C59-45CD-9182-2442662880BC@telfer.org> Hi all - We have a Scientific SIG meeting coming up on Wednesday at 1100 UTC in IRC channel #openstack-meeting. Everyone is welcome. In this week's agenda we are hoping to gather experiences from people working to support COVID-19 workloads on their OpenStack systems. This could be biochemistry simulations, or supporting contact tracing, or whatever. It would be interesting to hear what people are doing and share experiences if we can. The full agenda is available here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_June_17th_2020 Cheers, Stig From sagarun at gmail.com Tue Jun 16 23:29:17 2020 From: sagarun at gmail.com (Arun SAG) Date: Tue, 16 Jun 2020 16:29:17 -0700 Subject: [ironic] advanced partitioning discussion In-Reply-To: References: Message-ID: On Mon, Jun 15, 2020 at 12:55 AM Dmitry Tantsur wrote: > > My apologies, of course I meant JUNE 17th, not July, i.e. this Wednesday. 
> Added an etherpad for meeting notes https://etherpad.opendev.org/p/ironic-disk-partitioning-2020 -- Arun S A G http://zer0c00l.in/ From yamamoto at midokura.com Wed Jun 17 02:38:40 2020 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Wed, 17 Jun 2020 11:38:40 +0900 Subject: [neutron] Tap-as-a-service releases on pypi behind In-Reply-To: References: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> Message-ID: hi, On Tue, Jun 16, 2020 at 4:05 PM Lajos Katona wrote: > > Hi Takashi, > > Thanks, my pypi account name is: lajoskatona > https://pypi.org/user/lajoskatona/ thank you. added as an owner. > > Regards > Lajos > > Takashi Yamamoto ezt írta (időpont: 2020. jún. 16., K, 7:09): >> >> i can add you as a maintainer. >> please tell me your pypi account. >> >> On Tue, Jun 16, 2020 at 2:00 PM Lajos Katona wrote: >> > >> > Hi Sam, >> > It seems that as taas is not under openstack governance Yamamo is the only maintainer on pypi, so he has only right to upload new release. >> > >> > Regards >> > Lajos >> > >> > Sam Morrison ezt írta (időpont: 2020. jún. 16., K, 3:25): >> >> >> >> It looks like tap-as-a-service project isn’t getting it’s latest releases pushed to pypi and this is breaking networking-midonet in train release. >> >> >> >> Source [1] has tags for releases 4.0.0, 5.0.0 and 6.0.0 but these aren’t in pypi [2] >> >> >> >> I need a fix [3] to upper-constraints to get this to work but just realised pypi is behind >> >> >> >> Can someone help me please. 
>> >> >> >> >> >> Thanks, >> Sam >> >> >> >> [1] https://opendev.org/x/tap-as-a-service/ >> >> [2] https://pypi.org/project/tap-as-a-service/ >> >> [3] https://review.opendev.org/#/c/735754/ >> >> From yumeng_bao at yahoo.com Wed Jun 17 03:18:20 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Wed, 17 Jun 2020 11:18:20 +0800 Subject: [cyborg]Cyborg Victoria Release Schedule and storyboard update for Contributors References: <4C8961B0-E29B-461D-86BC-A1EEA34C72AC.ref@yahoo.com> Message-ID: <4C8961B0-E29B-461D-86BC-A1EEA34C72AC@yahoo.com> Hi team and interested contributors! A detailed release schedule[1] is proposed according to all the goals[2] we've confirmed at Virtual PTG. These two materials will form the basis for our development throughout this release cycle. Goals tagged [open-to-all] are open for anyone to take; if you are interested, please feel free to ask questions directly here or ping Yumeng at IRC Channel #openstack-cyborg ! Another thing is about StoryBoard. After Rocky, Cyborg migrated from cyborg-launchpad to cyborg-storyboard to track bugs and features, but we were not using it very well. From Victoria, we will use it, use it well, and use it often. So please take some time to get familiar with the Cyborg-specific StoryBoard usage guide[3]. [1]https://wiki.openstack.org/wiki/Cyborg/Victoria_Release_Schedule#Blueprints_with_milestone [2]https://etherpad.opendev.org/p/cyborg-victoria-goals [3]https://wiki.openstack.org/wiki/Cyborg/CyborgStoryboard Regards, Yumeng From zhengyupann at 163.com Wed Jun 17 03:54:42 2020 From: zhengyupann at 163.com (Zhengyu Pan) Date: Wed, 17 Jun 2020 11:54:42 +0800 (CST) Subject: [neutron] neutron router gateway set will waste a public ip. How to avoid it? Message-ID: <41f5bcb9.2f61.172c06a2bb1.Coremail.zhengyupann@163.com> Hi, when I set a gateway for a router (neutron router-gateway-set router external-network), it allocates an IP address for the qg interface. For me, this IP address is useless.
I use floatingip to connect external network. Is there a way to avoid allocating this gateway IP for router qg interface? -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Wed Jun 17 04:42:34 2020 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 17 Jun 2020 06:42:34 +0200 Subject: [neutron] Tap-as-a-service releases on pypi behind In-Reply-To: References: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> Message-ID: Hi, Thanks Yamamoto. @Sam: I uploaded the releases to pypi, could you please check it Regards Lajos Katona (lajoskatona) Takashi Yamamoto ezt írta (időpont: 2020. jún. 17., Sze, 4:38): > hi, > > On Tue, Jun 16, 2020 at 4:05 PM Lajos Katona wrote: > > > > Hi Takashi, > > > > Thanks, my pypi account name is: lajoskatona > > https://pypi.org/user/lajoskatona/ > > thank you. added as an owner. > > > > > Regards > > Lajos > > > > Takashi Yamamoto ezt írta (időpont: 2020. jún. > 16., K, 7:09): > >> > >> i can add you as a maintainer. > >> please tell me your pypi account. > >> > >> On Tue, Jun 16, 2020 at 2:00 PM Lajos Katona > wrote: > >> > > >> > Hi Sam, > >> > It seems that as taas is not under openstack governance Yamamo is the > only maintainer on pypi, so he has only right to upload new release. > >> > > >> > Regards > >> > Lajos > >> > > >> > Sam Morrison ezt írta (időpont: 2020. jún. 16., > K, 3:25): > >> >> > >> >> It looks like tap-as-a-service project isn’t getting it’s latest > releases pushed to pypi and this is breaking networking-midonet in train > release. > >> >> > >> >> Source [1] has tags for releases 4.0.0, 5.0.0 and 6.0.0 but these > aren’t in pypi [2] > >> >> > >> >> I need a fix [3] to upper-constraints to get this to work but just > realised pypi is behind > >> >> > >> >> Can someone help me please. 
> >> >> > >> >> > >> >> Thanks, > >> >> Sam > >> >> > >> >> > >> >> [1] https://opendev.org/x/tap-as-a-service/ > >> >> [2] https://pypi.org/project/tap-as-a-service/ > >> >> [3] https://review.opendev.org/#/c/735754/ > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Jun 17 08:12:35 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 17 Jun 2020 10:12:35 +0200 Subject: [neutron] Failed to create a duplicate DefaultSecurityGroup In-Reply-To: References: <20200616121201.bkpdknqyaitumy4n@skaplons-mac> Message-ID: <20200617081235.wt5odqave3727wcu@skaplons-mac> Hi, Thx. I replied in LP. I think I know more or less how to fix that and I will propose patch this week. On Tue, Jun 16, 2020 at 04:34:26PM +0100, Neil Jerram wrote: > https://bugs.launchpad.net/neutron/+bug/1883730 > > On Tue, Jun 16, 2020 at 1:44 PM Neil Jerram wrote: > > > Thanks Slawek. I'm happy to do that, but I thought I should write here > > first in case it is some kind of user error, and not really a bug in the > > Neutron code. > > > > > > On Tue, Jun 16, 2020 at 1:12 PM Slawek Kaplonski > > wrote: > > > >> Hi, > >> > >> Can You report a LP bug for that and attach full stack traces from the > >> neutron > >> server? 
> >> > >> On Tue, Jun 16, 2020 at 09:58:04AM +0100, Neil Jerram wrote: > >> > With Ussuri I'm hitting this in the neutron server: > >> > > >> > Failed to create a duplicate DefaultSecurityGroup: for attribute(s) > >> > ['PRIMARY'] with value(s) 11447be9beda4bf78dab27cdb75058e2 > >> > pymysql.err.IntegrityError: (1062, "Duplicate entry > >> > '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'") > >> > oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, > >> > "Duplicate entry '11447be9beda4bf78dab27cdb75058e2' for key 'PRIMARY'") > >> > [SQL: INSERT INTO default_security_group (project_id, security_group_id) > >> > VALUES (%(project_id)s, %(security_group_id)s)] > >> > [parameters: {'project_id': '11447be9beda4bf78dab27cdb75058e2', > >> > 'security_group_id': '9f3a473c-b08a-4cf2-8327-10ecc8b87301'}] > >> > neutron_lib.objects.exceptions.NeutronDbObjectDuplicateEntry: Failed to > >> > create a duplicate DefaultSecurityGroup: for attribute(s) ['PRIMARY'] > >> with > >> > value(s) 11447be9beda4bf78dab27cdb75058e2 > >> > > >> > (Those are all, I believe, reports of the same problem, at different > >> levels > >> > of the stack.) > >> > > >> > IIUC, this is triggered by my Neutron driver calling > >> > > >> > rules = self.db.get_security_group_rules( > >> > context, filters={'security_group_id': sgids} > >> > ) > >> > > >> > where the context has project_id 11447be9beda4bf78dab27cdb75058e2. Deep > >> > down inside that call, Neutron tries to ensure that there is a default > >> > security group for that project, and somehow that hits the reported > >> > exception. > >> > > >> > Here's the code in securitygroups_db.py: > >> > > >> > def _ensure_default_security_group(self, context, tenant_id): > >> > """Create a default security group if one doesn't exist. > >> > > >> > :returns: the default security group id for given tenant. 
> >> > """ > >> > default_group_id = self._get_default_sg_id(context, tenant_id) > >> > if default_group_id: > >> > return default_group_id > >> > > >> > security_group = { > >> > 'security_group': > >> > {'name': 'default', > >> > 'tenant_id': tenant_id, > >> > 'description': _('Default security group')} > >> > } > >> > return self.create_security_group(context, security_group, > >> > default_sg=True)['id'] > >> > > >> > Obviously it checks first if the default SG already exists for the > >> project, > >> > before creating it if not. So why would that code hit the duplicate > >> > exception as shown above? > >> > > >> > Any ideas welcome! > >> > > >> > Best wishes, > >> > Neil > >> > >> -- > >> Slawek Kaplonski > >> Senior software engineer > >> Red Hat > >> > >> -- Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Wed Jun 17 08:17:31 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 17 Jun 2020 10:17:31 +0200 Subject: Openstack user surver - questions In-Reply-To: References: <20200531155958.y3t2en2jydrdx3gx@skaplons-mac> <272339AC-F0D8-4FD7-8D1C-34F166449962@openstack.org> <20200611150314.gmdqw4saztz6q5ym@skaplons-mac> Message-ID: <20200617081731.jc7sbdzcaa6l7vgr@skaplons-mac> Hi, On Tue, Jun 16, 2020 at 03:17:30PM -0500, Allison Price wrote: > Hi Slawek - sorry for the delay in response. No problem at all. It's not urgent ;) > > > > On Jun 11, 2020, at 10:03 AM, Slawek Kaplonski wrote: > > > > Hi, > > > > Thx for the info Allison. > > > > On Sun, May 31, 2020 at 01:40:16PM -0500, Allison Price wrote: > >> Hi Slawek, > >> > >> Thanks for reaching out with your questions about the user survey! > >> > >> > >>> On May 31, 2020, at 10:59 AM, Slawek Kaplonski wrote: > >>> > >>> Hi, > >>> > >>> First of all sorry if I didn't look for it long enough but I have couple of > >>> questions about user surver and I couldn't find answer for them anywhere. > >>> > >>> 1. 
I was looking at [1] for some Neutron related data and I found only one > >>> questions about used backends in "Deployment Decisions". Problem is that this > >>> graph is for me a bit unreadable due to many things on x-axis which overlaps > >>> each other. Is there any place where I can find some "raw data" to check? > >> > >> The Foundation can pull raw data for you as long as the information remains anonymized and share. Is the used backends question the only one you want data on? Or would you also like data o the percentage of users interested in, testing, and deploying Neutron? This data is also available in the analytics dashboard [1], but can often be hard to read as well. > > > > That would be great to see such data anonymized data. Info about users who are > > testing or interested in some features would be also great. That may give us > > some hints about what to focus on during next cycles. > > Ok - so we are planning to close the 2020 user survey on August 20, and I can circulate the raw, anonymized data at that time. If you would like it sooner (even though it won’t be a complete picture), let me know. That will be fine. Thx a lot. > > > > >> > >>> > >>> 2. Another question about the same chart is: is there any way to maybe change > >>> possible replies in the surver for next years? I'm asking about that becasue I > >>> have a feeling that e.g. responses "Open vSwitch" and "ML2 - Open vSwitch" may > >>> be confusing for users. > >>> My understanding is that "Open vSwitch" means simply old "Open vSwitch" core > >>> plugin instead of ML2 plugin but this old plugin was removed around "Libery" > >>> cycle so I really don't think that still 37% of users are using it. > >> > >> We try to keep most questions static from survey to survey for comparison reasoning. However, if you think that some responses are confusing and can propose alternative language, we can consider that and make those changes. > > > > Ok, I understand. 
Is there any repo or other place when I can check all those > > possible answers? > > There isn’t a repo for the User Survey, but if you want, you can go through the existing survey to see all of the active responses. If there are any changes for the 2021 version (that will go live on August 21), please let us know by the end of July. Ok. I will check them and I will let You know for sure. > > > > >> > >>> > >>> 3. Is there any way to propose new, more detailed questions about e.g Neutron? > >>> For example what service plugins they are using. > >> > >> We have let each PTL add 1-2 optional questions at the end of the survey for respondents who indicated they were working with a particular project. The current Neutron question is: Which of the following features in the Neutron project are you actively using, interested in using or looking forward to using in your OpenStack deployment? > > > > That Neutron question is exactly what I want but again, where I can find answers > > for that question from last survey? > > From me :) The answers to the 2019 survey are attached. We keep the answers private since it includes so much identifiable information (contact name, organization name, and deployment details) that respondents have requested to remain private. Thx a lot for that. > > > > Let me know if there is anything else I can provide. > > Allison > > > > > >> > >> The current user survey cycle ends in late August. That is when we will circulate the anonymized results to this question with the openstack-discuss mailing list along with other project-specific questions. At that time, PTLs can let us know if they would like to change their question. > >> > >> Let me know if you have any other questions - happy to help! I’ll also be around this week during the PTG if you would like me to jump in and clarify anything. 
> >> > >> Allison Price > >> IRC: aprice > >> > >>> > >>> [1] https://www.openstack.org/analytics > >>> > >>> -- > >>> Slawek Kaplonski > >>> Senior software engineer > >>> Red Hat > >>> > >>> > >> > > > > -- > > Slawek Kaplonski > > Senior software engineer > > Red Hat > -- Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Wed Jun 17 08:19:17 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 17 Jun 2020 10:19:17 +0200 Subject: [neutron] neutron router gateway set will waste a public ip. How to avoid it? In-Reply-To: <41f5bcb9.2f61.172c06a2bb1.Coremail.zhengyupann@163.com> References: <41f5bcb9.2f61.172c06a2bb1.Coremail.zhengyupann@163.com> Message-ID: <20200617081917.tf22fldsl5ynm4xe@skaplons-mac> Hi, There is no such possibility AFAIK. Maybe some plugins different than ML2 will allow to do that somehow. On Wed, Jun 17, 2020 at 11:54:42AM +0800, Zhengyu Pan wrote: > Hi, > when i set a gateway for router,(neutron router-gateway-set router external-network), it will allocate a IP address for qg interface. For me, this IP address is useless. I use floatingip to connect external network. Is there a way to avoid allocating this gateway IP for router qg interface? > > > > > > > > > > > -- -- Slawek Kaplonski Senior software engineer Red Hat From soumplis at admin.grnet.gr Wed Jun 17 09:31:40 2020 From: soumplis at admin.grnet.gr (Alexandros Soumplis) Date: Wed, 17 Jun 2020 12:31:40 +0300 Subject: [neutron] neutron router gateway set will waste a public ip. How to avoid it? In-Reply-To: <41f5bcb9.2f61.172c06a2bb1.Coremail.zhengyupann@163.com> References: <41f5bcb9.2f61.172c06a2bb1.Coremail.zhengyupann@163.com> Message-ID: <891b41e4-ed0d-bd56-2186-372a397836e0@admin.grnet.gr> Maybe this would be possible with service networks and have the peer to peer connection for the qrouter on a private /30 network and define appropriate static routes to the qrouter. 
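One concrete way to realize this with stock Neutron is a service subnet: dedicate a small subnet of the external network to router gateway ports, so gateways stop drawing from the range you want to keep for floating IPs. A hedged CLI sketch (names invented; assumes the subnet-service-types extension is enabled in your deployment):

```shell
# Hypothetical names; assumes the subnet-service-types extension is enabled.
# Router gateway ports (device_owner network:router_gateway) will draw from
# the small 192.0.2.0/30 subnet, leaving the main subnet free for floating IPs.
openstack subnet create gw-subnet \
    --network ext-net \
    --subnet-range 192.0.2.0/30 \
    --no-dhcp \
    --service-type network:router_gateway
openstack router set my-router --external-gateway ext-net
```

This does not remove the qg address entirely, but it stops it competing with floating IP allocations.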
On 17/6/20 6:54 a.m., Zhengyu Pan wrote: > Hi, > when I set a gateway for a router (neutron router-gateway-set router > external-network), it allocates an IP address for the qg interface. For > me, this IP address is useless. I use a floating IP to reach the external > network. Is there a way to avoid allocating this gateway IP for the router > qg interface? > > > > > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3620 bytes Desc: S/MIME Cryptographic Signature URL: From smooney at redhat.com Wed Jun 17 12:15:07 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 17 Jun 2020 13:15:07 +0100 Subject: [cyborg][neutron][nova] Networking support in Cyborg In-Reply-To: References: <4f31c35b3900ddae7e90c53cd4411dc8c4e5a55e.camel@redhat.com> Message-ID: <8b796891937158c07056475cad84434b69ced746.camel@redhat.com> On Fri, 2020-06-12 at 10:23 +0000, Wang, Xin-ran wrote: > Hi all, > > I prefer that physnet-related stuff stays managed by Neutron, because it is a Neutron notion. If we let Cyborg > update these traits in Placement, what will we do if Neutron has its bandwidth feature enabled, and how will we know whether this > feature is enabled or not? > > Can we just let Neutron always report physnet traits? I am not very familiar with Neutron; is there any gap? > > Otherwise, if Cyborg does need to report this to Placement, my proposal is: > > Neutron will provide an interface which allows Cyborg to get the physnet trait/RP; if this feature is not configured, it > will return 404, so Cyborg will know that Neutron has not configured the bandwidth feature, and Cyborg can report everything by > itself. If Neutron returns something meaningful, Cyborg should use the same RP and update other traits on this RP. neutron can't provide physnet traits as it does not necessarily know the interface on the host exists.
physnets are specified by the operator in the config file for each network backend. typically this is done via the bridge_mappings https://github.com/openstack/neutron/blob/master/neutron/conf/plugins/ml2/drivers/ovs_conf.py#L51-L65 although the sriov nic agent uses the physical_device_mappings config option instead. https://github.com/openstack/neutron/blob/master/neutron/conf/plugins/ml2/drivers/mech_sriov/agent_common.py#L25-L34 this info is reported to the neutron server via the agent report in some cases, but in general it is not available at the api level. unless the operator adds all of the cyborg fpgas to the physical_device_mappings of the sriov nic agent on the host by their netdev names, neutron will not know what physnet any of the devices are connected to. realistically the cleanest way to manage this without depending on in-progress features is to have a similar config option in cyborg that is used by the driver to declare the physnet the smartnic is attached to. if we had a naming convention for the RPs such as <hostname>_<pci-address>, e.g. host-1_0000:00:1f.6, then we could perhaps use the new provider.yaml feature to do this instead. that would also allow neutron, nova and cyborg to agree on the RP name. that does have some problems however, as really we would want cyborg devices to be reported under nova-created NUMA resource providers, whereas neutron would want them to be reported under the agent resource provider, and cyborg might want a different topology. so really we need a way for the same RP to exist in multiple locations in the tree, e.g. some form of alias or symlink capability, so that each service can look at its own view but there is only one instance of the resources. since we dont have that, i think for step one we have to take a different approach. if we assume that cyborg will be the only thing that creates the RPs for smartnics, and we agree on a naming scheme, then we can use the new provider.yaml we are adding to nova to add the physnet traits.
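a rough sketch of what such a provider config could look like (hedged: the nova provider.yaml feature was still in progress at this point, so the exact schema may differ, and the RP name and trait below are invented for illustration):

```yaml
# Hypothetical provider config placed where nova's provider_config_location
# points. RP name follows the <hostname>_<pci-address> convention discussed
# above; the physnet trait value is illustrative only.
meta:
  schema_version: '1.0'
providers:
  - identification:
      name: 'host-1_0000:00:1f.6'
    traits:
      additional:
        - 'CUSTOM_PHYSNET_PHYSNET0'
```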
The provider.yaml could also be used to create the bandwidth RPs that would normally be created by the sriov nic agent. provided the sriov nic agent could configure the VF based solely on the pci_slot in the port binding profile and other info in the port, without needing the device to be listed in its own config file, you could support bandwidth-based scheduling too without modifying placement or nova. at some point we will want to move the RPs created by cyborg under the numa nodes created by nova, but since that is a work in progress we can cross that bridge at a later date. it would just require a reshape in the cyborg code to move the RPs/allocations and an update to how cyborg builds its provider tree. that is out of scope for now. > > In this way, Cyborg and Neutron will use the same RP and keep the consistency. > > Thanks, > Xin-Ran > > -----Original Message----- > From: Sean Mooney > Sent: Thursday, June 11, 2020 9:44 PM > To: Nadathur, Sundar ; openstack-discuss > Subject: Re: [cyborg][neutron][nova] Networking support in Cyborg > > On Thu, 2020-06-11 at 12:24 +0000, Nadathur, Sundar wrote: > > Hi Sean, > > > > > From: Sean Mooney > > > Sent: Thursday, June 11, 2020 4:31 AM > > > > > > > On Thu, 2020-06-11 at 11:04 +0000, Nadathur, Sundar wrote: > > > > [...] > > > > * Ideally, the admin should be able to formulate the device > > > > profile in the same way, independent of whether it is a > > > > single-component or multi-component device. For that, the device > > > > profile must have a single resource group that includes the > > > > resource, traits and Cyborg > > > > > > properties for both the accelerator and NIC. The device profile for > > > a Neutron port will presumably have only one request group.
So, the > > > device profile would look something like this: > > > > { "name": "my-smartnic-dp", > > > > "groups": [{ > > > > "resources:FPGA": "1", > > > > "resources:CUSTOM_NIC_X": "1", > > > > "trait:CUSTOM_FPGA_REGION_ID_FOO": "required", > > > > "trait:CUSTOM_NIC_TRAIT_BAR": "required", > > > > "trait:CUSTOM_PHYSNET_VLAN3": "required", > > > > "accel:bitstream_id": "3AFE" > > > > }] > > > > } > > > > > > having "trait:CUSTOM_PHYSNET_VLAN3": "required", in the device > > > profile means you have to create a separate device profile with the > > > same details for each physnet, and the user then needs to find the > > > profile that matches their neutron network's physnet, which is also > > > problematic if they use the multiprovidernet extension. > > > so we should keep the physnet separate and have nova or neutron > > > append it when we make the placement query. > > > > True, we did discuss this at the PTG, and I agree. The physnet can be > > passed in from the command line during port creation.
> > > > > > well this feature is more or less the opisite of that intent but i > > > get that you dont want cyborg to have to confiure the networking atribute of the interface. > > > > The admin could apply the trait to the right RP. Or, the OpenStack > > installer could automate this. That's similar in spirit to having the admin configure the physnet in PCI whitelist. > > yes they could its not a partially good user experience as it quite tedious to do but yes it a viable option and > likely sufficnet for the initial work. installer could automate it but having to do it manually would not be ideal. > > > > Regards, > > Sundar > > From gouthampravi at gmail.com Wed Jun 17 05:29:18 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 16 Jun 2020 22:29:18 -0700 Subject: [manila] Summary of the Victoria Cycle Project Technical Gathering Message-ID: Hello Zorillas and other friendly animals of the OpenStack universe, The manila project community met virtually between 1st June and 5th June and discussed project plans for the Victoria cycle. The detailed meeting notes are in [1] and the recordings were published [2]. A short summary of the discussions and the action items is below: *== Ussuri Retrospective ==* - We lauded the work that Vida Haririan and Jason Grosso have put into making bug tracking and triaging a whole lot easier, and systematic. At the beginning of the cycle, we had roughly 250 bugs, and this was brought down to under 130 by the end of the cycle. As a community, we acted upon many multi-release bugs and made backports as appropriate. We've now automated the expiry of invalid and incomplete bugs thereby reducing noise. Vida's our current Bug Czar, and is interested in mentoring anyone that wants to contribute to this role. Please reach out to her if you're interested. 
- We also had two successful Outreachy internships (Soledad Kuczala/solkz, Maari Tamm/maaritamm) thanks to Outreachy, their sponsors, mentors (Sofia Enriquez/enriquetaso, Victoria Martinez de la Cruz/vkmc) and the OpenStack Foundation; and a successful Google Summer of Code internship (Robert Vasek/gman0) - many thanks to the mentor (Tomas Smetana/tsmetana), Google, Red Hat and other sponsoring organizations. The team learned a lot, and vkmc encouraged all of us to consider submitting a mentorship application for upcoming cycles and increase our involvement. Through the interns' collective efforts in Train and Ussuri development cycles: - manila CSI driver was built [3] - manilaclient now provides a plugin to the OpenStackClient - manila-ui has support for newer microversions of the manila API and, - manila documentation has gotten a whole lot better! - We made good core team improvements and want to continue to mentor new contributors to become maintainers, and folks felt their PTL was doing a good job (:D) - The community loved the idea of PTL docs (thanks Kendall Nelson/diablo_rojo) - a lot of tribal knowledge was documented for the first time! - We felt that "low-hanging-fruit" bugs [4] were lingering too long in some cases, and must have a "resolve-by" date. These are farmed for new contributors, and if they turn out to be annoying issues, the team may set a resolve-by date and close them out. However, we'll continue to make a concerted effort to triage "bugs that are tolerable" with nice-to-have fixes and keep them handy for anyone looking to make an initial contribution. *== Optimize query speed for share snapshots ==* Haixin (haixin) discovered that not all APIs are taking advantage of filtering and pagination via sqlalchemy. There's a list of APIs that he's compiled and would like to work on them; the team agreed that this is a valuable bug fix; and can be made available to Ussuri when the fixes land in this cycle. 
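The gist of haixin's fix, in miniature: push filtering and pagination into the database instead of fetching every row and slicing in Python. A toy sketch using stdlib sqlite (not manila code; the table and names are invented for illustration):

```python
import sqlite3

# Minimal illustration (not manila code): the WHERE/LIMIT/OFFSET clauses let
# the database return only the requested page, instead of the API service
# loading all rows and paginating in Python.
def list_snapshots(conn, share_id, limit, offset):
    cur = conn.execute(
        "SELECT id, name FROM snapshots"
        " WHERE share_id = ? ORDER BY id LIMIT ? OFFSET ?",
        (share_id, limit, offset),
    )
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE snapshots (id INTEGER PRIMARY KEY, share_id TEXT, name TEXT)"
)
conn.executemany(
    "INSERT INTO snapshots (share_id, name) VALUES (?, ?)",
    [("share-a", f"snap-{i}") for i in range(10)] + [("share-b", "other")],
)
page = list_snapshots(conn, "share-a", limit=3, offset=3)
print(page)  # -> [(4, 'snap-3'), (5, 'snap-4'), (6, 'snap-5')]
```

SQLAlchemy exposes the same idea through `query.filter(...).limit(...).offset(...)`, which is what the affected manila APIs would use.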
*== TC Goals for Victoria cycle ==* - We discussed a long list of items that were proposed for inclusion as TC goals [5]. The TC has officially picked two of them for this cycle [6]. - For gating manila project repos, we make heavy use of "legacy" DSVM jobs. We hadn't invested time and effort in converting these jobs in the past cycles; however, we have a plan [7] and have started porting jobs to "native" zuulv3 format already in the manila-tempest-plugin repository. Once these jobs are complete there, we'll switch over to using them on the main branch of manila. Older branches will get opportunistic updates beyond milestone-2. - Luigi Toscano (tosky) joined us for this discussion and asked us for the status of third party CI systems. The team hasn't mandated that third party CI systems move their testing to zuulv3-native in this cycle. However, the OpenStack community may drop support for devstack-gate in the Victoria release, and making things work with it will get harder - so it's strongly encouraged that third party vendor systems that are using the community testing infrastructure projects: zuul, nodepool and devstack-gate move away from devstack-gate in this cycle. An option to adopt Zuulv3 in third party CI systems could be via the Software Factory project [8]. The RDO community runs some third party jobs and votes on OpenStack upstream projects - so they've created a wealth of jobs and documentation that can be of help. Maintainers of this project hang out in #softwarefactory on FreeNode. - All of the new zuulv3 style tempest jobs inherit from devstack-tempest from the tempest repository, and changing the node-set in the parent would affect all our jobs as well - this would make the transition to Ubuntu 20.04 LTS/Focal Fossa easier. 
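For a flavor of what the ported jobs look like, a minimal zuulv3-native definition inheriting from devstack-tempest might be (hypothetical job name and variable values, not one of the real manila-tempest-plugin jobs):

```yaml
# Illustrative only -- not one of the actual manila-tempest-plugin jobs.
- job:
    name: manila-tempest-plugin-example
    parent: devstack-tempest
    required-projects:
      - openstack/manila
      - openstack/manila-tempest-plugin
      - openstack/python-manilaclient
    vars:
      tempest_test_regex: manila_tempest_tests
      tempest_plugins:
        - manila-tempest-plugin
      devstack_plugins:
        manila: https://opendev.org/openstack/manila
```

Because the parent is devstack-tempest, a node-set change in that one parent job (e.g. to Ubuntu Focal) propagates to every child automatically.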
*== Secure default policies and granular policies ==* - Raildo Mascena (raildo) joined us and presented an overview this cross-community effort [9] - Manila has many assumptions of what project roles should be - and over time, we seem to have blended the idea of a deployer administrator and a project administrator - so there are inconsistencies when, even to perform project level administration, one needs excessive permissions across the cloud. This is undesirable - so, a precursor to supporting the new scoped policies from Keystone seems to be to: - eliminate hard coded checks in the code requiring an "admin" role and switch to performing policy checks - eliminating empty defaults which allow anyone to execute an API - manila has very few of these - supporting a "reader" role with the APIs - We can then re-calibrate the defaults to ensure a separation between cross-tenant administration (system scope) and per-tenant administration - following the work in oslo.policy and in keystone - gouthamr will be leading this in the Victoria cycle - other contributors are welcome to join this effort! *== Oslo.privsep and other manila TODOs ==* - We discussed another cross-community effort around transitioning all sudo actions from rootwrap to privsep - Currently no one in the manila team has the bandwidth to investigate and commit to this effort, so we're happy to ask for help! - If you are interested, please join us during one of the team meetings or start submitting patches and we can discuss with you via code reviews. - The team also compiled a list of backlog items in an etherpad [10]. 
These are great areas for new project contributors to help manila, so please get in touch with us if you would like to work on any of these items *== OSC Status/Completion ==* - Victoria Martinez de la Cruz and Maari Tamm compiled the status for the completion of the OSC plugin work in manilaclient [11] - There's still a lot of ground to cover to get complete parity with the manila command line client, and we need contributors - Maari Tamm (maaritamm) will continue to work on this as time permits. Spyros Trigazis (strigazi) and his team at CERN are interested to work on this as well. Thank you, Maari and Spyros! - On Friday, we were joined by Artem Goncharov (gtema) to discuss the issue of "common commands" - quotas, services, availability zones, limits are common concepts that apply to other projects as well - OSC has support to show you these resources for compute, volume and networking - gtema suggested we should approach this via the OpenStackSDK rather than the plugin since plugins are slow as is, and adding anything more to that interface is not desirable at the moment - There's planned work in the OpenStackClient project to work on the plugin loading mechanisms to make things faster *== Graduation of Experimental Features ==* - Last cycle Carlos Eduardo (carloss) committed the work to graduate Share Groups APIs from their "Experimental API" status - We have two more features behind experimental APIs: share replication and share migration - This cycle, carloss will work on graduating the share replication APIs to fully supported - Generic Share Migration still needs some work, but we've fleshed out the API and it has stayed pretty constant in the past few releases - we might consider graduating the API for share migration in the Wallaby release. 
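For context on what "experimental" means at the API level: manila hides these endpoints behind an opt-in request header, in addition to the usual microversion negotiation. A rough client-side sketch (the two header names are real; the helper function and microversion value are illustrative):

```python
# Sketch of how a client opts in to manila's experimental APIs (e.g. share
# replication or share migration before graduation). Header names are real;
# the default microversion here is only an example.
def manila_headers(token, microversion="2.55", experimental=False):
    headers = {
        "X-Auth-Token": token,
        "X-OpenStack-Manila-API-Version": microversion,
    }
    if experimental:
        # Experimental endpoints reject requests that lack this header.
        headers["X-OpenStack-Manila-API-Experimental"] = "True"
    return headers

print(manila_headers("t0ken", experimental=True))
```

Graduating a feature removes the need for the experimental header, which is why it is an API-breaking, microversion-bumping event.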
*== CephFS Updates ==* - Victoria (vkmc) took us through the updates planned for the Victoria cycle (heh) - Currently all dsvm/tempest based testing in OpenStack (cinder, nova, manila) is happening on ceph luminous and older releases (hammer and jewel) - Victoria has a patch up [12] to update the ceph cluster to using Nautilus by default - This patch moves to using the packages built by the ceph community via their shaman build system [13] - shaman does not support building nautilus on CentOS 8, or on Ubuntu Xenial - so if older branches of the projects are tested with Ubuntu Xenial, we'll fall back to testing with Luminous - The Manila CephFS driver wants to take advantage of the "ceph-mgr" daemon in the Nautilus release and beyond - Maintaining support for "ceph-mgr" and "ceph-volume" clients in the driver will make things messy - so, the manila driver will not support Ceph versions prior to Nautilus in the Victoria cycle - If you're using manila with cephfs, please upgrade your ceph clusters to Nautilus or newer - We're not opposed to supporting versions prior to nautilus, but the community members cannot invest in maintaining support for these older releases of ceph for future releases of manila - With the ceph-mgr interface, we intend to support asynchronous create-share-from-snapshot with manila - Ramana Raja (rraja) provided us an update regarding the ceph-mgr and upcoming support for nfs-ganesha interactions via that interface (ceph pacific release) - Currently there's a ganesha interface driver in manila, and that can switch to using the ceph-mgr interface - Manila provides an "ensure_shares" mechanism to migrate share export locations when the NAS host changes - We'll need to work on that if we want to make it easier to switch ganesha hosts. 
- We also briefly discussed supporting manage and unmanage operations with the ceph drivers - that should greatly assist day 2 operations, and migration of shared file systems from the native cephfs protocol to nfs and vice-versa. *== Add/Delete/Update security services for in-use share networks ==* - Douglas Viroel (dviroel) discussed a change to manila to support share server modifications wrt security services - Security services are project visible and tenant driven - however, share servers are hidden away from project users by virtue of default policy - dviroel's idea is that, if a share network has multiple share servers, the share manager will enumerate and communicate with all share servers on the share network to update a security service - We need to make sure that all conflicting operations (such as creating new shares, changing access rules on existing shares) are fenced off when a share server security service is being updated - dviroel has a spec that he's working on - and would like feedback on his proposal [14] *== Create shares with two (or more) subnets ==* - dviroel proposed a design allowing a share network having multiple subnets in a given AZ (currently you can have at most one subnet in an AZ for a given share network) - Allowing multiple NICs on a share server may be something most drivers can easily support - This change is analogous to the one to update security services on existing share networks - in terms of user experience and expectations - The use cases here include dual IP support, share server network maintenance and migration, simultaneous access from disparate subnets - Feedback from this discussion was to treat this as two separate concerns for easier implementation - Supporting multiple subnets per AZ per share network - Supporting adding/removing subnets to/from a share network that is in-use - Currently, there's no way to modify an in-use share server - so adding that would be a precursor to allowing modification of share
networks/subnets and security services *== Technical Committee Tags ==* - In the last cycle, the manila team worked with OpenStack VMT to perform a vulnerability disclosure and coordinate a fix across distributions that included manila. - The experience was valuable in gaining control of our own "coresec" team that had gone wayward on launchpad; and learning about VMT - Jeremy Stanley (fungi) and other members of the VMT have been supportive of having manila apply to the "vulnerability-managed" tag. We'll follow up on this soon - While we're on the subject, with Ghanshyam Mann (gmann) in the room, we discussed other potential tags that we can assert as the project team: - "supports-accessible-upgrade" - manila allows control plane upgrade without disrupting the accessibility of shares, snapshots, ACLs, groups, replicas, share servers, networks and all other resources [15] - "supports-api-interoperability" - manila's API is microversioned and we have hard tests that enforce backwards compatibility [16] - We discussed "tc:approved-release" tag a bit, and felt that we needed to bring it up in a TC meeting, and we did that with Ghanshyam's help - The view from the manila team is that we'd like to eliminate any misconception that the project is not mature, or ready for production use or that it isn't a part of a "core OpenStack" - At the TC meeting, Thierry Carrez (ttx), Graham Hayes (mugsie) and the others provided historic context for this tag: the tag was for a section from the OpenStack foundation bylaws that states that the Technical Committee must define what an approved release is (Article 4, section 4.1 (b) i) [17] - The TC's view was that this tag has outlived its purpose and core-vs-non-core discussions have happened a lot of times. Dropping this tag might require speaking with the Foundation and amending the bylaws and exploring what this implies. It's a good effort to get started on though. 
- For the moment, the TC was not opposed to the manila team requesting this change to include manila in the list of projects in "tc-approved-release". *== Share and share size quotas/limits per share server ==* - carloss shared his design for enforcing share server limits via the quotas system: administrators could define per-project share server quotas, and the share manager would enforce them by provisioning new servers when the quotas are hit - the quotas system is ill-suited for this sort of enforcement, especially given that the share manager allows the share drivers to control what share server can be used to provision a share - it's possible to look for a global solution, like the one proposed for the generic driver in [18], or implement this at a backend level agnostic to the rest of manila - another reason not to use quotas is that manila may eventually do away with this home-grown quota system in favor of oslo.limit enforced via keystone - another alternative to doing this is via share types, but this really fits as a per-share-network limit rather than a global one *== Optimize the quota processing logic for 'manage share' ==* - haixin ran into a bug where quota operations are incorrectly applied during a share import/manage operation such that a failed manage operation would cause incorrect quota deductions - we discussed possible solutions for the bug, but mostly, this can definitely be fixed - he opened a bug for this [19] *== Share server migration ==* - dviroel presented his thoughts around this new feature [20] which would be helpful for day 2 operations - he suggested that we should not provide for a generic mechanism to perform this migration, given that users would not need it especially if it is not 100% reliable - though there is a generic framework for provisioning share servers, it is only being used by the reference driver (Generic driver) and the Windows SMB driver - shooting for a generic solution would require us to
solve SPOF issues that we currently have with the reference driver - and there is not much investment in doing so - dviroel's solution involves a multi-step migration, but relies on the share drivers to perform atomic migration of all the shares - you can think of this wrt the Generic driver as multi-attaching all the underlying cinder volumes and deleting the older nova instance. - manila's share migration is multi-step allowing for a data copy and a cutover phase - and is cancelable through the data copy phase and before the cutover phase is invoked - so there were some concerns about whether that two-phase approach is required here, given that the operation may not always be cancelable in a generic way *== Manila Container Storage Interface ==* - Tom Barron (tbarron) presented a summary and demo of using the Manila CSI driver on OpenShift to provide RWX storage to containerized applications - Robert Vasek (gman0) explained the core design and the reasoning behind the architecture - Mikhail Fedosin (mfedosin) spoke about the OpenShift operator for Manila CSI and the ease of install and day two operations [21] - the CSI driver has support for snapshots, cloning of snapshots (nfs only at the moment) and topology aside from provisioning, access control and deprovisioning - the team's prioritizing supporting cephfs snapshots and creating shares from cephfs snapshots via subvolume clones in the Victoria cycle Thanks for reading this far! Should you have any questions, don't hesitate to pop into #openstack-manila on freenode.net.
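The two-phase, cancelable share migration flow summarized above (a data copy phase followed by a cutover phase, with cancellation possible only before cutover begins) can be illustrated with a small state machine. This is a sketch of the concept only — not manila's actual implementation — with hypothetical state names chosen for readability:

```python
# Sketch of a two-phase migration: data copy, then cutover. Cancellation
# is only allowed while the source is still authoritative, i.e. before
# the cutover phase starts — which is exactly why a driver-atomic
# migration (no separate phases) cannot offer the same cancel window.

class TwoPhaseMigration:
    CANCELABLE = ("starting", "data_copying", "data_copy_completed")

    def __init__(self):
        self.state = "starting"

    def run_data_copy(self):
        """Phase 1: copy data to the destination; source stays live."""
        self.state = "data_copying"
        # ... copy share data to the destination backend ...
        self.state = "data_copy_completed"

    def cancel(self):
        """Abort the migration; only valid before cutover begins."""
        if self.state in self.CANCELABLE:
            self.state = "cancelled"
            return True
        return False

    def cutover(self):
        """Phase 2: switch export locations to the destination."""
        if self.state != "data_copy_completed":
            raise RuntimeError("cutover requires a completed data copy")
        self.state = "cutting_over"
        # ... repoint exports to the destination, retire the source ...
        self.state = "completed"
```

Once `cutover()` has run, `cancel()` returns `False`: there is no longer a consistent source to fall back to, which mirrors the concern raised in the discussion about operations that are not generically cancelable.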
[1] https://etherpad.opendev.org/p/victoria-ptg-manila (Minutes of the PTG) [2] https://www.youtube.com/playlist?list=PLnpzT0InFrqBKkyIAQdA9RFJnx-geS3lp (YouTube playlist of the PTG recordings) [3] https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-manila-csi-plugin.md (Manila CSI driver docs) [4] https://bugs.launchpad.net/manila/+bugs?field.tag=low-hanging-fruit (Low hanging fruit bugs in Manila) [5] https://etherpad.opendev.org/p/community-goals (Community goal proposals) [6] http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015459.html (Chosen TC Community Goals for Victoria cycle) [7] https://tree.taiga.io/project/gouthampacha-manila-ci-zuul-v3-migration/kanban (Zuulv3-native CI migrations tracker) [8] https://www.softwarefactory-project.io/ (Software Factory project) [9] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team (Policy effort across OpenStack) [10] https://etherpad.opendev.org/p/manila-todos (ToDo list for manila) [11] https://etherpad.opendev.org/p/manila-openstackclient-updates (OSC CLI catchup tracker) [12] https://review.opendev.org/#/c/676722/ (devstack-plugin-ceph support for Ceph Nautilus) [13] https://shaman.ceph.com (Shaman build system for Ceph) [14] https://review.opendev.org/#/c/729292/ (Specification to allow security service updates) [15] https://governance.openstack.org/tc/reference/tags/assert_supports-accessible-upgrade.html (TC tag for accessible upgrades) [16] https://governance.openstack.org/tc/reference/tags/assert_supports-api-interoperability.html (TC tag for API interoperability) [17] https://www.openstack.org/legal/bylaws-of-the-openstack-foundation#ARTICLE_IV._BOARD_OF_DIRECTORS (TC bylaws requiring "approved release") [18] https://review.opendev.org/#/c/510542/ (Limiting the number of shares per Share server) [19] https://bugs.launchpad.net/manila/+bug/1883506 (delete manage_error share will lead to quota error) [20]
https://review.opendev.org/#/c/735970/ (specification for share server migration) [21] https://github.com/openshift/csi-driver-manila-operator -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Jun 17 14:40:46 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 17 Jun 2020 10:40:46 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s an update on what happened in the OpenStack TC last week. You can get more information by checking for changes in the openstack/governance repository. We have some reviews that have been open for quite some time without much review activity. I’d like to kindly ask our technical committee members to review them. # OPEN REVIEWS - [draft] Add assert:supports-standalone https://review.opendev.org/#/c/722399/ [open for 42 days] - Update joining-tc.rst to be general tc-guide.rst https://review.opendev.org/#/c/732983/ - Remove grace period on resolutions https://review.opendev.org/#/c/735361/ - Deprecate neutron-fwaas and neutron-fwaas-dashboard master branch https://review.opendev.org/#/c/735828/ - Record deprecated cycle for deprecated project https://review.opendev.org/#/c/736084/ - Add openstack-ansible-os_adjutant repo https://review.opendev.org/#/c/736140/ - Merging TC and UC into a single body https://review.opendev.org/#/c/734074/ # PROJECT UPDATES - Add whitebox-tempest-plugin under QA https://review.opendev.org/#/c/714480/ - Clarify the support for linux distro https://review.opendev.org/#/c/727238/ - Rename ansible-role-lunasa-hsm deliverable https://review.opendev.org/#/c/731313/ - Select migrate-to-focal goal for Victoria cycle https://review.opendev.org/#/c/731213/ # PROJECTS RETIRED - Remove congress project team https://review.opendev.org/#/c/728818/ # GENERAL CHANGES - Propose Kendall Nelson for vice-chair https://review.opendev.org/#/c/733141/ - Add njohnston liaison preference https://review.opendev.org/#/c/733269/ - Add diablo_rojo liaison
preferences https://review.opendev.org/#/c/733284/ - Add gmann liaison preference https://review.opendev.org/#/c/734894/ - Fix Rally PTL email address https://review.opendev.org/#/c/735342/ # GOAL UPDATES - Add QA branchless projects also in py3.5 support list https://review.opendev.org/#/c/729325/ Regards, Mohammed -- Mohammed Naser VEXXHOST, Inc. From gmann at ghanshyammann.com Wed Jun 17 15:49:34 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 17 Jun 2020 10:49:34 -0500 Subject: [all][qa] uWSGI release broke devstack jobs In-Reply-To: <0d329ad2-3e2b-cc13-c0ad-7289068608ca@suse.com> References: <0d329ad2-3e2b-cc13-c0ad-7289068608ca@suse.com> Message-ID: <172c2f8a443.12a8a8952388754.2863504231742404620@ghanshyammann.com> Updates: The uwsgi issue is fixed and merged for master as well as for stable branches till stein[1], but the gate is still blocked on stable branches due to a neutron-grenade-multinode job failure. The master gate is all green. For neutron-grenade-multinode, we are backporting its working zuulv3 native version to stable branches: - Ussuri: https://review.opendev.org/#/c/735948/ - Train and Stein - need grenade base job to be backported first which we will do. If that can not be done by today, the suggestion is to make those jobs n-v till all backports are merged. NOTE: Extended maintenance stable branches are still broken, we discussed those in qa channel, I will start a different ML thread for the new proposed approach for their testing. [1] https://review.opendev.org/#/q/I82f539bfa533349293dd5a8ce309c9cc0ffb0393 -gmann ---- On Mon, 15 Jun 2020 03:55:29 -0500 Andreas Jaeger wrote ---- > The new uWSGI 2.0.19 release changed packaging (filename and content) > and thus broke devstack. > > The QA team is currently fixing this, changes need backporting to fix > grenade as well. > > On master, change [1] was merged to use distro packages for Ubuntu and > Fedora instead of installation from source for uWSGI.
CentOS and > openSUSE installation is not fixed yet ([2] proposed for openSUSE). > > Thus, right now this should work again: > * Fedora and Ubuntu the devstack jobs on master > > Other distributions and branches and grenade are still broken. > > Please do NOT recheck until all fixes are in. > > If you want to help, best reach out on #openstack-qa. > > Thanks especially to Jens Harbott for driving this! > > Andreas > > [1] https://review.opendev.org/577955 > [2] https://review.opendev.org/735519 > -- > Andreas Jaeger aj at suse.com Twitter: jaegerandi > SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg > (HRB 36809, AG Nürnberg) GF: Felix Imendörffer > GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB > > From rosmaita.fossdev at gmail.com Wed Jun 17 16:35:36 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 17 Jun 2020 12:35:36 -0400 Subject: [cinder] victoria virtual mid-cycle next week Message-ID: <348f1eef-a137-0ec7-a3bc-da71e2a7dfbe@gmail.com> I know it seems early, but next week is R-16, and the spec freeze is R-15, so: Session One of the Cinder Victoria virtual mid-cycle will be held: DATE: 24 JUNE 2020 TIME: 1400-1600 UTC LOCATION: https://bluejeans.com/3228528973 The meeting will be recorded. Please add topics to the etherpad: https://etherpad.opendev.org/p/cinder-victoria-mid-cycles (Session Two will be held during the R-9 week. We'll do a poll to select the day/time around R-11.) 
cheers, brian From gouthampravi at gmail.com Wed Jun 17 16:58:51 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 17 Jun 2020 09:58:51 -0700 Subject: [all][qa] uWSGI release broke devstack jobs In-Reply-To: <172c2f8a443.12a8a8952388754.2863504231742404620@ghanshyammann.com> References: <0d329ad2-3e2b-cc13-c0ad-7289068608ca@suse.com> <172c2f8a443.12a8a8952388754.2863504231742404620@ghanshyammann.com> Message-ID: On Wed, Jun 17, 2020 at 8:58 AM Ghanshyam Mann wrote: > Updates: > > uwsgi issue is fixed and merged for master as well for stable branches > till stein[1] but the gate is still blocked > on stable branches due to neutron-grenade-multinode job failure. master > gate is all green. > > For neutron-grenade-multinode, we are backporting its working zuulv3 > native version > to stable branches: > - Ussuri: https://review.opendev.org/#/c/735948/ > - Train and Stein - need grenade base job to be backported first which we > will do. If that can not be done by today, > the suggestion is to make that jobs as n-v till all backports are merged. > > NOTE: Extended maintenance stable branches are still broken, we discussed > those in qa channel, I will start a different > ML thread for the new proposed approach for their testing. > Thanks for the update Ghanshyam Devstack plugins that use the uwsgi framework from devstack will need a small fix too, in all branches, like: https://review.opendev.org/#/c/735895/ > > [1] > https://review.opendev.org/#/q/I82f539bfa533349293dd5a8ce309c9cc0ffb0393 > > -gmann > > ---- On Mon, 15 Jun 2020 03:55:29 -0500 Andreas Jaeger > wrote ---- > > The new uWSGI 2.0.19 release changed packaging (filename and content) > > and thus broke devstack. > > > > The QA team is currently fixing this, changes need backporting to fix > > grenade as well. > > > > On master, change [1] was merged to use distro packages for Ubuntu and > > Fedora instead of installation from source for uWSGI. 
CentOS and > > openSUSE installation is not fixed yet ([2] proposed for openSUSE). > > > > Thus, right now this should work again: > > * Fedora and Ubuntu the devstack jobs on master > > > > Other distributions and branches and grenade are still broken. > > > > Please do NOT recheck until all fixes are in. > > > > If you want to help, best reach out on #openstack-qa. > > > > Thanks especially to Jens Harbott for driving this! > > > > Andreas > > > > [1] https://review.opendev.org/577955 > > [2] https://review.opendev.org/735519 > > -- > > Andreas Jaeger aj at suse.com Twitter: jaegerandi > > SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg > > (HRB 36809, AG Nürnberg) GF: Felix Imendörffer > > GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jun 17 18:17:26 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 17 Jun 2020 13:17:26 -0500 Subject: [all][tc][stable][qa] Grenade testing for Extended Maintenance stable Message-ID: <172c38005cd.cd05105e394518.669381411351594343@ghanshyammann.com> Hello Everyone, As you know devstack (so does grenade) got broken due to the new uwsgi release, the master branch is fixed and stable branches are in progress[1]. But it is hard to maintain or fix the EM stable for those issues. Especially the grenade job which depends on the source branch (previous branch of one where the job is running). For example, for the stein grenade job, we need to keep the rocky branch working and fix any failures. Till now, we have not removed the grenade testing from any of the EM stable branches because they were running fine till now but with uwsgi issues, those are failing and need more work to fix. This triggers the discussion of grenade testing on EM stable branches. Usual policy for grenade testing is to keep the job running from the 'oldest supported stable +1' branch.
For example, if stein is the oldest supported stable (in the old stable definition) then run grenade from train onwards. But with the Extended Maintenance model, it is not clear whether 'oldest supported stable' means the oldest non-EM (stein) or the oldest EM stable (ocata). To make it clear, we discussed this in the QA channel and came up with the below proposal. * 'oldest' is the oldest non-EM. In current time, it is stein. * With the above 'oldest' definition, we will: ** Make grenade jobs as n-v on all EM stable branches (which is till stable/rocky as of today) + on stable/stein also because that is 'oldest' as of today. ** Keep supporting and running grenade job on 'oldest+1' which is stable/train onwards as of today. NOTE: we will make n-v when they start failing and anyone can volunteer to fix them and change back to voting. elod expressed interest in working on the current failures. If no objection to the above proposal, I will document it on the grenade documents to follow it whenever we see EM failing and need more work. In Tempest, we already have the EM stable testing policy documented which is to support those till they run fine[2]. [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015496.html [2] https://docs.openstack.org/tempest/latest/stable_branch_support_policy.html [3] http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2020-06-17.log.html#t2020-06-17T14:12:42 -gmann From fungi at yuggoth.org Wed Jun 17 19:14:04 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 17 Jun 2020 19:14:04 +0000 Subject: [all][tc][stable][qa] Grenade testing for Extended Maintenance stable In-Reply-To: <172c38005cd.cd05105e394518.669381411351594343@ghanshyammann.com> References: <172c38005cd.cd05105e394518.669381411351594343@ghanshyammann.com> Message-ID: <20200617191403.wt5lhawk5r3gkn5x@yuggoth.org> [...] > Usual policy for grenade testing is to keep the job running from > the 'oldest supported stable +1' branch.
For example, if stein is > the oldest supported stable (in the old stable definition) then > run grenade from train onwards. But with the Extended Maintainance > model, defining 'oldest supported stable' is not clear whether it > is the oldest non-EM(stein) or oldest EM stable(ocata). > > To make it clear, we discussed this in the QA channel and come up > with the below proposal. > > * 'oldest' is the oldest non-EM. In current time, it is stein. > * With the above 'oldest' definition, we will: > ** Make grenade jobs as n-v on all EM stable branches (which is > till stable/rocky as of today) + on stable/stein also because that > is 'oldest' as of today. > ** Keep supporting and running grenade job on 'oldest+1' which is > stable/train onwards as of today. [...] The way to phrase this consistent with our branch status terminology is: We only perform upgrade testing between the current source contents of adjacent Maintained or Development status branches (not Extended Maintenance or Unmaintained status branches, nor specific tags such as End Of Life versions). This means that the branch *prior* to any branch Grenade tests must be in a Maintained status, and so we do not perform upgrade testing on the oldest Maintained status branch. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Wed Jun 17 19:38:12 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 17 Jun 2020 12:38:12 -0700 Subject: Re: [all][tc][stable][qa] Grenade testing for Extended Maintenance stable In-Reply-To: <172c38005cd.cd05105e394518.669381411351594343@ghanshyammann.com> References: <172c38005cd.cd05105e394518.669381411351594343@ghanshyammann.com> Message-ID: On Wed, Jun 17, 2020, at 11:17 AM, Ghanshyam Mann wrote: > Hello Everyone, > > As you know devstack (so does grenade) got broken due to uwsgi new > release, the master branch is fixed > and stable branches are in progress[1]. But It is hard to maintain or > fix the EM stable for those issues. Especially > the greande job which depends on the source branch (previous branch of > one where the job is running). > For example, for stein grenade job, we need to keep rocky branch > working and fix if failure. > > Till now, we have not removed the grenade testing from any of the EM > stable branches because they > were running fine till now but with uwsgi issues, those are failing and > need more work to fix. This triggers > the discussion of grenade testing on EM stable branches. > > Usual policy for grenade testing is to keep the job running from the > 'oldest supported stable +1' branch. > For example, if stein is the oldest supported stable (in the old stable > definition) then run grenade from train onwards. > But with the Extended Maintainance model, defining 'oldest supported > stable' is not clear whether it is the oldest > non-EM(stein) or oldest EM stable(ocata). > > To make it clear, we discussed this in the QA channel and come up with > the below proposal. > > * 'oldest' is the oldest non-EM. In current time, it is stein.
> * With the above 'oldest' definition, we will: > ** Make grenade jobs as n-v on all EM stable branches (which is till > stable/rocky as of today) + on stable/stein also because that is > 'oldest' as of today. > ** Keep supporting and running grenade job on 'oldest+1' which is > stable/train onwards as of today. > > NOTE: we will make n-v when they start failing and anyone can volunteer > to fix them and change back to voting. > elod expressed interest to work on current failure. Another important note for if/when there are grenade failures again: fixes to devstack and grenade that affect the grenade job need to be proposed in a "bottom up" fashion. The normal stable backport procedures are the wrong process because grenade uses the previous branch to test the upgrade to the current branch. This means we have to fix the previous branch first. Instead of backporting we need to "forward-port" which we can do using depends-on between branches to ensure the entire series across all branches functions. Calling this out because it is a departure from normal operations, but upgrade testing and grenade make it necessary. > > If no objection to the above proposal, I will document it on the > grenade documents to follow it whenever we see EM failing and need more > work. > In Tempest, we already have the EM stable testing policy documented > which is to support those till they run fine[2].
> > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015496.html > [2] > https://docs.openstack.org/tempest/latest/stable_branch_support_policy.html > [3] > http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2020-06-17.log.html#t2020-06-17T14:12:42 > > -gmann > > From kennelson11 at gmail.com Wed Jun 17 19:39:53 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 17 Jun 2020 12:39:53 -0700 Subject: [all][InteropWG] Request to review tests your Distro for 2020.06 draft guidelines In-Reply-To: <1708160800.349541.1588966443299@mail.yahoo.com> References: <1708160800.349541.1588966443299.ref@mail.yahoo.com> <1708160800.349541.1588966443299@mail.yahoo.com> Message-ID: Hello Prakash, I noticed you filed a blueprint in the refstack project[1] about this on launchpad which is no longer being used as they migrated to StoryBoard[2] a while ago. If you can open a story with the information you put in the blueprint, it would be good to keep all the work tracked in a single place. Thanks! -Kendall (diablo_rojo) [1] https://blueprints.launchpad.net/refstack/+spec/interop-2020.06 [2] https://storyboard.openstack.org/#!/project_group/61 On Fri, May 8, 2020 at 12:35 PM prakash RAMCHANDRAN wrote: > Hi all, > > Please review your tests for Draft 2020.06 guidelines to be proposed to > Board. > You can do that on and should start appearing in next 24-48 hours > depending on Zuul > https://refstack.openstack.org/#/community_results > > Plus please register for InteropWG PTG meet on June 1 st 6AM-*AM PDT slot > > See etherpads > PTG event > https://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020 > Specific invite Tempest, TC members, QA/API SIG teams, Edge SIG WG , > Baremetal SIG (for Ironic), K8s SIG (for OoK / KoO) & Scientific SIG teams. 
> > Other discussions > > https://etherpad.opendev.org/p/interop2020 > > > > Welcome to join weekly Friday Meetings (Refer NA/EU/APJ in etherpad > below) > > https://etherpad.opendev.org/p/interop > > Appreciate all support from committee and special appreciation to Mark T > Voelker for providing the bridge. > He has been immensely valuable in bringing as Vice Chair the wealth of > history to enable this Interop WG. > > Thanks > Prakash > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Wed Jun 17 23:07:28 2020 From: sorrison at gmail.com (Sam Morrison) Date: Thu, 18 Jun 2020 09:07:28 +1000 Subject: [neutron] Tap-as-a-service releases on pypi behind In-Reply-To: References: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> Message-ID: Thank you, that’s working now. Cheers, Sam > On 17 Jun 2020, at 2:42 pm, Lajos Katona wrote: > > Hi, > Thanks Yamamoto. > @Sam: I uploaded the releases to pypi, could you please check it > > Regards > Lajos Katona (lajoskatona) > > Takashi Yamamoto > wrote (on 17 Jun 2020, Wed, 4:38): > hi, > > On Tue, Jun 16, 2020 at 4:05 PM Lajos Katona > wrote: > > > > Hi Takashi, > > > > Thanks, my pypi account name is: lajoskatona > > https://pypi.org/user/lajoskatona/ > > thank you. added as an owner. > > > > > Regards > > Lajos > > > > Takashi Yamamoto > wrote (on 16 Jun 2020, Tue, 7:09): > >> > >> i can add you as a maintainer. > >> please tell me your pypi account. > >> > >> On Tue, Jun 16, 2020 at 2:00 PM Lajos Katona > wrote: > >> > > >> > Hi Sam, > >> > It seems that as taas is not under openstack governance Yamamoto is the only maintainer on pypi, so only he has the right to upload a new release. > >> > > >> > Regards > >> > Lajos > >> > > >> > Sam Morrison > wrote (on 16 Jun 2020, Tue, 3:25): > >> >> It looks like tap-as-a-service project isn’t getting its latest releases pushed to pypi and this is breaking networking-midonet in train release. > >> >> > >> >> Source [1] has tags for releases 4.0.0, 5.0.0 and 6.0.0 but these aren’t in pypi [2] > >> >> > >> >> I need a fix [3] to upper-constraints to get this to work but just realised pypi is behind > >> >> > >> >> Can someone help me please. > >> >> > >> >> Thanks, > >> >> Sam > >> >> > >> >> [1] https://opendev.org/x/tap-as-a-service/ > >> >> [2] https://pypi.org/project/tap-as-a-service/ > >> >> [3] https://review.opendev.org/#/c/735754/ > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Jun 17 23:17:33 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 17 Jun 2020 16:17:33 -0700 Subject: [neutron] Tap-as-a-service releases on pypi behind In-Reply-To: References: <7C9434A5-F2F8-425E-97C8-53BB80FFCEF3@gmail.com> Message-ID: <88902bbe-906e-44c5-a64c-6b19b488a4b5@www.fastmail.com> On Mon, Jun 15, 2020, at 9:59 PM, Lajos Katona wrote: > Hi Sam, > It seems that as taas is not under openstack governance Yamamoto is the > only maintainer on pypi, so only he has the right to upload a new release. > > Regards > Lajos > > Sam Morrison wrote (on 16 Jun 2020, Tue, 3:25): > > It looks like tap-as-a-service project isn’t getting its latest releases pushed to pypi and this is breaking networking-midonet in train release. > > > > Source [1] has tags for releases 4.0.0, 5.0.0 and 6.0.0 but these aren’t in pypi [2] > > > > I need a fix [3] to upper-constraints to get this to work but just realised pypi is behind > > > > Can someone help me please.
> > > > > > Thanks, > > Sam > > > > > > [1] https://opendev.org/x/tap-as-a-service/ > > [2] https://pypi.org/project/tap-as-a-service/ > > [3] https://review.opendev.org/#/c/735754/ > > As a side note you can do automated releases and uploads to pypi without being under openstack governance. You still need to add the openstack infra (really opendev now but the account hasn't changed) account as a maintainer then add the jobs in Zuul. Then when tags are pushed to Gerrit this can happen automatically. That said none of this is required if you prefer to push releases yourself. Clark From pramchan at yahoo.com Thu Jun 18 03:56:29 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Thu, 18 Jun 2020 03:56:29 +0000 (UTC) Subject: [all][InteropWG] Request to review tests your Distro for 2020.06 draft guidelines In-Reply-To: References: <1708160800.349541.1588966443299.ref@mail.yahoo.com> <1708160800.349541.1588966443299@mail.yahoo.com> Message-ID: <786100428.2634998.1592452589987@mail.yahoo.com> Sent from Yahoo Mail on Android On Wed, Jun 17, 2020 at 12:40 PM, Kendall Nelson wrote: Hello Prakash,  I noticed you filed a blueprint in the refstack project[1] about this on launchpad which is no longer being used as they migrated to StoryBoard[2] a while ago. If you can open a story with the information you put in the blueprint, it would be good to keep all the work tracked in a single place.  Thanks! 
-Kendall (diablo_rojo) [1] https://blueprints.launchpad.net/refstack/+spec/interop-2020.06[2] https://storyboard.openstack.org/#!/project_group/61 On Fri, May 8, 2020 at 12:35 PM prakash RAMCHANDRAN wrote: Hi all, Please review your tests for Draft 2020.06 guidelines to be proposed to Board.You can do that on and should start appearing in next 24-48 hours depending on Zuulhttps://refstack.openstack.org/#/community_results Plus please register for InteropWG  PTG meet on  June 1 st 6AM-*AM PDT slot See etherpadsPTG eventhttps://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020Specific invite Tempest, TC members, QA/API SIG teams, Edge SIG WG , Baremetal SIG (for Ironic), K8s SIG (for OoK / KoO) & Scientific SIG teams. Other discussions  https://etherpad.opendev.org/p/interop2020 Welcome to join weekly  Friday Meetings (Refer NA/EU/APJ in etherpad below)  https://etherpad.opendev.org/p/interop Appreciate all support from committee and special appreciation to Mark T Voelker for providing the bridge.He has been immensely valuable in bringing as Vice Chair the wealth of history to enable this Interop WG. ThanksPrakash  -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Thu Jun 18 07:42:12 2020 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 18 Jun 2020 09:42:12 +0200 Subject: [tripleo] Stop using host's /run|/var/run inside containers Message-ID: <04e5224e-3fe8-0143-2529-6a75d2cbfd2e@redhat.com> Hello all! While working on podman integration, especially the SELinux part of it, I was wondering why we kept using the host's /run (or its replicated /var/run) location inside containers. And I'm still wondering, 2 years later ;). Reasons: - from time to time, there are patches adding a ":z" flag to the run bind-mount. This breaks the system, since the host systemd can't write/access container_file_t SELinux context. Doing a relabeling might therefore prevent a service restart. 
- in order to keep things in a clean, understandable tree, getting a dedicated shared directory for the container's sockets makes sense, as it might make things easier to check (for instance, "is this or that service running in a container?") - if an operator runs a restorecon during runtime, it will break container services - mounting /run directly in the containers might expose unwanted sockets, such as DBus (this creates SELinux denials, and we're monkey-patching things and doing really ugly changes to prevent it). It's more than probable other unwanted shared sockets end in the containers, and it might expose the host at some point. Here again, from time to time we see new SELinux policies being added in order to solve the denials, and it creates big holes in the host security AFAIK, no *host* service is accessed by any container services, right? If so, could we imagine moving the shared /run to some other location on the host, such as /run/containers, or /container-run, or any other *dedicated* location we can manage as we want on a SELinux context? I would therefore get some feedback about this proposed change. For the containers, nothing should change: - they will get their /run populated with other containers sockets - they will NOT be able to access the host services at all. Thank you for your feedback, ideas and thoughts! Cheers, C. -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From cjeanner at redhat.com Thu Jun 18 07:59:05 2020 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 18 Jun 2020 09:59:05 +0200 Subject: [tripleo] Stop using host's /run|/var/run inside containers In-Reply-To: <04e5224e-3fe8-0143-2529-6a75d2cbfd2e@redhat.com> References: <04e5224e-3fe8-0143-2529-6a75d2cbfd2e@redhat.com> Message-ID: On 6/18/20 9:42 AM, Cédric Jeanneret wrote: > Hello all! > > While working on podman integration, especially the SELinux part of it, > I was wondering why we kept using the host's /run (or its replicated > /var/run) location inside containers. And I'm still wondering, 2 years > later ;). > > Reasons: > - from time to time, there are patches adding a ":z" flag to the run > bind-mount. This breaks the system, since the host systemd can't > write/access container_file_t SELinux context. Doing a relabeling might > therefore prevent a service restart. > > - in order to keep things in a clean, understandable tree, getting a > dedicated shared directory for the container's sockets makes sense, as > it might make things easier to check (for instance, "is this or that > service running in a container?") > > - if an operator runs a restorecon during runtime, it will break > container services > > - mounting /run directly in the containers might expose unwanted > sockets, such as DBus (this creates SELinux denials, and we're > monkey-patching things and doing really ugly changes to prevent it). > It's more than probable other unwanted shared sockets end in the > containers, and it might expose the host at some point. Here again, from > time to time we see new SELinux policies being added in order to solve > the denials, and it creates big holes in the host security > > AFAIK, no *host* service is accessed by any container services, right? 
> If so, could we imagine moving the shared /run to some other location on > the host, such as /run/containers, or /container-run, or any other > *dedicated* location we can manage as we want on a SELinux context? Small addendum/errata: some containers DO need to access some specific sockets/directories in /run, such as /run/netns and, probably, /run/openvswitch (iirc this one isn't running in a container). For those specific cases, we can of course mount the specific locations inside the container's /run. This addendum doesn't change the main question though :) > > I would therefore get some feedback about this proposed change. > > For the containers, nothing should change: > - they will get their /run populated with other containers sockets > - they will NOT be able to access the host services at all. > > Thank you for your feedback, ideas and thoughts! > > Cheers, > > C. > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From johannes.kulik at sap.com Thu Jun 18 08:16:34 2020 From: johannes.kulik at sap.com (Johannes Kulik) Date: Thu, 18 Jun 2020 10:16:34 +0200 Subject: [neutron][fwaas] Removal of neutron-fwaas projects from the neutron stadium In-Reply-To: <20200616104606.mndzm2bgkzchq645@skaplons-mac> References: <20200616104606.mndzm2bgkzchq645@skaplons-mac> Message-ID: <81303bec-2f13-f926-345a-b8ecd868bcf1@sap.com> On 6/16/20 12:46 PM, Slawek Kaplonski wrote: > Hi, > > In Shanghai PTG we agreed that due to lack of maintainers of neutron-fwaas > project we are going to deprecate it in neutron stadium in Ussuri cycle. > Since then we asked couple of times about volunteers who would like to maintain > this project but unfortunatelly there is still lack of such maintainers. 
> So now, as we are already in Victoria cycle, I just proposed serie of patches > [1] to remove master branch of neutron-fwaas and neutron-fwaas-dashboard from > the neutron stadium. Stable branches will be still there and can be maintained > but there will be no any code in master branch and there will be no new releases > of those 2 projects in Victoria. > > If You are using this project and wants to maintain it, You can respin it in x/ > namespace if needed. > > Feel free to ping me on IRC (slaweq) or by email if You would have any questions > about that. > > [1] https://review.opendev.org/#/q/topic:retire-neutron-fwaas+(status:open+OR+status:merged) > Hi, with neutron-fwaas gone, what will happen to the APIs for fwaas? Will they get deprecated, too, at some point in time? We have a plan to use those APIs in our custom ml2/l3 drivers. Would that still make sense with neutron-fwaas out of the picture? Have a nice day, Johannes -- Johannes Kulik IT Architecture Senior Specialist *SAP SE *| Rosenthaler Str. 30 | 10178 Berlin | Germany From dtantsur at redhat.com Thu Jun 18 09:18:11 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 18 Jun 2020 11:18:11 +0200 Subject: [all][release] One following-cycle release model to bind them all In-Reply-To: <2f187d4a-c97c-07ea-6473-d0d0cb86eafb@debian.org> References: <6ac2ad5d-2fd7-c5b1-4889-415613023292@openstack.org> <743ef518-7029-13f0-53e8-5851d52241a2@debian.org> <20200610132344.ncfuuqu2pgx6skvp@yuggoth.org> <46ef2c62-e7fa-f853-ff3f-90e6e69fbee1@goirand.fr> <20200610165646.iwvckxm4axyvywsa@yuggoth.org> <2f187d4a-c97c-07ea-6473-d0d0cb86eafb@debian.org> Message-ID: Hi, On Wed, Jun 10, 2020 at 11:59 PM Thomas Goirand wrote: > On 6/10/20 6:56 PM, Jeremy Stanley wrote: > >> This means that we wont have tags for the pre-release. > > > > We will, they'll just have release-like numbers on them > > It doesn't make sense. 
> > >> If the issue is just cosmetic as you say, then let's keep rc1 as > >> the name for the pre-release version. > > > > The workflow difference is primarily cosmetic (other than not > > necessarily needing to re-tag the last release candidate at > > coordinated release time). > > Is the re-tag of services THAT time/resource consuming? > > > The issue it solves is not cosmetic: we > > currently have two primary release models, one for services and > > another for libraries. This would result in following the same model > > for services as we've been using to release libraries for years, > > just at a different point in the cycle than when libraries are > > released. > > When I'll look into my Debian Q/A page [1] I wont be able to know if I > missed packaging final release just by looking at version numbers (ie: > track if there's still some RC version remaining and fix...). > How do you solve it for thousands of other packages that don't do RC? > > I'd be for the opposite move: tagging libraries as RC before the final > release would make a lot of sense, and help everyone identify what these > versions represent. > > On 6/10/20 6:33 PM, Mark Goddard wrote: > > I think the issue is that currently there is a period of time in which > > every project has a release candidate which can be packaged and > > tested, prior to the release. In the new model there is no obligation > > to release anything prior to GA, and I expect most teams would not. > > There's also that above that Mark wrote... > > On 6/10/20 7:05 PM, Jeremy Stanley wrote: > > You and I clearly read very different proposals then. My > > understanding is that this does not get rid of the period of time > > you're describing, just changes the tags we use in it: > > With this proposal, every project will treat the scheduled first RC as > the release time itself, and move on to work on master. 
Even worse: > since they are supposed to be just RC, you'll see that projects will > care less to be on-time for it, and the final version from projects will > be cut in a period varying from start of what we used to call the RC1, > to the final release date. > In ironic we have been doing what Thierry proposed for years without seeing any negative effects. And yes, the first RC *is* a release, so the projects will pretty rightfully treat it like that. > > So this effectively, removes the pre-release period which we used to > have dedicated for debugging and stabilising. > Maybe it works this way for Nova and other core projects, but it has never worked for everyone else. People either consume master (like RDO) or final releases (like pretty much every final consumer). Maybe, judging by your messages, Debian was actually the (only) consumer that cared about RC releases, although I cannot comment on how much actual testing we got specifically from people installing RC releases from Debian. Dmitry > > On 6/10/20 6:56 PM, Jeremy Stanley wrote: > > I think the proposal has probably confused some folks > > by saying, "stop having release candidates [...and instead have a] > > candidate for inclusion in the coordinated OpenStack release." > > Jeremy, my opinion is that you are the person not understanding what > this proposal implies, and what consequence it will have on how projects > will release final versions. > > > It would basically still be a "release candidate" in spirit, just not > > in name, and not using the same tagging scheme as we have > > traditionally used for release candidates of service projects. > > Please keep release candidate not just "in spirit", but effectively, > with the correct name matching what they are supposed to be. Otherwise, > you're effectively removing what the RCs were. > > If that is what you want to do (ie: stop having release candidates), > because of various reasons, just explain why and move on. 
I would > understand such a move: > - if we declare OpenStack more mature, and needing less care for > coordinated releases. > - if there's not enough people working on stable branches between RC and > final releases. > - if OpenStack isn't producing lots of bug-fixes after the first RCs, > and they are now useless. > > I wouldn't understand if RC versions would be gone, just because numbers > don't look pretty. That's IMO a wrong answer to a wrong problem. > > Cheers, > > Thomas Goirand (zigo) > > [1] > > https://qa.debian.org/developer.php?login=openstack-devel at lists.alioth.debian.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kotobi at dkrz.de Thu Jun 18 09:40:01 2020 From: kotobi at dkrz.de (Amjad Kotobi) Date: Thu, 18 Jun 2020 11:40:01 +0200 Subject: [nova] stein to train upgrade Message-ID: Hi all, I tried to upgrade nova-compute from stein -> train release and “openstack-nova-compute” couldn’t able to start and from log here is coming up ################### 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service Traceback (most recent call last): 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 810, in run_service 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service service.start() 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/service.py", line 174, in start 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service self.manager.init_host() 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1337, in init_host 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service expected_attrs=['info_cache', 'metadata', 'numa_topology']) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 177, in wrapper 2020-06-18 11:31:08.037 
303344 ERROR oslo_service.service args, kwargs) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 241, in object_class_action_versions 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service args=args, kwargs=kwargs) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 181, in call 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=self.transport_options) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 129, in _send 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=transport_options) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 646, in send 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=transport_options) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 636, in _send 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service raise result 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service RemoteError: Remote error: DBError (pymysql.err.InternalError) (1054, u"Unknown column 'instances.hidden' in 'field list ……. ################### I only took beginning part, the field doesn’t exist in DB, or? I also upgrade rest of the nova components[api-scheduler-conductor,…] but that didn’t go away. Any idea how to tackle this? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kotobi at dkrz.de Thu Jun 18 10:07:31 2020 From: kotobi at dkrz.de (Amjad Kotobi) Date: Thu, 18 Jun 2020 12:07:31 +0200 Subject: [nova][oslo][compute][conductor] stein to train upgrade In-Reply-To: References: Message-ID: <9CB23AC1-82A4-4C07-BA2C-03C5999D16B6@dkrz.de> Adding more logs from “nova-conductor.log" 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last): 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1236, in _execute_context 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters cursor, statement, parameters, context 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 536, in do_execute 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters) 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 170, in execute 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters result = self._query(query) 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 328, in _query 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters conn.query(q) 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 516, in query 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters self._affected_rows = self._read_query_result(unbuffered=unbuffered) 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 727, in _read_query_result 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters result.read() 2020-06-18 
12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1066, in read 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters first_packet = self.connection._read_packet() 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 683, in _read_packet 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters packet.check_error() 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/protocol.py", line 220, in check_error 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters err.raise_mysql_exception(self._data) 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 109, in raise_mysql_exception 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters raise errorclass(errno, errval) 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters InternalError: (1054, u"Unknown column 'instances.hidden' in 'field list'") 2020-06-18 12:06:00.121 118508 ERROR oslo_db.sqlalchemy.exc_filters > On 18. 
Jun 2020, at 11:40, Amjad Kotobi wrote: > > Hi all, > > I tried to upgrade nova-compute from stein -> train release and “openstack-nova-compute” couldn’t able to start and from log here is coming up > > ################### > > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service Traceback (most recent call last): > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 810, in run_service > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service service.start() > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/service.py", line 174, in start > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service self.manager.init_host() > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1337, in init_host > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service expected_attrs=['info_cache', 'metadata', 'numa_topology']) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 177, in wrapper > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service args, kwargs) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 241, in object_class_action_versions > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service args=args, kwargs=kwargs) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 181, in call > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=self.transport_options) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 129, in _send > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service 
transport_options=transport_options) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 646, in send > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=transport_options) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 636, in _send > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service raise result > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service RemoteError: Remote error: DBError (pymysql.err.InternalError) (1054, u"Unknown column 'instances.hidden' in 'field list > ……. > ################### > > I only took beginning part, the field doesn’t exist in DB, or? > I also upgrade rest of the nova components[api-scheduler-conductor,…] but that didn’t go away. > > Any idea how to tackle this? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.dibbo at stfc.ac.uk Thu Jun 18 10:11:29 2020 From: alexander.dibbo at stfc.ac.uk (Alexander Dibbo - UKRI STFC) Date: Thu, 18 Jun 2020 10:11:29 +0000 Subject: [nova] stein to train upgrade In-Reply-To: References: Message-ID: <77c30526004b436db34c558e8432cd6c@stfc.ac.uk> Hi, I’ve just worked through this issue myself, the issue is due to `nova-mange db sync` not completing correctly. 
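For anyone hitting the same traceback, a quick way to confirm the diagnosis before touching anything is to check whether the Train-era column actually exists. A minimal sketch, assuming the default "nova" database name (the column and table names are taken from the error above); it only builds and prints the SQL so it can be reviewed before running it:

```shell
# Build (not execute) a query that reports whether the column from the
# traceback exists; a count of 0 means "nova-manage db sync" never applied
# the Train migration. "nova" is the default schema name -- adjust if yours
# differs, then pipe the statement to your MySQL client on the DB host.
sql="SELECT COUNT(*) FROM information_schema.columns WHERE table_schema = 'nova' AND table_name = 'instances' AND column_name = 'hidden';"
echo "$sql"
# e.g. on the database host: echo "$sql" | mysql   (credentials vary per site)
```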
In my case the issue was caused by us having upgraded from MariaDB 10.1 to 10.3: The issue is described here for another instance: https://bugs.launchpad.net/kolla-ansible/+bug/1856296 And here is the instruction on how to fix it: https://lxadm.com/MySQL:_changing_ROW_FORMAT_to_DYNAMIC_or_COMPRESSED For me I had to alter: Nova.instances Nova.shadow_instances Nova_cell0.instances Nova_cell0.shadow_instances I hope this helps Regards Alexander Dibbo – Cloud Architect For STFC Cloud Documentation visit https://stfc-cloud-docs.readthedocs.io To raise a support ticket with the cloud team please email cloud-support at gridpp.rl.ac.uk To receive notifications about the service please subscribe to our mailing list at: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD To receive fast notifications or to discuss usage of the cloud please join our Slack: https://stfc-cloud.slack.com/ From: Amjad Kotobi Sent: 18 June 2020 10:40 To: openstack-discuss at lists.openstack.org Subject: [nova] stein to train upgrade Hi all, I tried to upgrade nova-compute from stein -> train release and “openstack-nova-compute” couldn’t able to start and from log here is coming up ################### 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service Traceback (most recent call last): 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 810, in run_service 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service service.start() 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/service.py", line 174, in start 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service self.manager.init_host() 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1337, in init_host 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service expected_attrs=['info_cache', 'metadata', 'numa_topology']) 
2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 177, in wrapper 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service args, kwargs) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 241, in object_class_action_versions 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service args=args, kwargs=kwargs) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 181, in call 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=self.transport_options) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 129, in _send 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=transport_options) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 646, in send 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=transport_options) 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 636, in _send 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service raise result 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service RemoteError: Remote error: DBError (pymysql.err.InternalError) (1054, u"Unknown column 'instances.hidden' in 'field list ……. ################### I only took beginning part, the field doesn’t exist in DB, or? I also upgrade rest of the nova components[api-scheduler-conductor,…] but that didn’t go away. Any idea how to tackle this? This email and any attachments are intended solely for the use of the named recipients. 
If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. Opinions, conclusions or other information in this message and attachments that are not related directly to UKRI business are solely those of the author and do not represent the views of UKRI. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Jun 18 10:50:18 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 18 Jun 2020 11:50:18 +0100 Subject: [nova] stein to train upgrade In-Reply-To: <77c30526004b436db34c558e8432cd6c@stfc.ac.uk> References: <77c30526004b436db34c558e8432cd6c@stfc.ac.uk> Message-ID: On Thu, 2020-06-18 at 10:11 +0000, Alexander Dibbo - UKRI STFC wrote: > Hi, > > I’ve just worked through this issue myself, the issue is due to `nova-mange db sync` not completing correctly. 
> > In my case the issue was caused by us having upgraded from MariaDB 10.1 to 10.3: > > The issue is described here for another instance: > https://bugs.launchpad.net/kolla-ansible/+bug/1856296 > > And here is the instruction on how to fix it: > https://lxadm.com/MySQL:_changing_ROW_FORMAT_to_DYNAMIC_or_COMPRESSED > > For me I had to alter: > Nova.instances > Nova.shadow_instances > Nova_cell0.instances > Nova_cell0.shadow_instances > > I hope this helps > > Regards > > Alexander Dibbo – Cloud Architect > For STFC Cloud Documentation visit https://stfc-cloud-docs.readthedocs.io; > To raise a support ticket with the cloud team please email cloud-support at gridpp.rl.ac.uk cloud-support at gridpp.rl.ac.uk> > To receive notifications about the service please subscribe to our mailing list at: > https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD > To receive fast notifications or to discuss usage of the cloud please join our Slack: https://stfc-cloud.slack.com/ > > From: Amjad Kotobi > Sent: 18 June 2020 10:40 > To: openstack-discuss at lists.openstack.org > Subject: [nova] stein to train upgrade > > Hi all, > > I tried to upgrade nova-compute from stein -> train release and “openstack-nova-compute” couldn’t able to start and > from log here is coming up > > ################### > > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service Traceback (most recent call last): > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- > packages/oslo_service/service.py", line 810, in run_service > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service service.start() > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/service.py", > line 174, in start > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service self.manager.init_host() > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- > packages/nova/compute/manager.py", line 1337, in 
init_host > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service expected_attrs=['info_cache', 'metadata', > 'numa_topology']) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- > packages/oslo_versionedobjects/base.py", line 177, in wrapper > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service args, kwargs) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- > packages/nova/conductor/rpcapi.py", line 241, in object_class_action_versions > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service args=args, kwargs=kwargs) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- > packages/oslo_messaging/rpc/client.py", line 181, in call > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=self.transport_options) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- > packages/oslo_messaging/transport.py", line 129, in _send > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=transport_options) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- > packages/oslo_messaging/_drivers/amqpdriver.py", line 646, in send > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=transport_options) > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- > packages/oslo_messaging/_drivers/amqpdriver.py", line 636, in _send > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service raise result > 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service RemoteError: Remote error: DBError > (pymysql.err.InternalError) (1054, u"Unknown column 'instances.hidden' in 'field list > ……. > ################### > > I only took beginning part, the field doesn’t exist in DB, or? > I also upgrade rest of the nova components[api-scheduler-conductor,…] but that didn’t go away. 
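(The table alterations listed above map to straightforward DDL; the statements below are a sketch only, using the usual lowercase schema names rather than the capitalisation in the mail, so verify against the linked lxadm instructions and back up the databases before running anything.)

```sql
-- Hypothetical sketch of the ROW_FORMAT change described above;
-- confirm details against the linked lxadm instructions first.
ALTER TABLE nova.instances ROW_FORMAT=DYNAMIC;
ALTER TABLE nova.shadow_instances ROW_FORMAT=DYNAMIC;
ALTER TABLE nova_cell0.instances ROW_FORMAT=DYNAMIC;
ALTER TABLE nova_cell0.shadow_instances ROW_FORMAT=DYNAMIC;
```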
also just a note you should always update the controller first and computes second. if you do otherwise it can cause issues with rpc. but yes this is caused by a failure of the db migration to run which is done via "nova-manage db sync" on a controller node. the compute nodes should not have db access or creds in their conf files. > > Any idea how to tackle this? > > > > This email and any attachments are intended solely for the use of the named recipients. If you are not the intended > recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the > sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every > reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the > recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to presence of any viruses. Opinions, > conclusions or other information in this message and attachments that are not related directly to UKRI business are > solely those of the author and do not represent the views of UKRI. From kotobi at dkrz.de Thu Jun 18 11:46:24 2020 From: kotobi at dkrz.de (Amjad Kotobi) Date: Thu, 18 Jun 2020 13:46:24 +0200 Subject: [nova] stein to train upgrade In-Reply-To: References: <77c30526004b436db34c558e8432cd6c@stfc.ac.uk> Message-ID: <8D960822-EDDB-4D00-9042-AA5B0C13C429@dkrz.de> Thank you all, It did help me out and solved the issue. > On 18. Jun 2020, at 12:50, Sean Mooney wrote: > > On Thu, 2020-06-18 at 10:11 +0000, Alexander Dibbo - UKRI STFC wrote: >> Hi, >> >> I’ve just worked through this issue myself, the issue is due to `nova-manage db sync` not completing correctly.
>> >> In my case the issue was caused by us having upgraded from MariaDB 10.1 to 10.3: >> >> The issue is described here for another instance: >> https://bugs.launchpad.net/kolla-ansible/+bug/1856296 >> >> And here is the instruction on how to fix it: >> https://lxadm.com/MySQL:_changing_ROW_FORMAT_to_DYNAMIC_or_COMPRESSED >> >> For me I had to alter: >> Nova.instances >> Nova.shadow_instances >> Nova_cell0.instances >> Nova_cell0.shadow_instances >> >> I hope this helps >> >> Regards >> >> Alexander Dibbo – Cloud Architect >> For STFC Cloud Documentation visit https://stfc-cloud-docs.readthedocs.io >; >> To raise a support ticket with the cloud team please email cloud-support at gridpp.rl.ac.uk > cloud-support at gridpp.rl.ac.uk> >> To receive notifications about the service please subscribe to our mailing list at: >> https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STFC-CLOUD >> To receive fast notifications or to discuss usage of the cloud please join our Slack: https://stfc-cloud.slack.com/ >> >> From: Amjad Kotobi >> Sent: 18 June 2020 10:40 >> To: openstack-discuss at lists.openstack.org >> Subject: [nova] stein to train upgrade >> >> Hi all, >> >> I tried to upgrade nova-compute from stein -> train release and “openstack-nova-compute” couldn’t able to start and >> from log here is coming up >> >> ################### >> >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service Traceback (most recent call last): >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- >> packages/oslo_service/service.py", line 810, in run_service >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service service.start() >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/nova/service.py", >> line 174, in start >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service self.manager.init_host() >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- >> 
packages/nova/compute/manager.py", line 1337, in init_host >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service expected_attrs=['info_cache', 'metadata', >> 'numa_topology']) >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- >> packages/oslo_versionedobjects/base.py", line 177, in wrapper >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service args, kwargs) >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- >> packages/nova/conductor/rpcapi.py", line 241, in object_class_action_versions >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service args=args, kwargs=kwargs) >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- >> packages/oslo_messaging/rpc/client.py", line 181, in call >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=self.transport_options) >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- >> packages/oslo_messaging/transport.py", line 129, in _send >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=transport_options) >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- >> packages/oslo_messaging/_drivers/amqpdriver.py", line 646, in send >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service transport_options=transport_options) >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service File "/usr/lib/python2.7/site- >> packages/oslo_messaging/_drivers/amqpdriver.py", line 636, in _send >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service raise result >> 2020-06-18 11:31:08.037 303344 ERROR oslo_service.service RemoteError: Remote error: DBError >> (pymysql.err.InternalError) (1054, u"Unknown column 'instances.hidden' in 'field list >> ……. >> ################### >> >> I only took beginning part, the field doesn’t exist in DB, or? 
>> I also upgrade rest of the nova components[api-scheduler-conductor,…] but that didn’t go away. > also just a note you should always update the controller first and computes second. > if you do otherwise it can cause issues with rpc. > > but yes this is caused by a failure of the db migration to run which is done > via "nova-manage db sync" on a controller node. the compute nodes should not have > db access or creds in their conf files. >> >> Any idea how to tackle this? >> >> >> >> This email and any attachments are intended solely for the use of the named recipients. If you are not the intended >> recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the >> sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every >> reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the >> recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any >> liability for any losses or damages which the recipient may sustain due to presence of any viruses. Opinions, >> conclusions or other information in this message and attachments that are not related directly to UKRI business are >> solely those of the author and do not represent the views of UKRI. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Jun 18 11:57:35 2020 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 18 Jun 2020 06:57:35 -0500 Subject: no-cache-dir patch bomb Message-ID: <1344B080-D1AA-4FB5-A27F-565F6B5FA5D5@inaugust.com> Hi all! We’ve apparently just gotten a bazillion patches uploaded that do a global search and replace to add --no-cache-dir to all of the pip commands it can find.
I have a couple of issues with this and think people should be very careful about landing them a) --no-cache-dir is NOT universally a good idea, and in many cases for interactive install using it will be a worse user experience. The linked post is specifically about reducing docker image size. b) Importantly, it’s updating all of the end-user installation docs. That is highly inappropriate and makes our docs much more user unfriendly, especially given (a) c) In some install scripts we’re going to run them potentially in devstack. While constraints SHOULD prevent a large amount of reinstalling of different versions of things over and over, we all know reality is messier. I think we want to think carefully about removing cache dir usage in devstack as it could actually lead to an increase in install time, and optimizing for docker image size is not a thing we’re attempting to do in functional tests. Regarding the docker image size optimization, for those who are building docker images, I would highly recommend either installing from distro packages or using the builder-image pattern instead. We have one implemented in opendevorg/python-builder and opendevorg/python-base - but the pattern can be implemented elsewhere. The main idea/benefit is to use a builder image to install dev tools and -dev packages with headers and things needed to build wheels, build wheels for all of the dependencies - then in a second build stage copy the wheels from the builder image and install them. We’re using a “compile” tag in bindep to indicate what packages are needed for compile-time tasks. The savings from this can be substantial - obviously at the cost of greater complexity. In any case, I HIGHLY recommend not landing any of these patches as submitted.
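To make the builder-image pattern concrete, here is a minimal two-stage sketch; the base image tags, apt package names, and file paths are illustrative assumptions, not taken from opendevorg/python-builder itself:

```dockerfile
# Hypothetical two-stage sketch of the builder-image pattern described
# above; image tags, package names, and paths are illustrative.

# Stage 1: install compilers and -dev headers, then build wheels for
# every dependency so the final image never needs a toolchain.
FROM python:3.8-slim AS builder
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential libffi-dev libssl-dev && \
    rm -rf /var/lib/apt/lists/*
COPY requirements.txt /tmp/requirements.txt
RUN pip wheel --wheel-dir /wheels -r /tmp/requirements.txt

# Stage 2: copy only the built wheels and install from them; no build
# tools, headers, or pip cache end up in the final image.
FROM python:3.8-slim
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/*.whl && rm -rf /wheels
```

Note that --no-cache-dir genuinely pays off in the final stage here, because the goal is image size; that is exactly the context the flag was designed for, unlike the interactive installs discussed above.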
Monty From aj at suse.com Thu Jun 18 12:11:51 2020 From: aj at suse.com (Andreas Jaeger) Date: Thu, 18 Jun 2020 14:11:51 +0200 Subject: no-cache-dir patch bomb In-Reply-To: <1344B080-D1AA-4FB5-A27F-565F6B5FA5D5@inaugust.com> References: <1344B080-D1AA-4FB5-A27F-565F6B5FA5D5@inaugust.com> Message-ID: On 18.06.20 13:57, Monty Taylor wrote: > [...] > In any case, I HIGHLY recommend not landing any of these patches as submitted. I agree and suggest abandoning them directly, Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From skaplons at redhat.com Thu Jun 18 12:53:17 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 18 Jun 2020 14:53:17 +0200 Subject: [Neutron] Drivers meeting agenda - 19.06.2020 Message-ID: <20200618125317.b232eciomg7bldzz@skaplons-mac> Hi, Here is the agenda for tomorrow's drivers meeting. I want to discuss a few RFEs there: - https://bugs.launchpad.net/neutron/+bug/1880532 - L3 Router should support ECMP - this is a continuation of the discussion from the previous meeting. There is also a spec proposed for that one https://review.opendev.org/#/c/729532/ - https://bugs.launchpad.net/neutron/+bug/1751040 - Extended resources should not always calculate all attributes - an old RFE but we should discuss that finally, - https://bugs.launchpad.net/neutron/+bug/1882804 - allow replacing the QoS policy of bound port - this one is a follow-up to the nova-neutron discussion during the PTG. TBH I don't think we will need a lot of discussion about that but it still needs approval. There are also 2 items in "On demand agenda": - (ralonsoh) [RFE] Optional NUMA affinity for neutron ports (commented in the PTG and scheduled to be commented here before). BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1791834.
No LP or BP created, this is just to present and discuss the idea of **using or not the QoS API** for this RFE. - (slaweq) We are deprecating neutron-fwaas. What with the API-REF? It's question from the ML http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015508.html As agenda is pretty busy, I would like to start with On Demand agenda tomorrow and then continue with RFEs according to how much time we will have. See You on the meeting tomorrow. -- Slawek Kaplonski Senior software engineer Red Hat From sean.mcginnis at gmx.com Thu Jun 18 13:09:40 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 18 Jun 2020 08:09:40 -0500 Subject: mox3 project retirement Message-ID: <96a8da3e-89b8-b855-12b3-dc2b4e707d38@gmx.com> Hey everyone, Way back in Rocky, we had a cycle goal to stop using mox/mox3: https://governance.openstack.org/tc/goals/selected/rocky/mox_removal.html The mox library itself is from the broader community, but mox3 specifically was a fork we were maintaining of that lib to add Python 3.x support. Not a lot has been happening there (or luckily needed either), but it is still a repo we have had to maintain while we were transitioning away from mox to mock. There were still a few repos that had mox3 in their requirements files, even though I think in most cases the actual use was dropped awhile ago. With these recent set of patches to clean that up, and remove it from global requirements, we now no longer have any repos in the openstack/* namespace referencing mox3: https://review.opendev.org/#/q/topic:remove-mox3+(status:open+OR+status:merged) This is to announce my intent to officially retire the mox3 repo and clean things up. Of course the lib is still out there in pypi, and can be used by others. It is also possible for the repo to be restored or forked elsewhere if someone not in openstack/* has a strong need for it. But at least within the OpenStack community, I think our time is better spent elsewhere. 
https://docs.openstack.org/mox3/latest/ https://opendev.org/openstack/mox3 Please raise any concerns here. Thanks, Sean From openstack at nemebean.com Thu Jun 18 13:59:26 2020 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 18 Jun 2020 08:59:26 -0500 Subject: mox3 project retirement In-Reply-To: <96a8da3e-89b8-b855-12b3-dc2b4e707d38@gmx.com> References: <96a8da3e-89b8-b855-12b3-dc2b4e707d38@gmx.com> Message-ID: <2fc471b9-f328-76cb-8cc8-78ce2f37e9d1@nemebean.com> \o/ I have no further thoughts on the matter. :-) On 6/18/20 8:09 AM, Sean McGinnis wrote: > Hey everyone, > > Way back in Rocky, we had a cycle goal to stop using mox/mox3: > > https://governance.openstack.org/tc/goals/selected/rocky/mox_removal.html > > The mox library itself is from the broader community, but mox3 > specifically was a fork we were maintaining of that lib to add Python > 3.x support. Not a lot has been happening there (or luckily needed > either), but it is still a repo we have had to maintain while we were > transitioning away from mox to mock. > > There were still a few repos that had mox3 in their requirements files, > even though I think in most cases the actual use was dropped awhile ago. > With these recent set of patches to clean that up, and remove it from > global requirements, we now no longer have any repos in the openstack/* > namespace referencing mox3: > > https://review.opendev.org/#/q/topic:remove-mox3+(status:open+OR+status:merged) > > > This is to announce my intent to officially retire the mox3 repo and > clean things up. Of course the lib is still out there in pypi, and can > be used by others. It is also possible for the repo to be restored or > forked elsewhere if someone not in openstack/* has a strong need for it. > But at least within the OpenStack community, I think our time is better > spent elsewhere. > > https://docs.openstack.org/mox3/latest/ > > https://opendev.org/openstack/mox3 > > Please raise any concerns here. 
> > Thanks, > > Sean > > From ildiko.vancsa at gmail.com Thu Jun 18 14:23:29 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 18 Jun 2020 16:23:29 +0200 Subject: [edge] Edge summaries from the virtual PTG Message-ID: <92CCDBD3-EE6E-4D28-AE0B-CC1C418E1A5B@gmail.com> Hi, It was great seeing many of you at the virtual PTG two weeks ago. I wrote up two blog posts to summarize edge related sessions at the event: * Edge Computing Group recap: https://superuser.openstack.org/articles/osf-edge-computing-group-ptg-overview/ * StarlingX recap: https://www.starlingx.io/blog/starlingx-vptg-june-2020-recap/ The articles have pointers to the etherpads we used at the event and include further pointers as well in case you would like to follow up on either discussion item or missed the event and would like to get a somewhat detailed view of what was discussed. Thanks and Best Regards, Ildikó From tobias.urdin at binero.com Thu Jun 18 14:55:45 2020 From: tobias.urdin at binero.com (Tobias Urdin) Date: Thu, 18 Jun 2020 14:55:45 +0000 Subject: [neutron][neutron-dynamic-routing] Call for maintainers In-Reply-To: <20200616194437.ejoe73t7ahsbzxhe@skaplons-mac> References: <20200616194437.ejoe73t7ahsbzxhe@skaplons-mac> Message-ID: <1592492145682.7834@binero.com> Hello Slawek, We are users of neutron-dynamic-routing. We would like to see the project continuing, we have very limited resources but I will check with our manager to see if we can help with this. Hopefully others will step up as well so we can share the work. Best regards Tobias ________________________________________ From: Slawek Kaplonski Sent: Tuesday, June 16, 2020 9:44 PM To: OpenStack Discuss ML Subject: [neutron][neutron-dynamic-routing] Call for maintainers Hi, During last, virtual PTG we discussed about health of neutron stadium projects (again). And it seems that neutron-dynamic-routing project is slowly going to be in similar state as neutron-fwaas in Ussuri. 
So there are basically no active maintainers of this project in our community. In the Ussuri cycle there was Ryan Tidwell from SUSE who was taking care of this project but AFAIK he is not able to do that anymore in Victoria cycle. So, if You are using this project or are interested in it, please contact me by email or on IRC that You want to take care of it. Usually this doesn't mean a lot of work every day. But we need someone who we can ask for help e.g. when the gate is broken or when there is some new bug reported and there is need to triage it. -- Slawek Kaplonski Senior software engineer Red Hat From amotoki at gmail.com Thu Jun 18 14:55:39 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 18 Jun 2020 23:55:39 +0900 Subject: [neutron][fwaas] Removal of neutron-fwaas projects from the neutron stadium In-Reply-To: <81303bec-2f13-f926-345a-b8ecd868bcf1@sap.com> References: <20200616104606.mndzm2bgkzchq645@skaplons-mac> <81303bec-2f13-f926-345a-b8ecd868bcf1@sap.com> Message-ID: On Thu, Jun 18, 2020 at 5:19 PM Johannes Kulik wrote: > > > > On 6/16/20 12:46 PM, Slawek Kaplonski wrote: > > Hi, > > > > In Shanghai PTG we agreed that due to lack of maintainers of neutron-fwaas > > project we are going to deprecate it in neutron stadium in Ussuri cycle. > > Since then we asked a couple of times about volunteers who would like to maintain > > this project but unfortunately there is still lack of such maintainers. > > So now, as we are already in Victoria cycle, I just proposed a series of patches > > [1] to remove master branch of neutron-fwaas and neutron-fwaas-dashboard from > > the neutron stadium. Stable branches will be still there and can be maintained > > but there will be no code in the master branch and there will be no new releases > > of those 2 projects in Victoria. > > > > If You are using this project and want to maintain it, You can respin it in x/
> > > > Feel free to ping me on IRC (slaweq) or by email if You would have any questions > > about that. > > > > [1] https://review.opendev.org/#/q/topic:retire-neutron-fwaas+(status:open+OR+status:merged) > > > > Hi, > > with neutron-fwaas gone, what will happen to the APIs for fwaas? Will > they get deprecated, too, at some point in time? > > We have a plan to use those APIs in our custom ml2/l3 drivers. Would > that still make sense with neutron-fwaas out of the picture? neutron-fwaas removal (and deprecation in Ussuri release) applies all stuffs related to neutron-fwaas including the API and the database plugin layer which provides driver interface. It is not specific to neutron-fwaas reference implementation. Even for API and DB layer, we cannot guarantee that the API and DB layer work well without a working implementation. We the neutron team called for the maintainer for neutron-fwaas several times during Ussuri cycle but nobody volunteered for it, and we believe we had enough time to communicate with the community. Thanks, Akihiro Motoki (irc: amotoki) > > Have a nice day, > Johannes > > -- > Johannes Kulik > IT Architecture Senior Specialist > *SAP SE *| Rosenthaler Str. 30 | 10178 Berlin | Germany > > From elmiko at redhat.com Thu Jun 18 16:33:53 2020 From: elmiko at redhat.com (Michael McCune) Date: Thu, 18 Jun 2020 12:33:53 -0400 Subject: [all][api] Service discovery guidelines moving into frozen state Message-ID: hello all, this is a general advisory from the API SIG to inform the wider community that we are moving 2 guidelines[0][1] into a frozen state for final review. You can learn more about the API SIG guideline process in this document[2]. These guidelines have been in the drafting stage for quite a long time and it is with much exuberance that i send this message. I would also like to give special thanks to Monty Taylor for his continued guidance and oversight. 
if you are listed as a cross project liaison for the API SIG, you have automatically been added to the reviewer list for these merge requests. these guidelines will be merged in 1 week if there are no objections on the reviews. peace o/ [0] https://review.opendev.org/#/c/459710 [1] https://review.opendev.org/#/c/459405 [2] https://specs.openstack.org/openstack/api-sig/process.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsbiz at yahoo.com Thu Jun 18 18:21:11 2020 From: fsbiz at yahoo.com (fsbiz at yahoo.com) Date: Thu, 18 Jun 2020 18:21:11 +0000 (UTC) Subject: [ironic][nova-compute][placement]: Conflicting resource provider name errors In-Reply-To: <1592492145682.7834@binero.com> References: <20200616194437.ejoe73t7ahsbzxhe@skaplons-mac> <1592492145682.7834@binero.com> Message-ID: <37538868.307990.1592504471707@mail.yahoo.com> Hi folks, Every time I get an ironic switchover I end up with a few resource provider errors as follows. 2020-06-10 05:11:28.129 75837 INFO nova.compute.resource_tracker [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] Compute node record created for sc-ironic06.nvc.nvidia.com:7c95f255-3c54-46ab-87cf-0b1707971e9c with uuid: 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e 2020-06-10 05:11:28.502 75837 ERROR nova.scheduler.client.report [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] [req-d8af2589-8c75-427c-a7b1-d5270840a4c8] Failed to create resource provider record in placement API for UUID 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e. Got 409: {"errors": [{"status": 409, "request_id": "req-d8af2589-8c75-427c-a7b1-d5270840a4c8", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: 7c95f255-3c54-46ab-87cf-0b1707971e9c already exists.  ", "title": "Conflict"}]}. 
2020-06-10 05:12:49.463 75837 ERROR nova.scheduler.client.report [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] [req-ffd13abc-08f3-47cd-a224-a07183b066ec] Failed to create resource provider record in placement API for UUID 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e. Got 409: {"errors": [{"status": 409, "request_id": "req-ffd13abc-08f3-47cd-a224-a07183b066ec", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: 7c95f255-3c54-46ab-87cf-0b1707971e9c already exists.  ", "title": "Conflict"}]}. So far the only way that works for me to fix these is to un-enroll and then re-enroll the node. Is there a simpler way to fix this? Thanks,Fred. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Jun 18 18:49:13 2020 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 18 Jun 2020 14:49:13 -0400 Subject: [Release-job-failures] Release of openstack/puppet-nova for ref refs/tags/15.6.0 failed In-Reply-To: References: Message-ID: Interesting. The tag was pushed but upload-git-mirror role failed: https://22ce6cebe995c8915d66-868b7eb6ba44b32e3b412bc5acf60e73.ssl.cf5.rackcdn.com/de51e32e9ff5ffedb6b66e6fcbc4a2ec9d4d5ddc/release/openstack-upload-github-mirror/fd8c86f/job-output.txt Is this a known issue? On Thu, Jun 18, 2020 at 2:23 PM wrote: > Build failed. 
> > - openstack-upload-github-mirror > https://zuul.opendev.org/t/openstack/build/fd8c86f9b57947239bd0273424e1d4d5 > : FAILURE in 2m 30s > - release-openstack-puppet > https://zuul.opendev.org/t/openstack/build/05dd9c7ffdc74af5ae4065eb16a522f9 > : SUCCESS in 5m 40s > - announce-release > https://zuul.opendev.org/t/openstack/build/188851ebacf9455ab50edc2101ae02bc > : SUCCESS in 5m 31s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Jun 18 18:51:59 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 18 Jun 2020 11:51:59 -0700 Subject: =?UTF-8?Q?Re:_[Release-job-failures]_Release_of_openstack/puppet-nova_fo?= =?UTF-8?Q?r_ref_refs/tags/15.6.0_failed?= In-Reply-To: References: Message-ID: <35105d51-7d4b-4eee-820f-03e0804e4240@www.fastmail.com> On Thu, Jun 18, 2020, at 11:49 AM, Emilien Macchi wrote: > Interesting. The tag was pushed but upload-git-mirror role failed: > https://22ce6cebe995c8915d66-868b7eb6ba44b32e3b412bc5acf60e73.ssl.cf5.rackcdn.com/de51e32e9ff5ffedb6b66e6fcbc4a2ec9d4d5ddc/release/openstack-upload-github-mirror/fd8c86f/job-output.txt > > Is this a known issue? Was this a re-enqueue? It failed because the tag already existed on the remote: 2020-06-18 18:07:39.509620 | localhost | ! [remote rejected] 15.6.0 -> 15.6.0 (cannot lock ref 'refs/tags/15.6.0': reference already exists) If the content on github looks up to date and correct then I don't think there is anything else to do here. > > On Thu, Jun 18, 2020 at 2:23 PM wrote: > > Build failed. 
> > > > - openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/fd8c86f9b57947239bd0273424e1d4d5 : FAILURE in 2m 30s > > - release-openstack-puppet https://zuul.opendev.org/t/openstack/build/05dd9c7ffdc74af5ae4065eb16a522f9 : SUCCESS in 5m 40s > > - announce-release https://zuul.opendev.org/t/openstack/build/188851ebacf9455ab50edc2101ae02bc : SUCCESS in 5m 31s > > > > _______________________________________________ > > Release-job-failures mailing list > > Release-job-failures at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > > > -- > Emilien Macchi From sean.mcginnis at gmx.com Thu Jun 18 18:55:14 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 18 Jun 2020 13:55:14 -0500 Subject: [Release-job-failures] Release of openstack/puppet-nova for ref refs/tags/15.6.0 failed In-Reply-To: <35105d51-7d4b-4eee-820f-03e0804e4240@www.fastmail.com> References: <35105d51-7d4b-4eee-820f-03e0804e4240@www.fastmail.com> Message-ID: <563b4b9b-4273-20aa-e0a1-a21f8c42fa7d@gmx.com> >> Interesting. The tag was pushed but upload-git-mirror role failed: >> https://22ce6cebe995c8915d66-868b7eb6ba44b32e3b412bc5acf60e73.ssl.cf5.rackcdn.com/de51e32e9ff5ffedb6b66e6fcbc4a2ec9d4d5ddc/release/openstack-upload-github-mirror/fd8c86f/job-output.txt >> >> Is this a known issue? > Was this a re-enqueue? It failed because the tag already existed on the remote: > > 2020-06-18 18:07:39.509620 | localhost | ! [remote rejected] 15.6.0 -> 15.6.0 (cannot lock ref 'refs/tags/15.6.0': reference already exists) > > If the content on github looks up to date and correct then I don't think there is anything else to do here. I don't think this was a reenqueue. Do we have another process that syncs things to GitHub? Maybe a race between the release pushing the mirror update and something else sneaking in and syncing it first? Either way, looks like things are up to date on GitHub and nothing else to do here. 
Sean From cboylan at sapwetik.org Thu Jun 18 19:21:35 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 18 Jun 2020 12:21:35 -0700 Subject: =?UTF-8?Q?Re:_[Release-job-failures]_Release_of_openstack/puppet-nova_fo?= =?UTF-8?Q?r_ref_refs/tags/15.6.0_failed?= In-Reply-To: <563b4b9b-4273-20aa-e0a1-a21f8c42fa7d@gmx.com> References: <35105d51-7d4b-4eee-820f-03e0804e4240@www.fastmail.com> <563b4b9b-4273-20aa-e0a1-a21f8c42fa7d@gmx.com> Message-ID: <7a3a5048-8349-4109-a60a-9d3199dc268c@www.fastmail.com> On Thu, Jun 18, 2020, at 11:55 AM, Sean McGinnis wrote: > > >> Interesting. The tag was pushed but upload-git-mirror role failed: > >> https://22ce6cebe995c8915d66-868b7eb6ba44b32e3b412bc5acf60e73.ssl.cf5.rackcdn.com/de51e32e9ff5ffedb6b66e6fcbc4a2ec9d4d5ddc/release/openstack-upload-github-mirror/fd8c86f/job-output.txt > >> > >> Is this a known issue? > > Was this a re-enqueue? It failed because the tag already existed on the remote: > > > > 2020-06-18 18:07:39.509620 | localhost | ! [remote rejected] 15.6.0 -> 15.6.0 (cannot lock ref 'refs/tags/15.6.0': reference already exists) > > > > If the content on github looks up to date and correct then I don't think there is anything else to do here. > > I don't think this was a reenqueue. Do we have another process that > syncs things to GitHub? Maybe a race between the release pushing the > mirror update and something else sneaking in and syncing it first? > > Either way, looks like things are up to date on GitHub and nothing else > to do here. > > Sean There were two github mirror jobs for puppet-nova running at roughly the same time. One succeeded (and pushed the tag) and the other failed. SUCCESS: https://zuul.openstack.org/build/1f363ade2d8d4f418db86ae33d2c3363/log/job-output.txt#80 FAILURE: https://zuul.openstack.org/build/fd8c86f9b57947239bd0273424e1d4d5/log/job-output.txt#80 This looks like a race between the job for the 16.4.0 and 15.6.0 tags. 
We could update the job to check for this particular failure case and treat it as a success assuming something else has merged. We could serialize these jobs between different events on the same repo. We can also just ignore it for now since everything is happy :) From sean.mcginnis at gmx.com Thu Jun 18 20:15:17 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 18 Jun 2020 15:15:17 -0500 Subject: [Release-job-failures] Release of openstack/puppet-nova for ref refs/tags/15.6.0 failed In-Reply-To: <7a3a5048-8349-4109-a60a-9d3199dc268c@www.fastmail.com> References: <35105d51-7d4b-4eee-820f-03e0804e4240@www.fastmail.com> <563b4b9b-4273-20aa-e0a1-a21f8c42fa7d@gmx.com> <7a3a5048-8349-4109-a60a-9d3199dc268c@www.fastmail.com> Message-ID: <6767f32e-29a9-8f90-2fb4-f6cb2145ff49@gmx.com> >>> Was this a re-enqueue? It failed because the tag already existed on the remote: >>> >>> 2020-06-18 18:07:39.509620 | localhost | ! [remote rejected] 15.6.0 -> 15.6.0 (cannot lock ref 'refs/tags/15.6.0': reference already exists) >>> >>> If the content on github looks up to date and correct then I don't think there is anything else to do here. >> I don't think this was a reenqueue. Do we have another process that >> syncs things to GitHub? Maybe a race between the release pushing the >> mirror update and something else sneaking in and syncing it first? >> >> Either way, looks like things are up to date on GitHub and nothing else >> to do here. >> >> Sean > There were two github mirror jobs for puppet-nova running at roughly the same time. One succeeded (and pushed the tag) and the other failed. > > SUCCESS: https://zuul.openstack.org/build/1f363ade2d8d4f418db86ae33d2c3363/log/job-output.txt#80 > FAILURE: https://zuul.openstack.org/build/fd8c86f9b57947239bd0273424e1d4d5/log/job-output.txt#80 > > This looks like a race between the job for the 16.4.0 and 15.6.0 tags. 
> > We could update the job to check for this particular failure case and treat it as a success assuming something else has merged. We could serialize these jobs between different events on the same repo. We can also just ignore it for now since everything is happy :) That would make sense I think. We were trying to update GitHub, but it responds telling us it already has the update. So if we can detect that is the error and just move along, all's good in the end. From skaplons at redhat.com Thu Jun 18 20:38:36 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 18 Jun 2020 22:38:36 +0200 Subject: [neutron][neutron-dynamic-routing] Call for maintainers In-Reply-To: <1592492145682.7834@binero.com> References: <20200616194437.ejoe73t7ahsbzxhe@skaplons-mac> <1592492145682.7834@binero.com> Message-ID: <20200618203836.l23sfnf2oimxelbe@skaplons-mac> Hi, Thanks a lot. That's great news. You can sync with Ryan about details but TBH I don't think it requires a lot of additional work to keep it running. On Thu, Jun 18, 2020 at 02:55:45PM +0000, Tobias Urdin wrote: > Hello Slawek, > We are users of neutron-dynamic-routing. > > We would like to see the project continuing, we have very limited resources > but I will check with our manager to see if we can help with this. > > Hopefully others will step up as well so we can share the work. > > Best regards > Tobias > > ________________________________________ > From: Slawek Kaplonski > Sent: Tuesday, June 16, 2020 9:44 PM > To: OpenStack Discuss ML > Subject: [neutron][neutron-dynamic-routing] Call for maintainers > > Hi, > > During last, virtual PTG we discussed about health of neutron stadium projects > (again). And it seems that neutron-dynamic-routing project is slowly going to be > in similar state as neutron-fwaas in Ussuri. So there is basically no > active maintainers of this project in our community. 
> In Ussuri cycle there was Ryan Tidwell from SUSE who was taking care of this > project but AFAIK he is not able to do that anymore in Victoria cycle. > > So, if You are using this project or are interested in it, please contact me by > email or on IRC that You want to take care of it. > Usually this don't means like a lot of work every day. But we need someone who > we can ask for help e.g. when gate is broken or when there is some new bug > reported and there is need to triage it. > > -- > Slawek Kaplonski > Senior software engineer > Red Hat > > > > -- Slawek Kaplonski Senior software engineer Red Hat From rosmaita.fossdev at gmail.com Thu Jun 18 21:19:36 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 18 Jun 2020 17:19:36 -0400 Subject: [OSSN-0086] erratum: Dell EMC ScaleIO/VxFlex OS Backend Credentials Exposure Message-ID: As you may recall, the fix for this issue required patches for both Cinder and the os-brick library. The original patch for os-brick contained a flaw [0] that prevented the scaleio connector from operating when run under Python 2.7. Thus for OpenStack releases supporting Python 2.7 (that is, Train and earlier), a second os-brick patch is required and is listed below. (The Cinder and first os-brick patch are unchanged, but are listed below for completeness). 
[0] https://bugs.launchpad.net/os-brick/+bug/1883654

#### Patches ####

Queens
* cinder: https://review.opendev.org/733110
* os-brick: https://review.opendev.org/733104 and https://review.opendev.org/736749

Rocky
* cinder: https://review.opendev.org/733109
* os-brick: https://review.opendev.org/733103 and https://review.opendev.org/736415

Stein
* cinder: https://review.opendev.org/733108
* os-brick: https://review.opendev.org/733102 and https://review.opendev.org/736395

Train
* cinder: https://review.opendev.org/733107
* os-brick: https://review.opendev.org/733100 and https://review.opendev.org/735989

Updated releases of os-brick incorporating the second patch are now available:

Stein: os-brick 2.8.6
Train: os-brick 2.10.4

Point releases of cinder for Stein and Train will be made as soon as possible.
These will be:

Stein: cinder 14.1.1, requires os-brick 2.8.6
Train: cinder 15.2.1, requires os-brick 2.10.4

### Contacts / References ###
Author: Brian Rosmaita, Red Hat
OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0086
Original LaunchPad Bug : https://bugs.launchpad.net/cinder/+bug/1823200
Mailing List : [Security] tag on openstack-discuss at lists.openstack.org
OpenStack Security Project : https://launchpad.net/~openstack-ossg
CVE: CVE-2020-10755

From whayutin at redhat.com  Thu Jun 18 22:01:31 2020
From: whayutin at redhat.com (Wesley Hayutin)
Date: Thu, 18 Jun 2020 16:01:31 -0600
Subject: [tripleo] CI is red
Message-ID:

Greetings,

It's been a week that started with the CentOS mirror outage [0], followed by
breaking changes caused by CentOS-8.2 [.5], and it has not improved much since.

The mirror issues are resolved and the updates required for CentOS-8.2 have
been made; here's the latest issue causing your gate jobs to fail.

tripleo-common and python-tripleoclient became out of sync and started to fail
unit tests. You can see this in the dlrn builds of your changes.
[1] There was also a promotion to train [2] today and we noticed that mirrors were failing on container pulls for a bit (train only). This should resolve over time as the mirrors refresh themselves. Usually the mirrors handle the promotion more elegantly. CI status is updated in the $topic in #tripleo. I update the $topic as needed. Tomorrow is another day.. [0] https://bugs.launchpad.net/tripleo/+bug/1883430 [.5] https://bugs.launchpad.net/tripleo/+bug/1883937 [1] https://bugs.launchpad.net/tripleo/+bug/1884138 http://dashboard-ci.tripleo.org/d/wb8HBhrWk/cockpit?orgId=1&var-launchpad_tags=alert&var-releases=master&var-promotion_names=current-tripleo&var-promotion_names=current-tripleo-rdo&fullscreen&panelId=61 http://status.openstack.org/elastic-recheck/ http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22ERROR:dlrn:Received%20exception%20Error%20in%20build_rpm_wrapper%5C%22%20AND%20tags:%5C%22console%5C%22%20AND%20voting:1&from=864000s https://6f2b36b80678b13e4394-1a783ef91f61ec7475bcc10015912dcc.ssl.cf2.rackcdn.com/736227/1/gate/tripleo-ci-centos-8-standalone/fb33691/logs/failures_file [2] https://trunk.rdoproject.org/centos7-train/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Jun 18 22:03:46 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 18 Jun 2020 16:03:46 -0600 Subject: [tripleo] CI is red In-Reply-To: References: Message-ID: One more thing.. I forgot to mention 3rd party RDO clouds are experiencing problems or outages causing 3rd party jobs to fail as well. Ignore 3rd party check results until I update the list. Thanks On Thu, Jun 18, 2020 at 4:01 PM Wesley Hayutin wrote: > Greetings, > > It's a been a week that started w/ CentOS mirror outage [0], followed by > breaking changes caused by CentOS-8.2 [.5] and has not improved much since. 
> > The mirror issues are resolved, the updates required for CentOS-8.2 have > been made > here's the latest issue causing your gate jobs to fail. > > tripleo-common and python-tripleoclient became out of sync and started to > fail unit tests. You can see this in the dlrn builds of your changes. [1] > > There was also a promotion to train [2] today and we noticed that mirrors > were failing on container pulls for a bit (train only). This should > resolve over time as the mirrors refresh themselves. Usually the mirrors > handle the promotion more elegantly. > > CI status is updated in the $topic in #tripleo. I update the $topic as > needed. > > Tomorrow is another day.. > > > [0] https://bugs.launchpad.net/tripleo/+bug/1883430 > [.5] https://bugs.launchpad.net/tripleo/+bug/1883937 > [1] https://bugs.launchpad.net/tripleo/+bug/1884138 > > > http://dashboard-ci.tripleo.org/d/wb8HBhrWk/cockpit?orgId=1&var-launchpad_tags=alert&var-releases=master&var-promotion_names=current-tripleo&var-promotion_names=current-tripleo-rdo&fullscreen&panelId=61 > > http://status.openstack.org/elastic-recheck/ > > > http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22ERROR:dlrn:Received%20exception%20Error%20in%20build_rpm_wrapper%5C%22%20AND%20tags:%5C%22console%5C%22%20AND%20voting:1&from=864000s > > > https://6f2b36b80678b13e4394-1a783ef91f61ec7475bcc10015912dcc.ssl.cf2.rackcdn.com/736227/1/gate/tripleo-ci-centos-8-standalone/fb33691/logs/failures_file > > > [2] https://trunk.rdoproject.org/centos7-train/ > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Thu Jun 18 22:22:15 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 18 Jun 2020 18:22:15 -0400 Subject: [cinder] impending SPECS FREEZE Message-ID: <483a0fcb-e606-cb95-e5a6-51a5870bc328@gmail.com> To all people with unmerged proposed Cinder specs: The spec freeze is less than 2 weeks away (Wed 1 July at 23:59 UTC). If you have questions or want live discussion of issues, take advantage of next week's Cinder virtual mid-cycle: DATE: 24 JUNE 2020 TIME: 1400-1600 UTC LOCATION: https://bluejeans.com/3228528973 Add your spec to the list of topics: https://etherpad.opendev.org/p/cinder-victoria-mid-cycles All proposed specs were discussed at the Victoria Virtual Midcycle; see the PTG etherpad for comments about yours: https://etherpad.opendev.org/p/victoria-ptg-cinder Additionally, the etherpad has links to recordings of the discussions. cheers, brian From pramchan at yahoo.com Fri Jun 19 02:45:07 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 19 Jun 2020 02:45:07 +0000 (UTC) Subject: [all][InteropWG] Request to review tests your Distro for 2020.06 draft guidelines and meeting updates In-Reply-To: References: <1708160800.349541.1588966443299.ref@mail.yahoo.com> <1708160800.349541.1588966443299@mail.yahoo.com> Message-ID: <1106188404.544205.1592534707230@mail.yahoo.com> Kendall, Appreciate your request and have documented the details in here and merged. https://storyboard.openstack.org/#!/story/2007510 https://storyboard.openstack.org/#!/story/2007509 Asked Ghanshyam or Mark to close this as not sure if  we need to supersede or abandon this.https://blueprints.launchpad.net/refstack/+spec/interop-2020.06 Marketplace partciapents  Regards to Survey questions - https://storyboard.openstack.org/#!/story/2007511 We will be discussing at  Weekly on Friday at 1700 UTC in #openstack-interop (IRC webclient) . 
(Please join us at 10 AM PDT Friday 6/18/20) Other topics - The Content  & Procedure of Survey to be identified and agreed                      - How o send message to Vendors , Distro's, Hosting Partners and other  Market place participants to test for Logo updates for 2020.06 Ussuri cycle                      -  Consistency in meeting times on irc and websites (Update site for logo - https://www.openstack.org/brand/interop/ , update site for irc meeting time - https://refstack.openstack.org/#/about#about ) Some help on current status of Projects to consider for Logo Programs -  Bare Metal Ironic ,  Container Projects , Airship Cloud to StarlingX Edge Interop, kubernetes Conformance for OpenStack & Opne Infra Projects. Thanks PrakashChair / Interop WG                           Co-Chairs (Mark V.Voelker & Ghanshyam Maan)   On Wednesday, June 17, 2020, 12:40:22 PM PDT, Kendall Nelson wrote: Hello Prakash,  I noticed you filed a blueprint in the refstack project[1] about this on launchpad which is no longer being used as they migrated to StoryBoard[2] a while ago. If you can open a story with the information you put in the blueprint, it would be good to keep all the work tracked in a single place.  Thanks! -Kendall (diablo_rojo) [1] https://blueprints.launchpad.net/refstack/+spec/interop-2020.06[2] https://storyboard.openstack.org/#!/project_group/61 On Fri, May 8, 2020 at 12:35 PM prakash RAMCHANDRAN wrote: Hi all, Please review your tests for Draft 2020.06 guidelines to be proposed to Board.You can do that on and should start appearing in next 24-48 hours depending on Zuulhttps://refstack.openstack.org/#/community_results Plus please register for InteropWG  PTG meet on  June 1 st 6AM-*AM PDT slot See etherpadsPTG eventhttps://etherpad.opendev.org/p/interop_virtual_ptg_planning_june_2020Specific invite Tempest, TC members, QA/API SIG teams, Edge SIG WG , Baremetal SIG (for Ironic), K8s SIG (for OoK / KoO) & Scientific SIG teams. 
Other discussions  https://etherpad.opendev.org/p/interop2020 Welcome to join weekly  Friday Meetings (Refer NA/EU/APJ in etherpad below)  https://etherpad.opendev.org/p/interop Appreciate all support from committee and special appreciation to Mark T Voelker for providing the bridge.He has been immensely valuable in bringing as Vice Chair the wealth of history to enable this Interop WG. ThanksPrakash  -------------- next part -------------- An HTML attachment was scrubbed... URL: From josephine.seifert at secustack.com Fri Jun 19 11:46:35 2020 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Fri, 19 Jun 2020 13:46:35 +0200 Subject: [Image-Encryption] current state Message-ID: <3c1ea62f-ee9d-5a69-9399-aabbe287c838@secustack.com> Hi from the image-encryption-popupteam, we would like to provide a summary of what happened during the last year: 1. Secret Consumers in Barbican [1] As a foundation for the image encryption - and to not accidentally delete a secret, which is still in use - the Barbican team implemented the secret consumers. Their is still some work ongoing for the API part. We will use this feature whenever a image will be encrypted. 2. Specs We wrote Specs to describe, what we Image Encryption is and how it would affect Glance, Cinder and Nova. The Cinder spec got merged [2] . The Glance spec is still being reviewed [3]. And the nova spec is abandoned [4]. Nova is currently not part anymore, because of a missing ephemeral storage encryption needed for a coherent security mode. 3. WIP-patches We implemented two WIP-patches to let Glance devs get a better idea of how image encryption is affecting Glance. We provided a patch for Glance [5] and one for os-brick [6], which handles the encryption and decryption of images. 
[1] https://review.opendev.org/#/q/project:openstack/barbican+secret-consumer [2] https://review.opendev.org/#/c/608663/ [3] https://review.opendev.org/#/c/609667/11 [4] https://review.opendev.org/#/c/608696/ [5] https://review.opendev.org/#/c/705445/ [6] https://review.opendev.org/#/c/709432/ We appreciate reviews on the spec and the WIP-patches. greetings Josephine (Luzi) & Markus (mhen) From thierry at openstack.org Fri Jun 19 12:00:48 2020 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 19 Jun 2020 14:00:48 +0200 Subject: [Release-job-failures] Release of openstack/puppet-nova for ref refs/tags/15.6.0 failed In-Reply-To: References: Message-ID: zuul at openstack.org wrote: > Build failed. > > - openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/fd8c86f9b57947239bd0273424e1d4d5 : FAILURE in 2m 30s > - release-openstack-puppet https://zuul.opendev.org/t/openstack/build/05dd9c7ffdc74af5ae4065eb16a522f9 : SUCCESS in 5m 40s > - announce-release https://zuul.opendev.org/t/openstack/build/188851ebacf9455ab50edc2101ae02bc : SUCCESS in 5m 31s Github mirroring failure as a concurrent mirror operation already synced that tag up. Can be safely ignored. -- Thierry Carrez (ttx) From yamamoto at midokura.com Fri Jun 19 12:02:42 2020 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Fri, 19 Jun 2020 21:02:42 +0900 Subject: [Neutron] Drivers meeting agenda - 19.06.2020 In-Reply-To: <20200618125317.b232eciomg7bldzz@skaplons-mac> References: <20200618125317.b232eciomg7bldzz@skaplons-mac> Message-ID: hi, sorry i can't attend today. some inline comments below. On Thu, Jun 18, 2020 at 9:53 PM Slawek Kaplonski wrote: > > Hi, > > Here is the agenda for tomorrows drivers meeting. > I want to discuss few RFEs there: > > - https://bugs.launchpad.net/neutron/+bug/1880532 - L3 Router should support > ECMP - this is continuation of the discussion from previous meeting. 
There is > also spec proposed for that one https://review.opendev.org/#/c/729532/ api-wise, i guess we need to wait for an answer to your question at https://review.opendev.org/#/c/729532/24/specs/ussuri/l3-router-support-ecmp.rst at 68 > > - https://bugs.launchpad.net/neutron/+bug/1751040 - Extended resources should > not always calculate all attributes - old RFE but we should discuss that > finally, > > - https://bugs.launchpad.net/neutron/+bug/1882804 - allow replacing the QoS > policy of bound port - this one is follow up after nova-neutron discussion > during the PTG. TBH I don't think we will need a lot of discussion about that > but still it needs an approval. > > There are also 2 items in "On demand agenda": > > - (ralonsoh) [RFE] Optional NUMA affinity for neutron ports (commented in the > PTG and scheduled to be commented here before). BZ: > https://bugzilla.redhat.com/show_bug.cgi?id=1791834. No LP or BP created, this > is just to present and discuss the idea of **using or not the QoS API** for > this RFE. is this just about having a way to pass necessary info to nova? > > - (slaweq) We are deprecating neutron-fwaas. What with the API-REF? > It's question from the ML > http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015508.html i'm not sure what he is asking. i guess he can maintain the api extension in his plugin if he wants. > > > As agenda is pretty busy, I would like to start with On Demand agenda tomorrow > and then continue with RFEs according to how much time we will have. See You on > the meeting tomorrow. 
> > -- > Slawek Kaplonski > Senior software engineer > Red Hat > From sathlang at redhat.com Fri Jun 19 12:48:10 2020 From: sathlang at redhat.com (Sofer Athlan-Guyot) Date: Fri, 19 Jun 2020 14:48:10 +0200 Subject: [tripleo] Stop using host's /run|/var/run inside containers In-Reply-To: References: <04e5224e-3fe8-0143-2529-6a75d2cbfd2e@redhat.com> Message-ID: <87wo43z0fp.fsf@s390sx.i-did-not-set--mail-host-address--so-tickle-me> Hi, not really a reply, but some random command as the title picked my curiousity which might gives more context. Cédric Jeanneret writes: > On 6/18/20 9:42 AM, Cédric Jeanneret wrote: >> Hello all! >> >> While working on podman integration, especially the SELinux part of it, >> I was wondering why we kept using the host's /run (or its replicated >> /var/run) location inside containers. And I'm still wondering, 2 years >> later ;). >> >> Reasons: >> - from time to time, there are patches adding a ":z" flag to the run >> bind-mount. This breaks the system, since the host systemd can't >> write/access container_file_t SELinux context. Doing a relabeling might >> therefore prevent a service restart. >> >> - in order to keep things in a clean, understandable tree, getting a >> dedicated shared directory for the container's sockets makes sense, as >> it might make things easier to check (for instance, "is this or that >> service running in a container?") >> >> - if an operator runs a restorecon during runtime, it will break >> container services >> >> - mounting /run directly in the containers might expose unwanted >> sockets, such as DBus (this creates SELinux denials, and we're >> monkey-patching things and doing really ugly changes to prevent it). >> It's more than probable other unwanted shared sockets end in the >> containers, and it might expose the host at some point. 
Here again, from >> time to time we see new SELinux policies being added in order to solve >> the denials, and it creates big holes in the host security >> >> AFAIK, no *host* service is accessed by any container services, right? >> If so, could we imagine moving the shared /run to some other location on >> the host, such as /run/containers, or /container-run, or any other >> *dedicated* location we can manage as we want on a SELinux context? > > Small addendum/errata: > > some containers DO need to access some specific sockets/directories in > /run, such as /run/netns and, probably, /run/openvswitch (iirc this one > isn't running in a container). > For those specific cases, we can of course mount the specific locations > inside the container's /run. > > This addendum doesn't change the main question though :) > So I run that command on controller and compute (train ... sorry old version, but the command stands) out of curiousity. Get all the containers that mounts run: for i in $(podman ps --format '{{.Names}}') ; do echo $i; podman inspect $i | jq '.[]|.Mounts[]|.Source + " -> " + .Destination'; done | awk '/^[a-z]/{container=$1}/run/{print container " : " $0}' # controller: swift_proxy : "/run -> /run" ceph-mgr-controller-0 : "/var/run/ceph -> /var/run/ceph" ceph-mon-controller-0 : "/var/run/ceph -> /var/run/ceph" openstack-cinder-backup-podman-0 : "/run -> /run" ovn_controller : "/run -> /run" ovn_controller : "/var/lib/openvswitch/ovn -> /run/ovn" nova_scheduler : "/run -> /run" iscsid : "/run -> /run" ovn-dbs-bundle-podman-0 : "/var/lib/openvswitch/ovn -> /run/openvswitch" ovn-dbs-bundle-podman-0 : "/var/lib/openvswitch/ovn -> /run/ovn" redis-bundle-podman-0 : "/var/run/redis -> /var/run/redis" # compute nova_compute : "/run -> /run" ovn_metadata_agent : "/run/netns -> /run/netns" ovn_metadata_agent : "/run/openvswitch -> /run/openvswitch" ovn_controller : "/run -> /run" ovn_controller : "/var/lib/openvswitch/ovn -> /run/ovn" nova_migration_target : 
"/run/libvirt -> /run/libvirt" iscsid : "/run -> /run" nova_libvirt : "/run -> /run" nova_libvirt : "/var/run/libvirt -> /var/run/libvirt" nova_virtlogd : "/run -> /run" nova_virtlogd : "/var/run/libvirt -> /var/run/libvirt" neutron-haproxy-ovnmeta-a80e1d01-9c65-4fd3-8393-0bf5b66d175e : "/run/netns -> /run/netns" So the usual suspects in this particular example seems to be cinder-backup, iscsid, ceph, swift, redis. Openvswitch seems to do the right thing here. I guess that the nova one must be required somehow. > >> >> I would therefore get some feedback about this proposed change. >> >> For the containers, nothing should change: >> - they will get their /run populated with other containers sockets >> - they will NOT be able to access the host services at all. >> >> Thank you for your feedback, ideas and thoughts! >> >> Cheers, >> >> C. >> > > -- > Cédric Jeanneret (He/Him/His) > Sr. Software Engineer - OpenStack Platform > Deployment Framework TC > Red Hat EMEA > https://www.redhat.com/ > -- Sofer Athlan-Guyot chem on #irc DFG:Upgrades From sean.mcginnis at gmail.com Fri Jun 19 13:12:11 2020 From: sean.mcginnis at gmail.com (Sean McGinnis) Date: Fri, 19 Jun 2020 08:12:11 -0500 Subject: [kolla] Re: [Release-job-failures] Release of openstack/kolla for ref refs/tags/9.1.1 failed In-Reply-To: References: Message-ID: On 6/19/20 5:45 AM, zuul at openstack.org wrote: > Build failed. 
> > - openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/3bb83ca0c7344c7cac47d912a724ea07 : SUCCESS in 1m 03s > - release-openstack-python https://zuul.opendev.org/t/openstack/build/a54166b5f31340b1bdb6f28e4b4a8e61 : SUCCESS in 5m 22s > - announce-release https://zuul.opendev.org/t/openstack/build/5cee678de9b447428ca26b8b3ffd6ee7 : SUCCESS in 4m 07s > - propose-update-constraints https://zuul.opendev.org/t/openstack/build/fb96f3ad6edd43d7a611377542db95bc : SUCCESS in 3m 22s > - kolla-publish-centos-source https://zuul.opendev.org/t/openstack/build/9c7921b0b4a64641974ae616b27f03dc : RETRY_LIMIT in 3m 18s > - kolla-publish-centos-binary https://zuul.opendev.org/t/openstack/build/0388a788ca8a48949c20f48c60b2b7ba : RETRY_LIMIT in 2m 42s (non-voting) > - kolla-publish-centos8-source https://zuul.opendev.org/t/openstack/build/eac8ad3447bc49c1b3d306bffd7f905d : RETRY_LIMIT in 3m 14s > - kolla-publish-centos8-binary https://zuul.opendev.org/t/openstack/build/e68f3f687d3c452c823017a355049f5d : RETRY_LIMIT in 5m 04s (non-voting) > - kolla-publish-debian-source https://zuul.opendev.org/t/openstack/build/4c5a39b30f834bd597b167208c8cf322 : RETRY_LIMIT in 2m 51s (non-voting) > - kolla-publish-debian-source-aarch64 https://zuul.opendev.org/t/openstack/build/None : NODE_FAILURE in 0s (non-voting) > - kolla-publish-debian-binary https://zuul.opendev.org/t/openstack/build/144b9b8109804f8daa23bca8cc4a41d6 : RETRY_LIMIT in 2m 36s (non-voting) > - kolla-publish-ubuntu-source https://zuul.opendev.org/t/openstack/build/23f44f0a4ccb484c81abc76d0a7627d7 : RETRY_LIMIT in 3m 52s > - kolla-publish-ubuntu-binary https://zuul.opendev.org/t/openstack/build/a2ba0db9d974424f9f8424fef14d8e38 : RETRY_LIMIT in 3m 29s (non-voting) I haven't looks at all of these, but it looks like these kolla jobs need to be updated to have the ensure_pip role added like we've had to do for some of the other release jobs. 
My guess would be ensure_pip needs to be added somewhere like here:
https://opendev.org/openstack/kolla/src/branch/master/tests/playbooks/pre.yml#L4

But I think someone from that team will need to look into that.

Sean

From sean.mcginnis at gmx.com  Fri Jun 19 13:24:07 2020
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 19 Jun 2020 08:24:07 -0500
Subject: [release] Release countdown for week R-16, Jun 22 - Jun 26
Message-ID: <20200619132407.GA1358903@sm-workstation>

Development Focus
-----------------

We are now past the victoria-1 milestone. Teams should now be focused on
feature development and completion of release cycle goals [0].

[0] https://governance.openstack.org/tc/goals/selected/victoria/index.html

General Information
-------------------

Our next milestone in this development cycle will be victoria-2, on July 30.
This milestone is when we freeze the list of deliverables that will be
included in the Victoria final release, so if you plan to introduce new
deliverables in this release, please propose a change to add an empty
deliverable file in the deliverables/victoria directory of the
openstack/releases repository.

Now is also generally a good time to look at bugfixes that were introduced in
the master branch that might make sense to be backported and released in a
stable release.

If you have any questions about the OpenStack release process, feel free to
ask on this mailing-list or on the #openstack-release channel on IRC.
Upcoming Deadlines & Dates -------------------------- Victoria-2 milestone: July 30 Ussuri cycle-trailing deadline: August 13 From skaplons at redhat.com Fri Jun 19 15:04:38 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 19 Jun 2020 17:04:38 +0200 Subject: [neutron][fwaas] Removal of neutron-fwaas projects from the neutron stadium In-Reply-To: References: <20200616104606.mndzm2bgkzchq645@skaplons-mac> <81303bec-2f13-f926-345a-b8ecd868bcf1@sap.com> Message-ID: <20200619150438.ojhgl5meps5t6b6p@skaplons-mac> Hi, Thx for this question Johannes and this Akihiro for Your feedback. We were discussion this during our last drivers meeting [1]. We agreed there to keep neutron-fwaas api definition in neutron-lib as an official API. But we will highlight in api-ref that Neutron don't provides any official implementation of this API. That way we can keep support for it e.g. OSC or SDK. [1] http://eavesdrop.openstack.org/meetings/neutron_drivers/2020/neutron_drivers.2020-06-19-14.00.log.html#l-126 On Thu, Jun 18, 2020 at 11:55:39PM +0900, Akihiro Motoki wrote: > On Thu, Jun 18, 2020 at 5:19 PM Johannes Kulik wrote: > > > > > > > > On 6/16/20 12:46 PM, Slawek Kaplonski wrote: > > > Hi, > > > > > > In Shanghai PTG we agreed that due to lack of maintainers of neutron-fwaas > > > project we are going to deprecate it in neutron stadium in Ussuri cycle. > > > Since then we asked couple of times about volunteers who would like to maintain > > > this project but unfortunatelly there is still lack of such maintainers. > > > So now, as we are already in Victoria cycle, I just proposed serie of patches > > > [1] to remove master branch of neutron-fwaas and neutron-fwaas-dashboard from > > > the neutron stadium. Stable branches will be still there and can be maintained > > > but there will be no any code in master branch and there will be no new releases > > > of those 2 projects in Victoria. 
> > > > > > If You are using this project and wants to maintain it, You can respin it in x/ > > > namespace if needed. > > > > > > Feel free to ping me on IRC (slaweq) or by email if You would have any questions > > > about that. > > > > > > [1] https://review.opendev.org/#/q/topic:retire-neutron-fwaas+(status:open+OR+status:merged) > > > > > > > Hi, > > > > with neutron-fwaas gone, what will happen to the APIs for fwaas? Will > > they get deprecated, too, at some point in time? > > > > We have a plan to use those APIs in our custom ml2/l3 drivers. Would > > that still make sense with neutron-fwaas out of the picture? > > neutron-fwaas removal (and deprecation in Ussuri release) applies all > stuffs related to neutron-fwaas > including the API and the database plugin layer which provides driver interface. > It is not specific to neutron-fwaas reference implementation. > Even for API and DB layer, we cannot guarantee that the API and DB > layer work well > without a working implementation. > > We the neutron team called for the maintainer for neutron-fwaas > several times during Ussuri cycle > but nobody volunteered for it, and we believe we had enough time to > communicate with the community. > > Thanks, > Akihiro Motoki (irc: amotoki) > > > > > > Have a nice day, > > Johannes > > > > -- > > Johannes Kulik > > IT Architecture Senior Specialist > > *SAP SE *| Rosenthaler Str. 30 | 10178 Berlin | Germany > > > > > -- Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Fri Jun 19 15:08:38 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 19 Jun 2020 17:08:38 +0200 Subject: [Neutron] Drivers meeting agenda - 19.06.2020 In-Reply-To: References: <20200618125317.b232eciomg7bldzz@skaplons-mac> Message-ID: <20200619150838.sjpoubwzjjyslreq@skaplons-mac> Hi, Thx Takashi for Your comments about it. On Fri, Jun 19, 2020 at 09:02:42PM +0900, Takashi Yamamoto wrote: > hi, > > sorry i can't attend today. > some inline comments below. 
> > On Thu, Jun 18, 2020 at 9:53 PM Slawek Kaplonski wrote: > > > > Hi, > > > > Here is the agenda for tomorrows drivers meeting. > > I want to discuss few RFEs there: > > > > - https://bugs.launchpad.net/neutron/+bug/1880532 - L3 Router should support > > ECMP - this is continuation of the discussion from previous meeting. There is > > also spec proposed for that one https://review.opendev.org/#/c/729532/ > > api-wise, i guess we need to wait for an answer to your question at > https://review.opendev.org/#/c/729532/24/specs/ussuri/l3-router-support-ecmp.rst at 68 We will discuss that on next meeting. > > > > > - https://bugs.launchpad.net/neutron/+bug/1751040 - Extended resources should > > not always calculate all attributes - old RFE but we should discuss that > > finally, > > > > - https://bugs.launchpad.net/neutron/+bug/1882804 - allow replacing the QoS > > policy of bound port - this one is follow up after nova-neutron discussion > > during the PTG. TBH I don't think we will need a lot of discussion about that > > but still it needs an approval. > > > > There are also 2 items in "On demand agenda": > > > > - (ralonsoh) [RFE] Optional NUMA affinity for neutron ports (commented in the > > PTG and scheduled to be commented here before). BZ: > > https://bugzilla.redhat.com/show_bug.cgi?id=1791834. No LP or BP created, this > > is just to present and discuss the idea of **using or not the QoS API** for > > this RFE. > > is this just about having a way to pass necessary info to nova? In general yes. We decided to go with new API. Rodolfo will propose spec for that. > > > > > - (slaweq) We are deprecating neutron-fwaas. What with the API-REF? > > It's question from the ML > > http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015508.html > > i'm not sure what he is asking. > i guess he can maintain the api extension in his plugin if he wants. They have own custom implementation of fwaas API and they want to keep it as official API and to keep e.g. 
support for it in OSC. We agreed to do it like that. > > > > > > > As agenda is pretty busy, I would like to start with On Demand agenda tomorrow > > and then continue with RFEs according to how much time we will have. See You on > > the meeting tomorrow. > > > > -- > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > -- Slawek Kaplonski Senior software engineer Red Hat From kotobi at dkrz.de Fri Jun 19 16:01:57 2020 From: kotobi at dkrz.de (Amjad Kotobi) Date: Fri, 19 Jun 2020 18:01:57 +0200 Subject: [nova][oslo][rabbitmq][compute] Message-ID: <7694116D-D98C-4BE2-86F5-72C9D66C8F90@dkrz.de> Hi all, After trying to upgrade last piece of release Stein to Train which were all “python-oslo*” & “python2-oslo*” packages along with some others, nova computes status changed to “DOWN”, and from nova-compute.log below logs ERROR oslo.messaging._drivers.impl_rabbit [req-de95c598-8e72-46ad-aba6-1b26893ccad7 - - - - -] [739dc403-021e-4bba-a31a-3d86f3d7a908] AMQP server on controller5:5672 is unreachable: Server unexpectedly closed connection. Trying again in 1 seconds.: IOError: Server unexpectedly closed connection 2020-06-19 17:40:20.681 5518 INFO oslo.messaging._drivers.impl_rabbit [req-de95c598-8e72-46ad-aba6-1b26893ccad7 - - - - -] [739dc403-021e-4bba-a31a-3d86f3d7a908] Reconnected to AMQP server on controller5:5672 via [amqp] client with port 32894 This happened every two minutes or so. Connection between rabbitmq servers all looks fine and tested on port accessibility. 
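One way to confirm the reported "every two minutes or so" churn is to measure the gaps between successive "Reconnected to AMQP server" lines in nova-compute.log. A rough sketch only — the log format is inferred from the lines quoted above, and the sample entries here are fabricated:

```python
import re
from datetime import datetime

# Matches oslo.messaging reconnect lines like the ones quoted above.
RECONNECT = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+ \d+ INFO "
    r"oslo\.messaging\._drivers\.impl_rabbit .*Reconnected to AMQP server"
)

def reconnect_intervals(lines):
    """Return seconds between successive AMQP reconnects found in the log."""
    stamps = [
        datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        for m in (RECONNECT.match(line) for line in lines)
        if m
    ]
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

# Two fabricated entries roughly two minutes apart:
log = [
    "2020-06-19 17:40:20.681 5518 INFO oslo.messaging._drivers.impl_rabbit "
    "[req-x - - - - -] [739dc403] Reconnected to AMQP server on "
    "controller5:5672 via [amqp] client with port 32894",
    "2020-06-19 17:42:21.004 5518 INFO oslo.messaging._drivers.impl_rabbit "
    "[req-x - - - - -] [739dc403] Reconnected to AMQP server on "
    "controller5:5672 via [amqp] client with port 33012",
]
print(reconnect_intervals(log))  # → [121.0]
```

Regular short intervals can point at heartbeat timeouts or the broker closing connections rather than a network partition, which would be consistent with the "Server unexpectedly closed connection" error above.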
Logs in rabbitmq got really nasty, and after downgrading the packages below (downgrading to Stein) all got back to normal:

#############################
openvswitch.x86_64
python-openvswitch.x86_64
python-pycadf-common.noarch
python2-automaton.noarch
python2-barbicanclient.noarch
python2-castellan.noarch
python2-cliff.noarch
python2-debtcollector.noarch
python2-django-horizon.noarch
python2-etcd3gw.noarch
python2-eventlet.noarch
python2-futurist.noarch
python2-kombu.noarch
python2-os-brick.noarch
python2-os-client-config.noarch
python2-os-ken.noarch
python2-os-vif.noarch
python2-os-win.noarch
python2-osprofiler.noarch
python2-ovsdbapp.noarch
python2-pycadf.noarch
python2-qpid-proton.x86_64
python2-simplejson.x86_64
python2-stevedore.noarch
python2-swiftclient.noarch
python2-taskflow.noarch
python2-tenacity.noarch
python2-tooz.noarch
qpid-proton-c.x86_64
##########################

I'm not completely sure, but I suspect "kombu". Also, after rabbitmq got into high load with a high failure ratio, "cinder" services went DOWN too. Any idea will be great, thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Jun 19 16:25:22 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 19 Jun 2020 17:25:22 +0100 Subject: [kolla] Re: [Release-job-failures] Release of openstack/kolla for ref refs/tags/9.1.1 failed In-Reply-To: References: Message-ID: On Fri, 19 Jun 2020 at 14:12, Sean McGinnis wrote: > > On 6/19/20 5:45 AM, zuul at openstack.org wrote: > > Build failed.
> > > > - openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/3bb83ca0c7344c7cac47d912a724ea07 : SUCCESS in 1m 03s > > - release-openstack-python https://zuul.opendev.org/t/openstack/build/a54166b5f31340b1bdb6f28e4b4a8e61 : SUCCESS in 5m 22s > > - announce-release https://zuul.opendev.org/t/openstack/build/5cee678de9b447428ca26b8b3ffd6ee7 : SUCCESS in 4m 07s > > - propose-update-constraints https://zuul.opendev.org/t/openstack/build/fb96f3ad6edd43d7a611377542db95bc : SUCCESS in 3m 22s > > - kolla-publish-centos-source https://zuul.opendev.org/t/openstack/build/9c7921b0b4a64641974ae616b27f03dc : RETRY_LIMIT in 3m 18s > > - kolla-publish-centos-binary https://zuul.opendev.org/t/openstack/build/0388a788ca8a48949c20f48c60b2b7ba : RETRY_LIMIT in 2m 42s (non-voting) > > - kolla-publish-centos8-source https://zuul.opendev.org/t/openstack/build/eac8ad3447bc49c1b3d306bffd7f905d : RETRY_LIMIT in 3m 14s > > - kolla-publish-centos8-binary https://zuul.opendev.org/t/openstack/build/e68f3f687d3c452c823017a355049f5d : RETRY_LIMIT in 5m 04s (non-voting) > > - kolla-publish-debian-source https://zuul.opendev.org/t/openstack/build/4c5a39b30f834bd597b167208c8cf322 : RETRY_LIMIT in 2m 51s (non-voting) > > - kolla-publish-debian-source-aarch64 https://zuul.opendev.org/t/openstack/build/None : NODE_FAILURE in 0s (non-voting) > > - kolla-publish-debian-binary https://zuul.opendev.org/t/openstack/build/144b9b8109804f8daa23bca8cc4a41d6 : RETRY_LIMIT in 2m 36s (non-voting) > > - kolla-publish-ubuntu-source https://zuul.opendev.org/t/openstack/build/23f44f0a4ccb484c81abc76d0a7627d7 : RETRY_LIMIT in 3m 52s > > - kolla-publish-ubuntu-binary https://zuul.opendev.org/t/openstack/build/a2ba0db9d974424f9f8424fef14d8e38 : RETRY_LIMIT in 3m 29s (non-voting) > > I haven't looks at all of these, but it looks like these kolla jobs need > to be updated to have the ensure_pip role added like we've had to do for > some of the other release jobs. 
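For reference, wiring such a role into a job's pre-run playbook is usually a small change. A hypothetical sketch only — the role name (`ensure-pip`, from the shared zuul-jobs roles) and the playbook layout are assumed here, not taken from the kolla repository:

```yaml
# Hypothetical Zuul pre-run playbook (e.g. tests/playbooks/pre.yml)
- hosts: all
  roles:
    # ensure-pip ships with zuul-jobs and installs pip/virtualenv
    # before any later tasks assume they are present.
    - ensure-pip
```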
> > My guess would be ensure_pip needs to be added somewhere like here: > > https://opendev.org/openstack/kolla/src/branch/master/tests/playbooks/pre.yml#L4 > > But I think someone from that team will need to look into that. Thanks for sharing this Sean. It has now been addressed, but I don't know if running the job again would work since the tag doesn't include the fix. > > Sean > > From mark at stackhpc.com Fri Jun 19 16:28:30 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 19 Jun 2020 17:28:30 +0100 Subject: [kolla] Kolla Klub meeting In-Reply-To: References: Message-ID: On Tue, 16 Jun 2020 at 11:16, Mark Goddard wrote: > > Hi, > > The next Kolla klub meeting is scheduled for Thursday at 15:00 UTC. > We'll have a summary of the recent PTG discussions and some open > discussion. Please bring ideas for discussion topics! > > https://docs.google.com/document/d/1EwQs2GXF-EvJZamEx9vQAOSDB5tCjsDCJyHQN5_4_Sw/edit# Meeting recording: https://drive.google.com/file/d/1lFO3NUBv22eUTq5oGHC0OYFpBvyhTwRH/view?usp=sharing > > Thanks, > Mark From sean.mcginnis at gmx.com Fri Jun 19 16:53:27 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 19 Jun 2020 11:53:27 -0500 Subject: [kolla] Re: [Release-job-failures] Release of openstack/kolla for ref refs/tags/9.1.1 failed In-Reply-To: References: Message-ID: <8fd44bd1-9329-4fb8-25ab-053a3099081d@gmx.com> >>> - kolla-publish-ubuntu-binary https://zuul.opendev.org/t/openstack/build/a2ba0db9d974424f9f8424fef14d8e38 : RETRY_LIMIT in 3m 29s (non-voting) >> I haven't looks at all of these, but it looks like these kolla jobs need >> to be updated to have the ensure_pip role added like we've had to do for >> some of the other release jobs. >> >> My guess would be ensure_pip needs to be added somewhere like here: >> >> https://opendev.org/openstack/kolla/src/branch/master/tests/playbooks/pre.yml#L4 >> >> But I think someone from that team will need to look into that. > Thanks for sharing this Sean. 
It has now been addressed, but I don't > know if running the job again would work since the tag doesn't include > the fix. > I think you're right Mark. To get everything release correctly, we probably want to do another release including those fixes. From rfolco at redhat.com Fri Jun 19 18:41:52 2020 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 19 Jun 2020 15:41:52 -0300 Subject: [tripleo] TripleO CI Summary: Unified Sprint 28 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 28** (May 28 thru June 17). The following is a summary of completed work during this sprint cycle [1]: - Built internal component and integration pipelines by adding more baremetal and scenario jobs. OVB against PSI still in progress. - Continued to fix promoter tests on CentOS8. - Built an internal promoter server for the internal pipeline promotions. - Added TripleO IPA multinode job to integration pipeline in master, ussuri, and now train - Built CentOS8 train jobs upstream and created the CentOS8 version of promotion pipeline for train release. Still in progress, now based on a componentized DLRN - Tempest skip list and ironic plugin general improvements. - 3rd party dependency jobs for ceph-ansible. - Github pull request zuul jobs e.g. https://review.rdoproject.org/zuul/buildset/5aadebee715d49fc9f030b455a8d8f71 - Build ansible 2.9, ceph-ansible 2.4 - Execute against tripleo master, ussuri, train. - Ruck/Rover recorded notes [2]. The planned work for the next sprint [3] extends the work started in the previous sprint and focuses on creating CentOS8 jobs for train release. The Ruck and Rover for this sprint are Sagi Shnaidman (sshnaidm), Soniya Vyas (soniya29) and Ronelle Landy (rlandy). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in etherpad [4]. 
Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-28 [2] https://hackmd.io/YAqFJrKMThGghTW4P2tabA [3] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-29 [4] https://hackmd.io/XcuH2OIVTMiuxyrqSF6ocw -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Fri Jun 19 18:57:13 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 19 Jun 2020 18:57:13 +0000 (UTC) Subject: Switching the Interop meeting from irc to meetpad + next week Friday 26 UTC 17 - agenda In-Reply-To: <1278363599.827448.1592587636399@mail.yahoo.com> References: <1278363599.827448.1592587636399.ref@mail.yahoo.com> <1278363599.827448.1592587636399@mail.yahoo.com> Message-ID: <1982776231.872085.1592593033164@mail.yahoo.com> Hi all, The irc was unkind to us and cancelled today's meeting of #openstack-interop. Appreciate  for your considerations - lets go back and revive  meetpad from next week and we will be better off as with less than 30 people this option will be better than either zoom or irc, as we do have an "Interop-WG-weekly-meeting" established.   https://meetpad.opendev.org/Interop-WG-weekly-meeting or (https://meetpad.opendev.org).  After click join by entering  - Interop-WG-weekly-meeting - courtesy - Kendall [extra benefit of including etherpad integration (the videoconference doubles as an etherpad document, avoiding jumping between windows).] Agenda for next Friday 26th June  17 UTC / 10 PDT call on meetpad: (If you can solve through email do answer back) (Please update if you need changes - https://etherpad.opendev.org/p/interop) 1.Interop Ussusri Guidelines established - documented the details in here and merged. https://storyboard.openstack.org/#!/story/2007510 https://storyboard.openstack.org/#!/story/2007509 2.  To  supersede or abandon this.https://blueprints.launchpad.net/refstack/+spec/interop-2020.06 - Ghanshyam or Mark - please confirm 3. 
Marketplace participants - regarding the survey questions - https://storyboard.openstack.org/#!/story/2007511
4. The content & procedure of the survey to be identified and agreed
   - How to send the message to vendors, distros, hosting partners and other marketplace participants to test for logo updates for the 2020.06 Ussuri cycle
   - Consistency in meeting times on irc and websites
     - Update site for logo - https://www.openstack.org/brand/interop/
     - Update site for irc meeting time - https://refstack.openstack.org/#/about#about
5. Implementing updates to project governance - https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml
   How to enable the tag assert:supports-api-interoperability (for Core + Add-ons for Integrated Projects):
   tags:
     - tc:approved-release
     - starter-kit:compute
     - vulnerability:managed
     - assert:follows-standard-deprecation
     - assert:supports-upgrade
     - assert:supports-rolling-upgrade
     - assert:supports-accessible-upgrade
     - stable:follows-policy
     - assert:supports-api-interoperability
6. Ideas for new logo programs to propose to the Board, working with the TC
   - Any legal disclaimers or changes required - conformance or validation vs. compliance?
   - Revising the Object Storage logo for Swift compliance (for paid partners)
   - Bare Metal (Ironic)
   - Container projects - Airship Cloud to StarlingX Edge interop
   - Kubernetes conformance for OpenStack & Open Infra projects

Chair / Interop WG (Prakash Ramchandran)
Co-Chairs (Mark V. Voelker & Ghanshyam Maan)
Platinum Partner Rep. from Board (Arkady Kanevsky)

On Friday, June 19, 2020, 10:27:16 AM PDT, prakash RAMCHANDRAN wrote: Hi, All I tried to get into irc and it's way slow, and I am not sure what the nick registration and access issues are for irc. Either you would have started the meeting and continued on the topics listed in the agenda, or I assume it's cancelled.
We will regroup next week, but it looks like we are again coming back to Zoom as the only option, so I will set it up with either OpenStack staff or go back to Dell Zoom as my only option. My apology, all botched up on communication for the last 4 weeks with no real verbal communications. It has been a challenge for me; the previous StarlingX vs Airship meeting went off well but this happened, not sure why, and I am still trying to track the irc issues. Thanks Prakash -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Jun 19 19:05:08 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 19 Jun 2020 13:05:08 -0600 Subject: [tripleo] CI is red In-Reply-To: References: Message-ID: On Thu, Jun 18, 2020 at 4:03 PM Wesley Hayutin wrote: > One more thing.. > > I forgot to mention 3rd party RDO clouds are experiencing problems or > outages causing 3rd party jobs to fail as well. Ignore 3rd party check > results until I update the list. > > Thanks > K.. quick update. Upstream seems to be back to its normal GREENISH status. We're working on a fix for the ipa-server install atm. Third party is still RED, but we think we're close. OVB BMC's updated, we've reduced the load on vexhost and testing out the latest changes in ironic.
This should >> resolve over time as the mirrors refresh themselves. Usually the mirrors >> handle the promotion more elegantly. >> >> CI status is updated in the $topic in #tripleo. I update the $topic as >> needed. >> >> Tomorrow is another day.. >> >> >> [0] https://bugs.launchpad.net/tripleo/+bug/1883430 >> [.5] https://bugs.launchpad.net/tripleo/+bug/1883937 >> [1] https://bugs.launchpad.net/tripleo/+bug/1884138 >> >> >> http://dashboard-ci.tripleo.org/d/wb8HBhrWk/cockpit?orgId=1&var-launchpad_tags=alert&var-releases=master&var-promotion_names=current-tripleo&var-promotion_names=current-tripleo-rdo&fullscreen&panelId=61 >> >> http://status.openstack.org/elastic-recheck/ >> >> >> http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22ERROR:dlrn:Received%20exception%20Error%20in%20build_rpm_wrapper%5C%22%20AND%20tags:%5C%22console%5C%22%20AND%20voting:1&from=864000s >> >> >> https://6f2b36b80678b13e4394-1a783ef91f61ec7475bcc10015912dcc.ssl.cf2.rackcdn.com/736227/1/gate/tripleo-ci-centos-8-standalone/fb33691/logs/failures_file >> >> >> [2] https://trunk.rdoproject.org/centos7-train/ >> >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Fri Jun 19 20:03:30 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 19 Jun 2020 16:03:30 -0400 Subject: [horizon] patternfly? Message-ID: Hi everyone, I was wondering if anyone in the community has explored the idea of implementing PatternFly inside Horizon. It seems like it shares a lot of the similar ideas that we use and we could really benefit from using a common library that already exists with a lot of good UX thought behind it. I know it's based on React which is a bit of a leap from where Horizon is today. However, I'd be curious if the Horizon team is interested in figuring out a plan to make a migration to something like this happen. 
Personally, I think I would be able to provide resources to have someone do this work. However, if this seems like a huge stretch and architecture change where it actually makes more sense to stand this up from scratch (and implement an architecture where the dashboard talks directly to the APIs?), perhaps we should explore that. I also would like to hear more from our extended community too, as I think we really need to improve our user interface experience. Thanks, Mohammed -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Fri Jun 19 20:59:14 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 19 Jun 2020 16:59:14 -0400 Subject: [ironic][nova-compute][placement]: Conflicting resource provider name errors In-Reply-To: <37538868.307990.1592504471707@mail.yahoo.com> References: <20200616194437.ejoe73t7ahsbzxhe@skaplons-mac> <1592492145682.7834@binero.com> <37538868.307990.1592504471707@mail.yahoo.com> Message-ID: Hi Fred: You'll need to update to grab this change: https://review.opendev.org/#/c/675496/ Thanks, Mohammed On Thu, Jun 18, 2020 at 2:29 PM fsbiz at yahoo.com wrote: > > Hi folks, > > Every time I get an ironic switchover I end up with a few resource > provider errors as follows. > > > 2020-06-10 05:11:28.129 75837 INFO nova.compute.resource_tracker > [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] Compute node record > created for sc-ironic06.nvc.nvidia.com:7c95f255-3c5 > 4-46ab-87cf-0b1707971e9c with uuid: 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e > > 2020-06-10 05:11:28.502 75837 ERROR nova.scheduler.client.report > [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] > [req-d8af2589-8c75-427c-a7b1-d5270840a4c8] Failed to create resource > provider record in placement API for UUID > 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e. 
Got 409: {"errors": [{"status": 409, > "request_id": "req-d8af2589-8c75-427c-a7b1-d5270840a4c8", "detail": "There > was a conflict when trying to complete your request.\n\n Conflicting > resource provider name: 7c95f255-3c54-46ab-87cf-0b1707971e9c already > exists. ", "title": "Conflict"}]}. > > 2020-06-10 05:12:49.463 75837 ERROR nova.scheduler.client.report > [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] > [req-ffd13abc-08f3-47cd-a224-a07183b066ec] Failed to create resource > provider record in placement API for UUID > 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e. Got 409: {"errors": [{"status": 409, > "request_id": "req-ffd13abc-08f3-47cd-a224-a07183b066ec", "detail": "There > was a conflict when trying to complete your request.\n\n Conflicting > resource provider name: 7c95f255-3c54-46ab-87cf-0b1707971e9c already > exists. ", "title": "Conflict"}]}. > > > So far the only way that works for me to fix these is to un-enroll and > then re-enroll the node. > > Is there a simpler way to fix this? > > Thanks, > Fred. > > > -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Jun 19 21:18:39 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 19 Jun 2020 15:18:39 -0600 Subject: [tripleo] CI is red In-Reply-To: References: Message-ID: On Fri, Jun 19, 2020 at 1:05 PM Wesley Hayutin wrote: > > > On Thu, Jun 18, 2020 at 4:03 PM Wesley Hayutin > wrote: > >> One more thing.. >> >> I forgot to mention 3rd party RDO clouds are experiencing problems or >> outages causing 3rd party jobs to fail as well. Ignore 3rd party check >> results until I update the list. >> >> Thanks >> > > K.. quick update. > Upstream seems to back to it's normal GREENISH status. We're working on a > fix for the ipa-server install atm. > Third party is still RED, but we think we're close. OVB BMC's updated, > we've reduced the load on vexhost and testing out the latest changes in > ironic. 
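Returning to the placement 409 thread above: short of un-enrolling and re-enrolling the node, a stale provider record can often be inspected and removed with the osc-placement CLI. A hedged sketch — this assumes the osc-placement plugin is installed, and reuses the identifiers from Fred's log purely as an illustration:

```shell
# Locate the provider whose *name* collides with the node UUID
openstack resource provider list --name 7c95f255-3c54-46ab-87cf-0b1707971e9c

# Check it holds no allocations before removing it
openstack resource provider usage show <uuid-from-the-listing>

# Remove the stale record so nova-compute can recreate it cleanly
openstack resource provider delete <uuid-from-the-listing>
```

Note that deleting a provider that still has allocations will itself fail with a conflict, hence the usage check first.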
> > Thanks all, have a good weekend > OK... last update. We're turning off OVB jobs on all non-ci repos and the openstack-virtual-baremetal repo. We're going to try and lower usage of the 3rd party clouds significantly until the jobs run consistently green [1]. Once we have some consistent passes we will start to add it back to various tripleo projects. I'll keep everyone updated. Thanks :) [1] https://review.rdoproject.org/r/28173 > > > >> >> On Thu, Jun 18, 2020 at 4:01 PM Wesley Hayutin >> wrote: >> >>> Greetings, >>> >>> It's a been a week that started w/ CentOS mirror outage [0], followed by >>> breaking changes caused by CentOS-8.2 [.5] and has not improved much since. >>> >>> The mirror issues are resolved, the updates required for CentOS-8.2 have >>> been made >>> here's the latest issue causing your gate jobs to fail. >>> >>> tripleo-common and python-tripleoclient became out of sync and started >>> to fail unit tests. You can see this in the dlrn builds of your changes. >>> [1] >>> >>> There was also a promotion to train [2] today and we noticed that >>> mirrors were failing on container pulls for a bit (train only). This >>> should resolve over time as the mirrors refresh themselves. Usually the >>> mirrors handle the promotion more elegantly. >>> >>> CI status is updated in the $topic in #tripleo. I update the $topic as >>> needed. >>> >>> Tomorrow is another day.. 
>>> >>> >>> [0] https://bugs.launchpad.net/tripleo/+bug/1883430 >>> [.5] https://bugs.launchpad.net/tripleo/+bug/1883937 >>> [1] https://bugs.launchpad.net/tripleo/+bug/1884138 >>> >>> >>> http://dashboard-ci.tripleo.org/d/wb8HBhrWk/cockpit?orgId=1&var-launchpad_tags=alert&var-releases=master&var-promotion_names=current-tripleo&var-promotion_names=current-tripleo-rdo&fullscreen&panelId=61 >>> >>> http://status.openstack.org/elastic-recheck/ >>> >>> >>> http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22ERROR:dlrn:Received%20exception%20Error%20in%20build_rpm_wrapper%5C%22%20AND%20tags:%5C%22console%5C%22%20AND%20voting:1&from=864000s >>> >>> >>> https://6f2b36b80678b13e4394-1a783ef91f61ec7475bcc10015912dcc.ssl.cf2.rackcdn.com/736227/1/gate/tripleo-ci-centos-8-standalone/fb33691/logs/failures_file >>> >>> >>> [2] https://trunk.rdoproject.org/centos7-train/ >>> >>> >>> >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsbiz at yahoo.com Sat Jun 20 00:22:02 2020 From: fsbiz at yahoo.com (fsbiz at yahoo.com) Date: Sat, 20 Jun 2020 00:22:02 +0000 (UTC) Subject: [ironic][nova-compute][placement]: Conflicting resource provider name errors In-Reply-To: References: <20200616194437.ejoe73t7ahsbzxhe@skaplons-mac> <1592492145682.7834@binero.com> <37538868.307990.1592504471707@mail.yahoo.com> Message-ID: <234120345.1000186.1592612522448@mail.yahoo.com> Thanks Mohammed.Seems relevant but the patches were applied only to train, ussuri and master. I'm still on Queens.  Not sure if the patches are relevant to Queens also. Regards,Fred. On Friday, June 19, 2020, 01:59:27 PM PDT, Mohammed Naser wrote: Hi Fred: You'll need to update to grab this change: https://review.opendev.org/#/c/675496/ Thanks,Mohammed On Thu, Jun 18, 2020 at 2:29 PM fsbiz at yahoo.com wrote: Hi folks, Every time I get an ironic switchover I end up with a few resource provider errors as follows. 
2020-06-10 05:11:28.129 75837 INFO nova.compute.resource_tracker [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] Compute node record created for sc-ironic06.nvc.nvidia.com:7c95f255-3c54-46ab-87cf-0b1707971e9c with uuid: 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e 2020-06-10 05:11:28.502 75837 ERROR nova.scheduler.client.report [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] [req-d8af2589-8c75-427c-a7b1-d5270840a4c8] Failed to create resource provider record in placement API for UUID 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e. Got 409: {"errors": [{"status": 409, "request_id": "req-d8af2589-8c75-427c-a7b1-d5270840a4c8", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: 7c95f255-3c54-46ab-87cf-0b1707971e9c already exists.  ", "title": "Conflict"}]}. 2020-06-10 05:12:49.463 75837 ERROR nova.scheduler.client.report [req-eac491b2-dd72-4466-b37e-878dbf40cda5 - - - - -] [req-ffd13abc-08f3-47cd-a224-a07183b066ec] Failed to create resource provider record in placement API for UUID 1b0b74d2-d0d4-4637-b7ea-3adaa58cac2e. Got 409: {"errors": [{"status": 409, "request_id": "req-ffd13abc-08f3-47cd-a224-a07183b066ec", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: 7c95f255-3c54-46ab-87cf-0b1707971e9c already exists.  ", "title": "Conflict"}]}. So far the only way that works for me to fix these is to un-enroll and then re-enroll the node. Is there a simpler way to fix this? Thanks,Fred. -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Sat Jun 20 16:35:38 2020 From: tkajinam at redhat.com (Takashi Kajinami) Date: Sun, 21 Jun 2020 01:35:38 +0900 Subject: [puppet][congress] Retiring puppet-congress Message-ID: Hello, As you know, Congress project has been retired already[1], so we will retire its puppet module, puppet-congress in openstack puppet project as well. 
[1] http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014292.html Because congress was directly retired instead of getting migrated to x namespace, we'll follow the same way about puppet-congress retirement and won't create x/puppet-congress. Thank you for the contribution made for the project ! Please let us know if you have any concerns about this retirement. Thank you, Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Jun 20 21:46:46 2020 From: zigo at debian.org (Thomas Goirand) Date: Sat, 20 Jun 2020 23:46:46 +0200 Subject: [horizon] patternfly? In-Reply-To: References: Message-ID: On 6/19/20 10:03 PM, Mohammed Naser wrote: > Hi everyone, > > I was wondering if anyone in the community has explored the idea of > implementing PatternFly inside Horizon.  It seems like it shares a lot > of the similar ideas that we use and we could really benefit from using > a common library that already exists with a lot of good UX thought > behind it. > > I know it's based on React which is a bit of a leap from where Horizon > is today.  However, I'd be curious if the Horizon team is interested in > figuring out a plan to make a migration to something like this happen. > > Personally, I think I would be able to provide resources to have > someone do this work.  However, if this seems like a huge stretch and > architecture change where it actually makes more sense to stand this up > from scratch (and implement an architecture where the dashboard talks > directly to the APIs?), perhaps we should explore that. > > I also would like to hear more from our extended community too, as I > think we really need to improve our user interface experience. > > Thanks, > Mohammed > > -- > Mohammed Naser > VEXXHOST, Inc. 
Stuff based on npm are *very* difficult to maintain on downstream distributions, because of the way apps are getting dependencies (ie: by the 100s of dependencies, each of them having to be packaged separately as separate libraries). So, before considering any solution for the web, please consider the amount of work necessary to do the packaging. For example, I would *not* have the bandwidth to package 100s of nmp components. These are just general remarks, I don't know any specifics about this particular library (just saw its package.json which is as horrifying (from a package maintainer perspective) as any other npm app...). Cheers, Thomas Goirand (zigo) From pradeepantil at gmail.com Sun Jun 21 12:51:55 2020 From: pra