From Tushar.Patil at nttdata.com Wed Apr 1 00:13:18 2020
From: Tushar.Patil at nttdata.com (Patil, Tushar)
Date: Wed, 1 Apr 2020 00:13:18 +0000
Subject: [tosca-parser][heat-translator][tacker] - Need new version of heat-translator and tosca-parser libraries for Tacker
In-Reply-To: References: Message-ID:

Hi PTL and Core Reviewers,

The final release of non-client libraries is around the corner on 3rd April, and the following patch [2] in heat-translator is not yet merged, which is a must for the tacker project to merge the VNF LCM API feature [1]. We have already received one +2 and need one more. Request core reviewers to please review and approve the patch.

[1]: VNF LCM API based on ETSI NFV-SOL specification: https://review.opendev.org/#/c/591866
[2]: Add support for list data type: https://review.opendev.org/#/c/714026

Thank you.
Regards,
tpatil

________________________________________
From: Patil, Tushar
Sent: Tuesday, March 31, 2020 8:33 AM
To: openstack-discuss at lists.openstack.org
Subject: Re: [tosca-parser][heat-translator][tacker] - Need new version of heat-translator and tosca-parser libraries for Tacker

Hi PTL and Core Reviewers,

Thank you Bob and Jo for reviewing the heat-translator patch. I have uploaded a new PS [1] after addressing the comments given by Jo san. Request you all to please review the newly uploaded PS.

[1]: https://review.opendev.org/#/c/714026

Thank you.
Regards,
Tushar Patil

________________________________________
From: Patil, Tushar
Sent: Thursday, March 26, 2020 7:00 PM
To: openstack-discuss at lists.openstack.org
Subject: [tosca-parser][heat-translator][tacker] - Need new version of heat-translator and tosca-parser libraries for Tacker

Hi PTL and Core Reviewers,

We are working on implementing the ETSI spec in tacker for the Ussuri release, for which we have pushed a couple of patches in heat-translator [1][2] and tosca-parser [3].
Patch [1] is not yet merged, so request core reviewers to please take a look at it. We have uploaded many patches in tacker [4], but most of them are failing on CI as they require changes from heat-translator and tosca-parser. So we badly need a new release of heat-translator (after patch [1] is merged) and tosca-parser. Please let us know when you are planning to release a new version of these libraries.

[1]: https://review.opendev.org/#/q/status:merged+project:openstack/heat-translator+branch:master+topic:etsi_nfv-sol001
[2]: https://review.opendev.org/#/q/status:open+project:openstack/heat-translator+branch:master+topic:bp/support-etsi-nfv-specs
[3]: https://review.opendev.org/#/c/688633
[4]: https://review.opendev.org/#/q/topic:bp/support-etsi-nfv-specs+(status:open+OR+status:merged)

Thank you.
Regards,
tpatil

Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

From gmann at ghanshyammann.com Wed Apr 1 00:37:42 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 31 Mar 2020 19:37:42 -0500
Subject: [all][TC][PTL][election] Nominations Close & Campaigning Begins
In-Reply-To: References: Message-ID: <171332c2258.e47af13f24342.3352020126460654343@ghanshyammann.com>

---- On Tue, 31 Mar 2020 18:46:39 -0500 Kendall Nelson wrote ----
> The PTL and TC Nomination periods are now over. The official candidate lists are available on the election website[0][1].
>
> --PTL Election Details--
>
> There are 16 projects without candidates, so according to this resolution[2], the TC will have to decide how the following projects will proceed: Adjutant Barbican Cloudkitty Congress I18n Infrastructure Loci Masakari Oslo Packaging_Rpm Placement Rally Swift Tacker Tricircle Zaqar

Just curious about the Barbican PTL nomination[1]. Douglas is a returning PTL, and his email id is the same as what we currently have on the governance page[2]. I also saw the Ussuri cycle nomination with the same email id[3].

Did he disable/change his OSF profile, or did some automatic disable happen? Running as PTL, even without voting in elections, should count as being an active member of the foundation.

Or, to rephrase my question: should we extend the active foundation member criteria beyond election voting (like ATC, AUC even without voting in elections)?
[1] https://review.opendev.org/#/c/716316/1
[2] https://governance.openstack.org/tc/reference/projects/barbican.html
[3] https://review.opendev.org/#/c/679448/

-gmann

> There are no projects with more than one candidate, so we won't need to hold any runoffs! Congratulations to our new and returning PTLs! [0]
>
> --TC Election Details--
>
> The official candidate list is available on the election website[1].
>
> There will be a TC election following the campaigning period, which has now begun and runs until Apr 07, 2020 23:45 UTC, when the polling will begin.
>
> You are encouraged to ask questions to the candidates on the ML to help inform your decisions when polling begins.
>
> Thank you,
>
> -Kendall (diablo_rojo) & the Election Officials
>
> [0] https://governance.openstack.org/election/#victoria-ptl-candidates
> [1] https://governance.openstack.org/election/#victoria-tc-candidates
> [2] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html

From fungi at yuggoth.org Wed Apr 1 02:01:07 2020
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 1 Apr 2020 02:01:07 +0000
Subject: [all][TC][PTL][election] Nominations Close & Campaigning Begins
In-Reply-To: <171332c2258.e47af13f24342.3352020126460654343@ghanshyammann.com>
References: <171332c2258.e47af13f24342.3352020126460654343@ghanshyammann.com>
Message-ID: <20200401020107.eo7izh4j7ltavgfd@yuggoth.org>

On 2020-03-31 19:37:42 -0500 (-0500), Ghanshyam Mann wrote:
[...]
> Just curious about the Barbican PTL nomination[1]. Douglas is a
> returning PTL, and his email id is the same as what we currently
> have on the governance page[2]. I also saw the Ussuri cycle
> nomination with the same email id[3].
>
> Did he disable/change his OSF profile, or did some automatic
> disable happen? Running as PTL, even without voting in elections,
> should count as being an active member of the foundation.
> Or, to rephrase my question: should we extend the active
> foundation member criteria beyond election voting (like ATC, AUC
> even without voting in elections)?
[...]

Per The OpenStack Foundation Technical Committee Member Policy (appendix 4 of the Bylaws of the OpenStack Foundation):

[3.a.i] "An ATC is an Individual Member..."

https://www.openstack.org/legal/technical-committee-member-policy/

So we have a legal document currently requiring ATCs to be Individual Members of the OSF in good standing. Changing this would require more than just a decision by the TC itself.

Further, The OpenStack Foundation Individual Member Policy (appendix 1 of the Bylaws of the OpenStack Foundation) states that one of the criteria for removal of an Individual Member is:

[3.iii] "failure to vote in at least 50% of the votes for Individual Members within the prior twenty-four months unless the person does not respond within thirty (30) days of notice of such termination that the person wishes to continue to be an Individual Member."

https://www.openstack.org/legal/individual-member-policy/

I'm not a lawyer, but as I understand it this is actually the *primary* responsibility of an Individual Member, so it probably doesn't make sense to remove it from the bylaws at all.

The one thing which might be easier for the TC to change directly is the requirements for PTL candidates, since the OSF bylaws have nothing to say about team-specific leadership structures. This is instead encoded in the TC Charter:

"Any APC can propose their candidacy for the corresponding project PTL election. Sitting PTLs are eligible to run for re-election each cycle, provided they continue to meet the criteria."

https://governance.openstack.org/tc/reference/charter.html#candidates-for-ptl-seats

[and just before that]

"Voters for a given project's PTL election are the Active Project Contributors ("APC"), which are a subset of the Foundation Individual Members.
Individual Members who committed a change to a repository of a project over the last two 6-month release cycles are considered APC for that project team."

https://governance.openstack.org/tc/reference/charter.html#voters-for-ptl-seats-apc

So the TC could decide that PTL candidates and those who elect PTLs don't have to be OSF Individual Members simply by amending its charter. This doesn't solve the problem for TC candidates and their electorate, but it's a start.

One thing I feel obligated to point out, though, is that making the PTL electorate and candidates not require OSF membership while the TC electorate and candidates do would further complicate our already incredibly complex election tooling, so please take that into consideration.

--
Jeremy Stanley

From skaplons at redhat.com Wed Apr 1 09:34:21 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Wed, 1 Apr 2020 11:34:21 +0200
Subject: [tempest] what is a proper way to install a package into a vm instance spawned by tempest?
In-Reply-To: References: Message-ID: <3CA94BF9-078A-460C-8F00-EA06BBA5DCAF@redhat.com>

Hi,

I forgot about it earlier, but in neutron-tempest-plugin's devstack plugin we have functions to modify the image which will be used for tests. Please check [1]; maybe that will be useful in this case.

[1] https://github.com/openstack/neutron-tempest-plugin/blob/master/devstack/customize_image.sh

> On 24 Mar 2020, at 09:27, Roman Safronov wrote:
>
> Right, it was ubuntu-16.04 rather than ubuntu-18.04, as I said before by mistake.
> As Clark said, it might be a NAT issue on devstack nodes.
>
> On Tue, Mar 24, 2020 at 7:32 AM Lajos Katona wrote:
> Hi,
> Perhaps it's better to look at neutron-tempest-plugin:
> • Your case is more of a networking issue, as I see it.
> • neutron-tempest-plugin has the option to use an image other than cirros via the config option advanced_image_ref, and in the neutron gate that is mostly some ubuntu (ubuntu-16.04 as I see in the latest logs)
> example:
> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_bf5/713719/3/check/neutron-tempest-plugin-scenario-openvswitch/bf5f249/controller/logs/tempest_conf.txt
>
> Lajos
>
> Roman Safronov wrote (on Mon, Mar 23, 2020, 18:55):
> Actually, we need such a test because we are testing OpenStack as a whole product (including Neutron+openvswitch+OVN+Nova+libvirt+Octavia etc.); that's why I think a neutron functional test would not be enough. We are creating tests covering scenarios that our customers tried to use and encountered issues with.
> For example, this is a bug reported downstream on an issue that happened in this scenario: https://bugzilla.redhat.com/show_bug.cgi?id=1707241
> There were more reported issues on similar use cases, and we would like to catch such issues before the product is released.
>
> Anyway, as I said, this specific test runs stably in the downstream CI on virtual multi-node environments with nested virtualization. It usually runs with a RHEL8 image, but I also tried it with the standard Ubuntu-18.04 guest image used by the upstream CI gates. The only issue is that the keepalived package installation by 'apt install' for some reason does not work when running on the upstream gates, causing the test to be skipped. I just would like to understand whether running 'apt install/yum install' inside VMs spawned by upstream tempest tests is not acceptable at all, or whether I am missing something (maybe proxy settings?).
>
> On Mon, Mar 23, 2020 at 5:36 PM Clark Boylan wrote:
> On Sun, Mar 22, 2020, at 9:10 AM, Roman Safronov wrote:
> > Hi,
> >
> > I wrote a tempest test which requires keepalived to be running inside a vm instance.
> > The test tries to install the keepalived package inside the vm by running "apt install/yum install".
> > However, as I see in the upstream gates logs, this does not work, while the
> > same code works perfectly in our downstream CI using the same image.
> >
> > Do vm instances spawned on upstream gates during tests have a route to
> > the internet?
> > Is there an alternative way that I can use in order to install a
> > package?

By default the tempest jobs use a cirros image. This is a very small, quick-to-boot linux without a package manager. If you need services to be running in the image, typically you'll need to boot a more typical linux installation. Keep in mind that nested virt isn't on by default, as it isn't available everywhere and has been flaky in the past. This makes these types of installations very slow, which may make reliable VRRP testing difficult.

Thinking out loud here, I'm not sure how much benefit there is to testing VRRP failures in this manner. Do we think that OpenStack sitting on top of libvirt and OVS will somehow produce different results with VRRP than using those tools as is? To put this another way: are we testing OpenStack, or are we testing OVS and libvirt?

One option here may be to construct this as a Neutron functional test and run VRRP on Neutron networks without virtualization mixed in.

> > Thanks in advance
> >
> > --
> > ROMAN SAFRONOV

—
Slawek Kaplonski
Senior software engineer
Red Hat

From ianyrchoi at gmail.com Wed Apr 1 13:30:40 2020
From: ianyrchoi at gmail.com (Ian Y. Choi)
Date: Wed, 1 Apr 2020 22:30:40 +0900
Subject: [I18n][PTL][election] Nominations were over - no PTL candidacy
In-Reply-To: References: Message-ID: <6175f5cf-94e6-045d-6291-1effd140d6d8@gmail.com>

Hello,

I would like to share with I18n team members + OpenStackers who are interested in I18n that there were no PTL nominations for the upcoming Victoria release cycle.
I am still serving as an appointed PTL, since I cannot run for I18n PTL as an election official, but I must honestly say that my contributions have decreased due to my personal life (my baby was born ~50 days ago - so busy, but really happy as a father).

The TC has started to discuss leaderless projects [1], and it seems that becoming a SIG [2] is being discussed as a candidate option, as the Docs team previously moved to the Technical Writing SIG. I would like to kindly ask for opinions from the I18n team - unless there are strong opinions otherwise, I will support becoming a SIG and try to move forward with the next possible steps.

With many thanks,

/Ian

[1] https://etherpad.openstack.org/p/victoria-leaderless
[2] https://governance.openstack.org/sigs/

-------- Forwarded Message --------
Subject: [all][TC][PTL][election] Nominations Close & Campaigning Begins
Date: Tue, 31 Mar 2020 16:46:39 -0700
From: Kendall Nelson
To: OpenStack Discuss

The PTL and TC Nomination periods are now over. The official candidate lists are available on the election website[0][1].

--PTL Election Details--

There are 16 projects without candidates, so according to this resolution[2], the TC will have to decide how the following projects will proceed: Adjutant Barbican Cloudkitty Congress I18n Infrastructure Loci Masakari Oslo Packaging_Rpm Placement Rally Swift Tacker Tricircle Zaqar

There are no projects with more than one candidate, so we won't need to hold any runoffs! Congratulations to our new and returning PTLs! [0]

--TC Election Details--

The official candidate list is available on the election website[1].

There will be a TC election following the campaigning period, which has now begun and runs until Apr 07, 2020 23:45 UTC, when the polling will begin.

You are encouraged to ask questions to the candidates on the ML to help inform your decisions when polling begins.
Thank you, -Kendall (diablo_rojo) & the Election Officials [0] https://governance.openstack.org/election/#victoria-ptl-candidates [1] https://governance.openstack.org/election/#victoria-tc-candidates [2] https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html From thierry at openstack.org Wed Apr 1 13:32:35 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 1 Apr 2020 15:32:35 +0200 Subject: [all] Curating the openstack org on GitHub Message-ID: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> Hi everyone, As you know, for practical and "code marketing" reasons, we maintain mirrors of the opendev.org/openstack repositories on GitHub: https://github.com/openstack/ A few months ago jroll and I took on the task of maintaining the "openstack" organization on GitHub... In particular pushing "moved" final commits to repositories that are now maintained elsewhere on Opendev, and marking them retired. It's all a bit of a challenge. GitHub does not let you set the "mirror" flag by API (you have to ask GitHub support to manually do it, and they don't like it when you ask for it to be set on 1,831 repositories). The "pinned repositories" UI fails miserably and does not let us select "Nova" (again, probably too many repositories, GitHub support does not have a solution for us). There are limited opportunities to describe the fact that we actually only use GitHub as a mirror, leading to confusion. For inactive repositories (no longer under OpenStack governance), the situation is even more confusing. Some are noted "MOVED" or "RETIRED" in their description, some have a closing commit (some others don't), some have the "archived" flag. Some don't have anything and appear active while they are not (see attached CSV for those who like data). There is also the openstack-dev and openstack-attic organizations, which are leftovers from ancient times. 
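As an aside on automation: while the "mirror" flag can only be set by GitHub support, the "archived" flag is exposed through the REST API ("Update a repository", PATCH /repos/{owner}/{repo}), so bulk-archiving retired repositories could at least be scripted. A hedged sketch - the token handling and repository name below are illustrative, not an existing tool:

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

def archive_repo_request(org, repo, token):
    """Build the PATCH request that sets a repository's 'archived' flag.

    The mirror flag has no API endpoint, but 'archived' does:
    PATCH /repos/{owner}/{repo} with body {"archived": true}.
    """
    return urllib.request.Request(
        f"{GITHUB_API}/repos/{org}/{repo}",
        data=json.dumps({"archived": True}).encode(),
        method="PATCH",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github.v3+json",
        },
    )

# To actually archive a repo (needs admin rights on the org):
# urllib.request.urlopen(archive_repo_request("openstack", "some-retired-repo", token))
```

Looping this over the retired-repo list from the CSV would batch-apply the "archived" flag without any support tickets; it does not help with the mirror flag or the pinned-repositories UI.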
In summary, for a "code marketing" opportunity, it does not paint a great picture. I'd like to make the following suggestion: - aggressively delete all non-openstack things from the openstack org, keeping only official, active repositories - delete the openstack-dev and openstack-attic organizations We shied away from doing that in the past, mostly to not break people who may have cloned those repositories... But I think the benefits of cleaning up now outweigh the drawbacks. The reference code always exists at opendev.org anyway. Also maybe once we are back to a more reasonable number of visible repositories, the pins UI will work again. And GitHub will actually be OK with setting the mirror flag. Thoughts, comments ? -- Thierry Carrez (ttx) -------------- next part -------------- A non-text attachment was scrubbed... Name: analysis.csv.gz Type: application/gzip Size: 12568 bytes Desc: not available URL: From neil at tigera.io Wed Apr 1 13:57:31 2020 From: neil at tigera.io (Neil Jerram) Date: Wed, 1 Apr 2020 13:57:31 +0000 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> Message-ID: SGTM, as someone associated with a now-inactive repo ( https://github.com/openstack/networking-calico). On Wed, Apr 1, 2020 at 2:32 PM Thierry Carrez wrote: > Hi everyone, > > As you know, for practical and "code marketing" reasons, we maintain > mirrors of the opendev.org/openstack repositories on GitHub: > > https://github.com/openstack/ > > A few months ago jroll and I took on the task of maintaining the > "openstack" organization on GitHub... In particular pushing "moved" > final commits to repositories that are now maintained elsewhere on > Opendev, and marking them retired. > > It's all a bit of a challenge. 
GitHub does not let you set the "mirror" > flag by API (you have to ask GitHub support to manually do it, and they > don't like it when you ask for it to be set on 1,831 repositories). The > "pinned repositories" UI fails miserably and does not let us select > "Nova" (again, probably too many repositories, GitHub support does not > have a solution for us). There are limited opportunities to describe the > fact that we actually only use GitHub as a mirror, leading to confusion. > > For inactive repositories (no longer under OpenStack governance), the > situation is even more confusing. Some are noted "MOVED" or "RETIRED" in > their description, some have a closing commit (some others don't), some > have the "archived" flag. Some don't have anything and appear active > while they are not (see attached CSV for those who like data). There is > also the openstack-dev and openstack-attic organizations, which are > leftovers from ancient times. In summary, for a "code marketing" > opportunity, it does not paint a great picture. > > I'd like to make the following suggestion: > > - aggressively delete all non-openstack things from the openstack org, > keeping only official, active repositories > - delete the openstack-dev and openstack-attic organizations > > We shied away from doing that in the past, mostly to not break people > who may have cloned those repositories... But I think the benefits of > cleaning up now outweigh the drawbacks. The reference code always exists > at opendev.org anyway. Also maybe once we are back to a more reasonable > number of visible repositories, the pins UI will work again. And GitHub > will actually be OK with setting the mirror flag. > > Thoughts, comments ? > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From paye600 at gmail.com Wed Apr 1 14:04:50 2020 From: paye600 at gmail.com (Roman Gorshunov) Date: Wed, 1 Apr 2020 16:04:50 +0200 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> Message-ID: Hello Thierry, I'm all for it, however would try to talk to GitHub/Microsoft and get their support on keeping our mirrors there up and running officially. Would we may be need to have a formal contract with them? Whenever I got-clone code, I use opendev. But what I really like in GitHub in comparison to the Gitea we use, is advanced search functionality and git repo statistics (commits volume, per-contributor commits volume, etc.). Best regards, Roman Gorshunov On Wed, Apr 1, 2020 at 3:40 PM Thierry Carrez wrote: > > Hi everyone, > > As you know, for practical and "code marketing" reasons, we maintain > mirrors of the opendev.org/openstack repositories on GitHub: > > https://github.com/openstack/ > > A few months ago jroll and I took on the task of maintaining the > "openstack" organization on GitHub... In particular pushing "moved" > final commits to repositories that are now maintained elsewhere on > Opendev, and marking them retired. > > It's all a bit of a challenge. GitHub does not let you set the "mirror" > flag by API (you have to ask GitHub support to manually do it, and they > don't like it when you ask for it to be set on 1,831 repositories). The > "pinned repositories" UI fails miserably and does not let us select > "Nova" (again, probably too many repositories, GitHub support does not > have a solution for us). There are limited opportunities to describe the > fact that we actually only use GitHub as a mirror, leading to confusion. > > For inactive repositories (no longer under OpenStack governance), the > situation is even more confusing. 
Some are noted "MOVED" or "RETIRED" in > their description, some have a closing commit (some others don't), some > have the "archived" flag. Some don't have anything and appear active > while they are not (see attached CSV for those who like data). There is > also the openstack-dev and openstack-attic organizations, which are > leftovers from ancient times. In summary, for a "code marketing" > opportunity, it does not paint a great picture. > > I'd like to make the following suggestion: > > - aggressively delete all non-openstack things from the openstack org, > keeping only official, active repositories > - delete the openstack-dev and openstack-attic organizations > > We shied away from doing that in the past, mostly to not break people > who may have cloned those repositories... But I think the benefits of > cleaning up now outweigh the drawbacks. The reference code always exists > at opendev.org anyway. Also maybe once we are back to a more reasonable > number of visible repositories, the pins UI will work again. And GitHub > will actually be OK with setting the mirror flag. > > Thoughts, comments ? 
> > -- > Thierry Carrez (ttx) > From gr at ham.ie Wed Apr 1 14:33:39 2020 From: gr at ham.ie (Graham Hayes) Date: Wed, 1 Apr 2020 15:33:39 +0100 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> Message-ID: <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> On 01/04/2020 14:32, Thierry Carrez wrote: > Hi everyone, > > I'd like to make the following suggestion: > > - aggressively delete all non-openstack things from the openstack org, > keeping only official, active repositories I think we have gotten to a point where this is warranted > - delete the openstack-dev and openstack-attic organizations If we do delete these orgs, could we maybe leave the org in place with a single repo pointing people to opendev / or a table of old repo -> new repos? I agree with the clean up, but I don't want to break people and leave them with no guidance. > We shied away from doing that in the past, mostly to not break people > who may have cloned those repositories... But I think the benefits of > cleaning up now outweigh the drawbacks. The reference code always exists > at opendev.org anyway. Also maybe once we are back to a more reasonable > number of visible repositories, the pins UI will work again. And GitHub > will actually be OK with setting the mirror flag. > > Thoughts, comments ? 
> From allison at openstack.org Wed Apr 1 14:54:10 2020 From: allison at openstack.org (Allison Price) Date: Wed, 1 Apr 2020 09:54:10 -0500 Subject: Fwd: OpenStack Foundation Community Meetings References: <1633187D-8928-4671-B126-CB9CAC98377B@openstack.org> Message-ID: <2B7FE5C9-910E-4B47-AA8D-B86E59E60039@openstack.org> Hi everyone - In case you didn’t see the post on the Foundation mailing list, we have a community meeting tomorrow where we will be talking about updates to OSF events as well as project updates and how you can get involved in the 10th year of OpenStack campaign. One meeting will be in English and one in Mandarin. Bring your questions and see you then! Cheers, Allison > Begin forwarded message: > > From: Allison Price > Subject: OpenStack Foundation Community Meetings > Date: March 26, 2020 at 10:21:15 AM CDT > To: foundation at lists.openstack.org > > Hi everyone, > > Next week we are going to have two community meetings to discuss the OpenStack 10th anniversary planning, current community projects, and an update on OSF events. Please join if you would like to hear updates or if you have questions for the OpenStack Foundation team. > > Join us: > Thursday, April 2 at 10am CT / 1500 UTC  > Friday, April 3 in Mandarin at 10am China Standard Time > > If you are unable to join one of the above times, I will share a recording to the mailing list after the meetings. > > Cheers, > Allison > > > Allison Price > OpenStack Foundation > allison at openstack.org > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From hberaud at redhat.com Wed Apr 1 15:28:08 2020
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 1 Apr 2020 17:28:08 +0200
Subject: [all] Curating the openstack org on GitHub
In-Reply-To: <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie>
References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie>
Message-ID:

What do you think about creating a new openstack archive organization, github.com/openstack-archive/ (or something like that), and transferring them to this organization?

GitHub properly manages redirection and transfers, so it could be a proper way to keep projects reachable through the GitHub search engine (for those who need that) and to allow us to clean things up on github.com/openstack.

On Wed, Apr 1, 2020 at 4:38 PM, Graham Hayes wrote:
> On 01/04/2020 14:32, Thierry Carrez wrote:
> > Hi everyone,
> >
> > I'd like to make the following suggestion:
> >
> > - aggressively delete all non-openstack things from the openstack org,
> > keeping only official, active repositories
>
> I think we have gotten to a point where this is warranted
>
> > - delete the openstack-dev and openstack-attic organizations
>
> If we do delete these orgs, could we maybe leave the org in place with a
> single repo pointing people to opendev / or a table of old repo -> new
> repos?
>
> I agree with the clean up, but I don't want to break people and leave
> them with no guidance.
>
> > We shied away from doing that in the past, mostly to not break people
> > who may have cloned those repositories... But I think the benefits of
> > cleaning up now outweigh the drawbacks. The reference code always exists
> > at opendev.org anyway. Also maybe once we are back to a more reasonable
> > number of visible repositories, the pins UI will work again. And GitHub
> > will actually be OK with setting the mirror flag.
> >
> > Thoughts, comments ?
> >

--
Hervé Beraud
Senior Software Engineer
Red Hat - Openstack Oslo
irc: hberaud

From mordred at inaugust.com Wed Apr 1 15:51:15 2020
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 1 Apr 2020 10:51:15 -0500
Subject: Inline comments from Zuul
Message-ID: <7FBF7A19-F0A4-41BC-832E-F9ADC5B4177C@inaugust.com>

Hey everybody,

Yesterday mnaser finished up a long-standing TODO list item we had of leveraging Zuul's ability to leave inline comments on changes, by parsing out things like linter output and dropping them on changes. This is now live.

We've run into a few gotchas (it turns out there are a lot of people doing a lot of different things) - all of which we've either fixed or have fixes in flight for. Notably, there is a usage pattern of running pylint but only partially caring about the results, which turns inline comments from pylint output into complete noise.
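For anyone curious about the mechanics: Zuul jobs report these comments back through zuul_return's file_comments structure, a dict mapping file paths to lists of {line, message} entries per Zuul's documented job return values. Here is a sketch of mapping pep8-style linter lines onto that structure; this is illustrative, not the actual parser used in the jobs:

```python
import re

# pep8/pylint-style lines look like: path/to/file.py:12:4: C0301 line too long
LINT_LINE = re.compile(r"^(?P<path>[^:]+):(?P<line>\d+):\d+:\s*(?P<message>.+)$")

def to_file_comments(linter_output):
    """Map 'path:line:col: message' linter lines onto Zuul's file_comments
    structure (as passed to zuul_return): {path: [{'line': N, 'message': ...}]}."""
    comments = {}
    for raw in linter_output.splitlines():
        match = LINT_LINE.match(raw.strip())
        if not match:
            continue  # ignore summary lines, blank lines, etc.
        comments.setdefault(match.group("path"), []).append(
            {"line": int(match.group("line")), "message": match.group("message")}
        )
    return comments
```

This is also why the "partially caring about pylint" pattern is so noisy: every matching line becomes a review comment, whether or not the job treats it as fatal.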
We’ve turned off the inline comments in openstack-tox-pylint: https://review.opendev.org/#/c/716599/ https://review.opendev.org/#/c/716600/ although if your project uses it and would like inline comments from it, they can be re-enabled in your project. Similarly the same flag can be used to disable inline comments if your project decides they don't want them for some reason. Work is under way to add parsing for Sphinx output and golangci-lint output. If anybody runs in to any issues - like the results are too noisy or something is breaking where it shouldn’t, please let us know and we’ll get on it as quickly as possible. Thanks! Monty From openstack at nemebean.com Wed Apr 1 16:00:55 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 1 Apr 2020 11:00:55 -0500 Subject: [all] Curating the openstack org on GitHub In-Reply-To: References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> Message-ID: <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> On 4/1/20 10:28 AM, Herve Beraud wrote: > What do you think about creating a new openstack archive organization > github.com/openstack-archive/ (or > thing like that) and tranfert them to this organization? This is basically what openstack-attic was. It would be nicer for anyone who has cloned these repos to move them there instead of deleting. It does still leave a bunch of essentially dead repos lying around though, so I'm not sure if that completely addresses the cleanup aspect of this effort. I'm also curious if all of the repos involved here are inactive, or if there are still things in openstack-dev that were moved to the openstack namespace but are still mirrored to github in the old org. > > Github properly manage redirection and transfert, it could be a proper > way to keep projects reachable for searching with the github search > engine (for some reasons) and to allow us to clean things on > github.com/openstack . > > Le mer. 1 avr. 
2020 à 16:38, Graham Hayes > > a écrit : > > On 01/04/2020 14:32, Thierry Carrez wrote: > > Hi everyone, > > > > > > > I'd like to make the following suggestion: > > > > - aggressively delete all non-openstack things from the openstack > org, > > keeping only official, active repositories > > I think we have gotten to a point where this is warranted > > > - delete the openstack-dev and openstack-attic organizations > > If we do delete these orgs, could we maybe leave the org in place with a > single repo pointing people to opendev / or a table of old repo -> new > repos? > > I agree with the clean up, but I don't want to break people and leave > them with no guidance. > > > We shied away from doing that in the past, mostly to not break > people > > who may have cloned those repositories... But I think the > benefits of > > cleaning up now outweigh the drawbacks. The reference code always > exists > > at opendev.org anyway. Also maybe once we > are back to a more reasonable > > number of visible repositories, the pins UI will work again. And > GitHub > > will actually be OK with setting the mirror flag. > > > > Thoughts, comments ? 
> > > > > > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > From thierry at openstack.org Wed Apr 1 16:04:48 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 1 Apr 2020 18:04:48 +0200 Subject: [all] Curating the openstack org on GitHub In-Reply-To: References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> Message-ID: Herve Beraud wrote: > What do you think about creating a new openstack archive organization > github.com/openstack-archive/ (or > thing like that) and tranfert them to this organization? > > Github properly manage redirection and transfert, it could be a proper > way to keep projects reachable for searching with the github search > engine (for some reasons) and to allow us to clean things on > github.com/openstack . The trick is that transfer is (if documentation is to be trusted) an async process involving the new owner clicking a link in an email to accept the transferred repository. Since this has to be done for 1,130 repositories (not even counting openstack-dev and openstack-attic), I don't think that would be practical...
-- Thierry Carrez (ttx) From thierry at openstack.org Wed Apr 1 16:07:04 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 1 Apr 2020 18:07:04 +0200 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> Message-ID: Ben Nemec wrote: > On 4/1/20 10:28 AM, Herve Beraud wrote: >> What do you think about creating a new openstack archive organization >> github.com/openstack-archive/ >> (or thing like that) and tranfert them to this organization? > > This is basically what openstack-attic was. [...] Except openstack-attic was defined and replicated from our side, not the result of a GitHub transfer. So if we were to do the same (rename all projects on opendev, sync them to GitHub), users of old repositories on GitHub would not be automagically transferred. -- Thierry Carrez (ttx) From sean.mcginnis at gmx.com Wed Apr 1 16:15:37 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 1 Apr 2020 11:15:37 -0500 Subject: [all] Curating the openstack org on GitHub In-Reply-To: References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> Message-ID: <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> On 4/1/20 11:07 AM, Thierry Carrez wrote: > Ben Nemec wrote: >> On 4/1/20 10:28 AM, Herve Beraud wrote: >>> What do you think about creating a new openstack archive >>> organization github.com/openstack-archive/ >>> (or thing like that) and >>> tranfert them to this organization? >> >> This is basically what openstack-attic was. [...] > > Except openstack-attic was defined and replicated from our side, not > the result of a GitHub transfer. 
So if we were to do the same (rename > all projects on opendev, sync them to GitHub), users of old > repositories on GitHub would not be automagically transferred. Would it be simpler in this case though? We would just: 1. Stop mirroring retired repos from our gitea to GitHub 2. Manually do a repo transfer from openstack/ to openstack-attic/ on GitHub 3. Set the openstack-attic/repo to Archived Then anyone that still tries to clone from the old openstack namespace's GitHub location will still get the files, but from that point on we don't need to worry about mirroring or ongoing maintenance. It's basically just a historical record. And if someone wants to for whatever reason, they can fork that repo and do whatever they want to do with it. Sean From mtreinish at kortar.org Wed Apr 1 16:26:57 2020 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 1 Apr 2020 12:26:57 -0400 Subject: [all] Curating the openstack org on GitHub In-Reply-To: References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> Message-ID: <20200401162657.GA10049@zeong> On Wed, Apr 01, 2020 at 06:04:48PM +0200, Thierry Carrez wrote: > Herve Beraud wrote: > > What do you think about creating a new openstack archive organization > > github.com/openstack-archive/ (or > > thing like that) and tranfert them to this organization? > > > > Github properly manage redirection and transfert, it could be a proper > > way to keep projects reachable for searching with the github search > > engine (for some reasons) and to allow us to clean things on > > github.com/openstack . > > The trick is that transfer is (if documentation is to be trusted) an async > process involving the new owner clicking a link in an email to accept the > transferred repository. Since this has to be done for 1,130 repositories > (not even counting openstack-dev and openstack-attic), I don't think that > would be practical...
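For reference, steps 2 and 3 above correspond to two GitHub REST API calls: `POST /repos/{owner}/{repo}/transfer` and `PATCH /repos/{owner}/{repo}` with `"archived": true`. A rough sketch that only constructs the requests; the org names are examples, and actually sending these calls would require a token with owner rights on both organizations:

```python
import json

API = "https://api.github.com"

def transfer_request(owner, repo, new_owner):
    """Build the repo-transfer call: POST /repos/{owner}/{repo}/transfer."""
    return ("POST", "%s/repos/%s/%s/transfer" % (API, owner, repo),
            {"new_owner": new_owner})

def archive_request(owner, repo):
    """Build the archive call: PATCH /repos/{owner}/{repo} with archived=true."""
    return ("PATCH", "%s/repos/%s/%s" % (API, owner, repo),
            {"archived": True})

def plan_retirement(repos, old_org="openstack", new_org="openstack-attic"):
    """Yield the API calls needed to move and archive each retired repo."""
    for repo in repos:
        yield transfer_request(old_org, repo, new_org)
        # Once the transfer completes, the repo lives under new_org.
        yield archive_request(new_org, repo)

if __name__ == "__main__":
    for method, url, payload in plan_retirement(["some-retired-repo"]):
        print(method, url, json.dumps(payload))
```

Whether the transfer step completes immediately or waits on an email confirmation is exactly the question discussed in this thread; scripting it in bulk only works in the no-confirmation case.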
> This is true unless the user doing the transfer is an owner on both sides of the transfer. Then it should just be a matter of pushing the button/making the API request and there is no confirmation needed (I did this just the other day, transferring a repo from my personal account to an organization I'm an owner of). -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Wed Apr 1 16:32:13 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 1 Apr 2020 16:32:13 +0000 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> Message-ID: <20200401163213.wps7fkhv4ivnigyr@yuggoth.org> On 2020-04-01 11:15:37 -0500 (-0500), Sean McGinnis wrote: [...] > 1. Stop mirroring retired repos from our gitea to GitHub [...] At the moment, there is legacy configuration in place instructing OpenDev's Gerrit to replicate all repositories with names matching ^openstack/.* to the openstack organization on GitHub. Gerrit is going to continue to try to (re)replicate all these retired repos within the openstack namespace. This is probably the time to talk about switching OpenStack's active deliverables to using a Zuul job for replicating to GitHub like some of the other namespaces in OpenDev have been doing. We'd dearly love to drop that GitHub remote from our Gerrit replication config. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thierry at openstack.org Wed Apr 1 16:36:18 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 1 Apr 2020 18:36:18 +0200 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> Message-ID: <2c5b28fb-7fef-a547-d00d-703a0803d10c@openstack.org> Sean McGinnis wrote: > [...] We would just: > > 1. Stop mirroring retired repos from our gitea to GitHub > 2. Manually do a repo transfer from openstack/ to openstack-attic/ on > GitHub > 3. Set the openstack-attic/repo to Archived > > Then anyone that still tries to clone from the old openstack namespaces > GitHub location will still get the files, but from that point on we > don't need to worry about mirroring or ongoing maintenance. It's > basically just a historical record. And if someone wants to for whatever > reason, they can fork that repo and do whatever they want to do with it. Assuming Matt is right and org-to-org transfer does not generate manual confirmation if they have a shared owner, that's definitely a possibility. I prefer openstack-archive because openstack-attic actually exists on opendev so having it in two places but containing different things is likely to be confusing. I'd rather transfer openstack-attic to openstack-archive as well. 
-- Thierry Carrez (ttx) From fungi at yuggoth.org Wed Apr 1 16:48:53 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 1 Apr 2020 16:48:53 +0000 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <20200401162657.GA10049@zeong> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <20200401162657.GA10049@zeong> Message-ID: <20200401164852.5uxy6mpwf64kocfk@yuggoth.org> On 2020-04-01 12:26:57 -0400 (-0400), Matthew Treinish wrote: [...] > This is true unless the user doing the transfer is an owner on > both sides of the transfer. Then it should just be a matter of > pushing the button/making the API request and there is no > confirmation needed (I did this just the other day, transferring a > repo from my personal account to an organization I'm an owner of). In fact, this was the only way to do a transfer between orgs for quite some time (so you had to temporarily add the user to one or the other org), and the confirmation workflow was only added more recently. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Tim.Bell at cern.ch Wed Apr 1 16:56:10 2020 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 1 Apr 2020 16:56:10 +0000 Subject: OpenStack Foundation Community Meetings In-Reply-To: <2B7FE5C9-910E-4B47-AA8D-B86E59E60039@openstack.org> References: <1633187D-8928-4671-B126-CB9CAC98377B@openstack.org> <2B7FE5C9-910E-4B47-AA8D-B86E59E60039@openstack.org> Message-ID: Allison, I can't make it to the event.. do you know if they'll be recorded ? 
Tim -----Original message----- From: Allison Price  Sent: Wednesday, April 1, 2020 4:57 PM To: OpenStack Discuss Subject: Fwd: OpenStack Foundation Community Meetings Hi everyone -  In case you didn’t see the post on the Foundation mailing list, we have a community meeting tomorrow where we will be talking about updates to OSF events as well as project updates and how you can get involved in the 10th year of OpenStack campaign.  One meeting will be in English and one in Mandarin.  Bring your questions and see you then!  Cheers, Allison Begin forwarded message: From: Allison Price > Subject: OpenStack Foundation Community Meetings Date: March 26, 2020 at 10:21:15 AM CDT To: foundation at lists.openstack.org Hi everyone,  Next week we are going to have two community meetings to discuss the OpenStack 10th anniversary planning, current community projects, and an update on OSF events. Please join if you would like to hear updates or if you have questions for the OpenStack Foundation team. Join us:  * Thursday, April 2 at 10am CT / 1500 UTC  * Friday, April 3 in Mandarin at 10am China Standard Time If you are unable to join one of the above times, I will share a recording to the mailing list after the meetings. Cheers, Allison Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Wed Apr 1 16:57:37 2020 From: allison at openstack.org (Allison Price) Date: Wed, 1 Apr 2020 11:57:37 -0500 Subject: OpenStack Foundation Community Meetings In-Reply-To: References: <1633187D-8928-4671-B126-CB9CAC98377B@openstack.org> <2B7FE5C9-910E-4B47-AA8D-B86E59E60039@openstack.org> Message-ID: <6F348FDA-DB77-4737-9BB8-2BC24F324579@openstack.org> Hi Tim, Yes, both meetings will be recorded and shared on the mailing list afterwards along with the slides and an etherpad for folks to share any questions we may not cover so we can circle back. 
Allison > On Apr 1, 2020, at 11:56 AM, Tim Bell wrote: > > Allison, > > I can't make it to the event.. do you know if they'll be recorded ? > > Tim > > -----Original message----- > From: Allison Price > Sent: Wednesday, April 1, 2020 4:57 PM > To: OpenStack Discuss > Subject: Fwd: OpenStack Foundation Community Meetings > > Hi everyone - > > In case you didn’t see the post on the Foundation mailing list, we have a community meeting tomorrow where we will be talking about updates to OSF events as well as project updates and how you can get involved in the 10th year of OpenStack campaign. > > One meeting will be in English and one in Mandarin. > > Bring your questions and see you then! > > Cheers, > Allison > > > > > Begin forwarded message: > > From: Allison Price > > Subject: OpenStack Foundation Community Meetings > Date: March 26, 2020 at 10:21:15 AM CDT > To: foundation at lists.openstack.org > > Hi everyone, > > Next week we are going to have two community meetings to discuss the OpenStack 10th anniversary planning, current community projects, and an update on OSF events. Please join if you would like to hear updates or if you have questions for the OpenStack Foundation team. > > Join us: > Thursday, April 2 at 10am CT / 1500 UTC  > Friday, April 3 in Mandarin at 10am China Standard Time > > If you are unable to join one of the above times, I will share a recording to the mailing list after the meetings. > > Cheers, > Allison > > > Allison Price > OpenStack Foundation > allison at openstack.org > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tim.Bell at cern.ch Wed Apr 1 17:01:20 2020 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 1 Apr 2020 17:01:20 +0000 Subject: OpenStack Foundation Community Meetings In-Reply-To: <6F348FDA-DB77-4737-9BB8-2BC24F324579@openstack.org> References: <1633187D-8928-4671-B126-CB9CAC98377B@openstack.org> <2B7FE5C9-910E-4B47-AA8D-B86E59E60039@openstack.org> <6F348FDA-DB77-4737-9BB8-2BC24F324579@openstack.org> Message-ID: Great, thanks. Tim -----Original message----- From: Allison Price  Sent: Wednesday, April 1, 2020 7:00 PM To: Tim Bell Cc: OpenStack Discuss Subject: Re: OpenStack Foundation Community Meetings Hi Tim,  Yes, both meetings will be recorded and shared on the mailing list afterwards along with the slides and an etherpad for folks to share any questions we may not cover so we can circle back.  Allison  On Apr 1, 2020, at 11:56 AM, Tim Bell wrote: Allison, I can't make it to the event.. do you know if they'll be recorded ? Tim -----Original message----- From: Allison Price  > Sent: Wednesday, April 1, 2020 4:57 PM To: OpenStack Discuss > Subject: Fwd: OpenStack Foundation Community Meetings Hi everyone -  In case you didn’t see the post on the Foundation mailing list, we have a community meeting tomorrow where we will be talking about updates to OSF events as well as project updates and how you can get involved in the 10th year of OpenStack campaign.  One meeting will be in English and one in Mandarin.  Bring your questions and see you then!  Cheers, Allison Begin forwarded message: From: Allison Price > Subject: OpenStack Foundation Community Meetings Date: March 26, 2020 at 10:21:15 AM CDT To: foundation at lists.openstack.org Hi everyone,  Next week we are going to have two community meetings to discuss the OpenStack 10th anniversary planning, current community projects, and an update on OSF events. Please join if you would like to hear updates or if you have questions for the OpenStack Foundation team. 
Join us:  * Thursday, April 2 at 10am CT / 1500 UTC  * Friday, April 3 in Mandarin at 10am China Standard Time If you are unable to join one of the above times, I will share a recording to the mailing list after the meetings. Cheers, Allison Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Apr 1 17:03:47 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 1 Apr 2020 19:03:47 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? Message-ID: Hi everyone! This topic should not come as a huge surprise for many, since it has been raised numerous times in the past years. I have a feeling that the end of Ussuri, now that we’ve re-acquired our PTL and are on the verge of selecting new TC members, may be a good time to propose it for a formal discussion. TL;DR I’m proposing to make Ironic a top-level project under opendev.org and the OpenStack Foundation, following the same model as Zuul. I don’t propose severing current relationships with other OpenStack projects, nor making substantial changes in how the project is operated. (And no, it’s not an April 1st joke) Background ========= Ironic was born as a Nova plugin, but has grown way beyond this single case since then. The first commit in Bifrost dates to February 2015. During these 5 years (hey, we forgot to celebrate!) it has developed into a commonly used data center management tool - and still based on standalone Ironic! The Metal3 project uses standalone Ironic as its hardware management backend. We haven’t been “just” a component of OpenStack for a while now, I think it’s time to officially recognize it. And before you ask: in no case do I suggest scaling down our invaluable integration with Nova. We’re observing a solid growth of deployments using Ironic as an addition to their OpenStack clouds, and this proposal doesn’t try to devalue this use case. 
The intention is to accept publicly and officially that it’s not the only or the main use case, but one of the main use cases. I don’t think it comes as a surprise to the Nova team. Okay, so why? =========== The first and the main reason is the ambiguity in our positioning. We do see prospective operators and users confused by the perception that Ironic is a part of OpenStack, especially when it comes to the standalone use case. “But what if I don’t need OpenStack” is a question that I hear in most of these conversations. Changing from “a part of OpenStack” to “a FOSS tool that can integrate with OpenStack” is critical for our project to keep growing into new fields. To me personally it feels in line with how OpenDev itself is reaching into new areas beyond just the traditional IaaS. The next OpenDev event will apparently have a bare metal management track, so why not a top-level project for it? Another reason is release cadence. We have repeatedly expressed the desire to release Ironic and its sub-projects more often than we do now. Granted, *technically* we can release often even now. We can even abandon the current release model and switch to “independent”, but it doesn’t entirely solve the issue at hand. First, we don’t want to lose the notion of stable branches. One way or another, we need to support consumers with bug fix releases. Second, to become truly “independent” we’ll need to remove any tight coupling with any projects that do integrated releases. Which is, essentially, what I’m proposing here. Finally, I believe that our independence (can I call it “Irexit” please?) has already happened in reality, we just shy away from recognizing it. Look: 1. All integration points with other OpenStack projects are optional. 2. We can work fully standalone and even provide a project for that. 3. Many new features (RAID, BIOS to name a few) are exposed to standalone users much earlier than to those going through Nova. 4.
We even have our own mini-scheduler (although its intention is not and has not been to replace the Placement service). 5. We make releases more often than the “core” OpenStack projects (but see above). What we will do ============ This proposal involves in the short term: * Creating a new git namespace: opendev.org/ironic * Creating a new website (name TBD, bare metal puns are welcome). * If we can have https://docs.opendev.org/ironic/, it may be just fine though. * Keeping the same governance model, only adjusted to the necessary extent. * Keeping the same policies (reviews, CI, stable). * Defining a new release cadence and stable branch support schedule. In the long term we will consider (not necessarily do): * Reshaping our CI to rely less on devstack and grenade (only use them for jobs involving OpenStack). * Reducing or removing reliance on oslo libraries. * Stopping using rabbitmq for messaging (we’ve already made it optional). * Integrating with non-OpenStack services (kubernetes?) and providing lighter alternatives (think, built-in authentication). What we will NOT do ================ At least this proposal does NOT involve: * Stopping maintaining the Ironic virt driver in Nova. * Stopping running voting CI jobs with OpenStack services. * Dropping optional integration with OpenStack services. * Leaving OpenDev completely. What do you think? =============== Please let us know what you think about this proposal. Any hints on how to proceed with it, in case we reach a consensus, are also welcome. Cheers, Dmitry -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gr at ham.ie Wed Apr 1 17:20:08 2020 From: gr at ham.ie (Graham Hayes) Date: Wed, 1 Apr 2020 18:20:08 +0100 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <20200401163213.wps7fkhv4ivnigyr@yuggoth.org> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> <20200401163213.wps7fkhv4ivnigyr@yuggoth.org> Message-ID: <88be2b3a-f209-358a-06f5-fded11782bef@ham.ie> On 01/04/2020 17:32, Jeremy Stanley wrote: > On 2020-04-01 11:15:37 -0500 (-0500), Sean McGinnis wrote: > [...] >> 1. Stop mirroring retired repos from our gitea to GitHub > [...] > > At the moment, there is legacy configuration in place instructing > OpenDev's Gerrit to replicate all repositories with names matching > ^openstack/.* to the openstack organization on GitHub. Gerrit is > going to continue to try to (re)replicate all these retired repos > within the openstack namespace. > > This is probably the time to talk about switching OpenStack's active > deliverables to using a Zuul job for replicating to GitHub like some > of the other namespaces in OpenDev have been doing. We'd dearly > love to drop that GitHub remote from our Gerrit replication config. > This is probably a good thing regardless of what we decide here - in theory, it would be removing the remotes, and then injecting a new job into all openstack/ projects that looks something like [1], and runs as a post job, right? 
1 - https://opendev.org/recordsansible/ara/src/branch/master/.zuul.d/jobs.yaml#L15-L25 From gr at ham.ie Wed Apr 1 17:20:31 2020 From: gr at ham.ie (Graham Hayes) Date: Wed, 1 Apr 2020 18:20:31 +0100 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <2c5b28fb-7fef-a547-d00d-703a0803d10c@openstack.org> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> <2c5b28fb-7fef-a547-d00d-703a0803d10c@openstack.org> Message-ID: <54bbff76-1e80-2a67-9d10-c1a77d0aaa35@ham.ie> On 01/04/2020 17:36, Thierry Carrez wrote: > Sean McGinnis wrote: >> [...] We would just: >> >> 1. Stop mirroring retired repos from our gitea to GitHub >> 2. Manually do a repo transfer from openstack/ to openstack-attic/ on >> GitHub >> 3. Set the openstack-attic/repo to Archived >> >> Then anyone that still tries to clone from the old openstack namespaces >> GitHub location will still get the files, but from that point on we >> don't need to worry about mirroring or ongoing maintenance. It's >> basically just a historical record. And if someone wants to for whatever >> reason, they can fork that repo and do whatever they want to do with it. > > Assuming Matt is right and org-to-org transfer does not generate manual > confirmation if they have a shared owner, that's definitely a possibility. > > I prefer openstack-archive because openstack-attic actually exists on > opendev so having it in two places but containing different things is > likely to be confusing. I'd rather transfer openstack-attic to > openstack-archive as well. > That seems sane to me From juliaashleykreger at gmail.com Wed Apr 1 17:27:47 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 1 Apr 2020 10:27:47 -0700 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: tl;dr <3 - more words below. 
On Wed, Apr 1, 2020 at 10:05 AM Dmitry Tantsur wrote: > > Hi everyone! > > This topic should not come as a huge surprise for many, since it has been raised numerous times in the past years. I have a feeling that the end of Ussuri, now that we’ve re-acquired our PTL and are on the verge of selecting new TC members, may be a good time to propose it for a formal discussion. > > TL;DR I’m proposing to make Ironic a top-level project under opendev.org and the OpenStack Foundation, following the same model as Zuul. I don’t propose severing current relationships with other OpenStack projects, nor making substantial changes in how the project is operated. Thank you for bringing this up Dmitry. I know the cores have discussed this a couple times over the past two years in various settings and forums, and I think you make a solid case. > > (And no, it’s not an April 1st joke) > > Background > ========= > > Ironic was born as a Nova plugin, but has grown way beyond this single case since then. The first commit in Bifrost dates to February 2015. During these 5 years (hey, we forgot to celebrate!) it has developed into a commonly used data center management tool - and still based on standalone Ironic! The Metal3 project uses standalone Ironic as its hardware management backend. We haven’t been “just” a component of OpenStack for a while now, I think it’s time to officially recognize it. > +2 > And before you ask: in no case do I suggest scaling down our invaluable integration with Nova. We’re observing a solid growth of deployments using Ironic as an addition to their OpenStack clouds, and this proposal doesn’t try to devalue this use case. The intention is to accept publicly and officially that it’s not the only or the main use case, but one of the main use cases. I don’t think it comes as a surprise to the Nova team. > > Okay, so why? > =========== > > The first and the main reason is the ambiguity in our positioning. 
We do see prospective operators and users confused by the perception that Ironic is a part of OpenStack, especially when it comes to the standalone use case. “But what if I don’t need OpenStack” is a question that I hear in most of these conversations. Changing from “a part of OpenStack” to “a FOSS tool that can integrate with OpenStack” is critical for our project to keep growing into new fields. To me personally it feels in line with how OpenDev itself is reaching into new areas beyond just the traditional IaaS. The next OpenDev even will apparently have a bare metal management track, so why not a top-level project for it? I can second this perception issue that we encounter a lot. People assume because we're part of the community that everything else is required when it is not. This is possibly the #1 barrier to accepting Ironic or even parts of ironic's ecosystem to help solve problems. > > Another reason is release cadence. We have repeatedly expressed the desire to release Ironic and its sub-projects more often than we do now. Granted, *technically* we can release often even now. We can even abandon the current release model and switch to “independent”, but it doesn’t entirely solve the issue at hand. First, we don’t want to lose the notion of stable branches. One way or another, we need to support consumers with bug fix releases. Second, to become truly “independent” we’ll need to remove any tight coupling with any projects that do integrated releases. Which is, essentially, what I’m proposing here. I agree, and I suspect this is going to be perceived as the most "scary" part of this. Ideally we want and need consumers like Metal3 to be able to pickup latest releases and latest stable branches, and we want to be able to fix major issues in past branches. 
While I know I've obtained agreement that we should have more ad-hoc freedom from the TC, at least verbally, it only takes a single -1 to prevent the Ironic project from doing what is right for its consumers. > > Finally, I believe that our independence (can I call it “Irexit” please?) has already happened in reality, we just shy away from recognizing it. Look: > 1. All integration points with other OpenStack projects are optional. > 2. We can work fully standalone and even provide a project for that. > 3. Many new features (RAID, BIOS to name a few) are exposed to standalone users much earlier than to those going through Nova. > 4. We even have our own mini-scheduler (although its intention is not and has not been to replace the Placement service). > 5. We make releases more often than the “core” OpenStack projects (but see above). > This is possibly the best point I've heard on this case to date. You're right, it basically has already happened, and I think the all-too-human resistance to change makes us shy about it. > What we will do > ============ > > This proposal involves in the short term: > * Creating a new git namespace: opendev.org/ironic > * Creating a new website (name TBD, bare metal puns are welcome). > * If we can have https://docs.opendev.org/ironic/, it may be just fine though. > * Keeping the same governance model, only adjusted to the necessary extent. I don't think we would need anything super expansive. In a sense, we're a bit of a rag-tag fugitive fleet, so we need to accept that as part of any model _AND_ recognize that most of us have numerous responsibilities, so things take time and never quite fit a perfect timetable. > * Keeping the same policies (reviews, CI, stable). > * Defining a new release cadence and stable branch support schedule. > I suspect this means we should also consider our own mailing list at some point, and maybe renaming the IRC channel?
> In the long term we will consider (not necessarily do): > * Reshaping our CI to rely less on devstack and grenade (only use them for jobs involving OpenStack). > * Reducing or removing reliance on oslo libraries. > * Stopping using rabbitmq for messaging (we’ve already made it optional). > * Integrating with non-OpenStack services (kubernetes?) and providing lighter alternatives (think, built-in authentication). +1000 > > What we will NOT do > ================ > > At least this proposal does NOT involve: > * Stopping maintaining the Ironic virt driver in Nova. > * Stopping running voting CI jobs with OpenStack services. > * Dropping optional integration with OpenStack services. > * Leaving OpenDev completely. > > What do you think? > =============== > > Please let us know what you think about this proposal. Any hints on how to proceed with it, in case we reach a consensus, are also welcome. > > Cheers, > Dmitry I completely agree with this, and while I've wondered about this for some time, I think now is the right time to proceed. Again, thank you Dmitry for bringing this up! From openstack at nemebean.com Wed Apr 1 17:28:32 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 1 Apr 2020 12:28:32 -0500 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <2c5b28fb-7fef-a547-d00d-703a0803d10c@openstack.org> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> <2c5b28fb-7fef-a547-d00d-703a0803d10c@openstack.org> Message-ID: On 4/1/20 11:36 AM, Thierry Carrez wrote: > Sean McGinnis wrote: >> [...] We would just: >> >> 1. Stop mirroring retired repos from our gitea to GitHub >> 2. Manually do a repo transfer from openstack/ to openstack-attic/ on >> GitHub >> 3.
Set the openstack-attic/repo to Archived >> >> Then anyone who still tries to clone from the old openstack namespace's >> GitHub location will still get the files, but from that point on we >> don't need to worry about mirroring or ongoing maintenance. It's >> basically just a historical record. And if someone wants to, for whatever >> reason, they can fork that repo and do whatever they want with it. > > Assuming Matt is right and org-to-org transfer does not generate manual > confirmation if they have a shared owner, that's definitely a possibility. > > I prefer openstack-archive because openstack-attic actually exists on > opendev so having it in two places but containing different things is > likely to be confusing. I'd rather transfer openstack-attic to > openstack-archive as well. > Good point. +1 from me. From fungi at yuggoth.org Wed Apr 1 18:01:33 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 1 Apr 2020 18:01:33 +0000 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <88be2b3a-f209-358a-06f5-fded11782bef@ham.ie> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> <20200401163213.wps7fkhv4ivnigyr@yuggoth.org> <88be2b3a-f209-358a-06f5-fded11782bef@ham.ie> Message-ID: <20200401180132.656rg5v3fr2ixwhh@yuggoth.org> On 2020-04-01 18:20:08 +0100 (+0100), Graham Hayes wrote: > On 01/04/2020 17:32, Jeremy Stanley wrote: [...] > > This is probably the time to talk about switching OpenStack's active > > deliverables to using a Zuul job for replicating to GitHub like some > > of the other namespaces in OpenDev have been doing. We'd dearly > > love to drop that GitHub remote from our Gerrit replication config.
> > This is probably a good thing regardless of what we decide here - > in theory, it would be removing the remotes, and then injecting a new > job into all openstack/ projects that looks something like [1], and > runs as a post job, right? [...] Basically, yes. We might be able to further centralize some of that so as to reduce the amount of boilerplate projects would need. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From alifshit at redhat.com Wed Apr 1 18:32:21 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Wed, 1 Apr 2020 14:32:21 -0400 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: On Wed, Apr 1, 2020 at 1:06 PM Dmitry Tantsur wrote: > Finally, I believe that our independence (can I call it “Irexit” please?) No, but I will accept "smelting" or "oxidation" ;) From sundar.nadathur at intel.com Wed Apr 1 19:02:00 2020 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 1 Apr 2020 19:02:00 +0000 Subject: [election][nova] PTL Candidacy for Victoria In-Reply-To: References: Message-ID: > From: Balázs Gibizer > Sent: Tuesday, March 31, 2020 7:54 AM > To: OpenStack Discuss > Subject: [election][nova] PTL Candidacy for Victoria > > Hi, > > I'd like to announce my candidacy for Nova PTL for Victoria. > [...] > Thanks, > Balazs Gibizer (gibi) Big +1. All the best, gibi.
Regards, Sundar From dmendiza at redhat.com Wed Apr 1 20:19:13 2020 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Wed, 1 Apr 2020 15:19:13 -0500 Subject: [all][TC][PTL][election] Nominations Close & Campaigning Begins In-Reply-To: <171332c2258.e47af13f24342.3352020126460654343@ghanshyammann.com> References: <171332c2258.e47af13f24342.3352020126460654343@ghanshyammann.com> Message-ID: <80b0c748-6cde-1e2e-da33-c6fa16eed5d8@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 3/31/20 7:37 PM, Ghanshyam Mann wrote: [...] > Just curious about Barbican PTL nomination[1]. Douglas is returning > PTL and email id the same as what we have currently in the > governance page[2]. Also saw Ussuri cycle nomination also with the > same email id[3]. > > did he disabled/changed his OSF profile or some automatic disable > happened? Though running as PTL even not voting in elections should > be considered as an active member of the foundation. It appears that my OSF membership was automatically disabled, which is why I did not expect my nomination to be rejected. In retrospect, I should have submitted my nomination a bit earlier; the deadline has now passed, and I am not sure how to proceed. I've fixed my membership status with the OSF to once again be a member, and I do still want to continue to shepherd the Barbican project through the next cycle.
- - Douglas -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEwcapj5oGTj2zd3XogB6WFOq/OrcFAl6E90AACgkQgB6WFOq/ OrcyWA//TwI1FpLzEyviyoU0VWSaUDnsxXeOWMZ6FUUjck55XjS6orrWquAQwDoe UCUXItt0IuVKT8g2hcqyJBM0asIryKi1zfNSpLjtapiD+jatfjF9EpgHHZBlpUYw XQJ5fabBC8UOpmB3FB/28aSTm9McmUc6dMNONlAJCTKPpc662X/ZNXLaDxYjQnxN +fjhsIGcF0W+uSKjYjkxxg0Ey/V1u+uq46Hbf1/a+9aRQ/IHleFJgeEDCuWRW7xm cL5KS4+If2YM8JyUczqK+H/xfUPSkwTY6JE3VbCG6Wr0y9QD3Kq+cCqYAgoeiRaN ZYKSWLIyJKXwFc4usSiaEYSZwFKaFGRig3Y7VV5o/90KeNRrwerLn6L1yLryQLCF VTjdx2Rz83Ba8j6OpRdTAkg5+EqaS3iJXRIq9BAdq1FcI+D2oZlS2SSq98YeiPcP 5Wa7q81JI0q7DuGGSgJe9vFXcpRu5htkX5fNUBgd+rcRCsdUAG/vD5GEsXWwuC0w G9LDdR91VHsK7Y14xXVcde6QNoKj4XAu/a7oKcloYlxg1qgdCurKErrDx/e6TQp5 jnYQ2pSj6/S4MxaPKLvRmUmcj3qm4LoWUU2LXU9PBuOAvwxiDSa3gzqhkKusKHRb 9UL03Z1jILKZUFX88KMp3T3kGehRQYPwKJKPMkCNCKLyZ5VTZNs= =WmMw -----END PGP SIGNATURE----- From gmann at ghanshyammann.com Wed Apr 1 20:32:03 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 01 Apr 2020 15:32:03 -0500 Subject: [all][TC][PTL][election] Nominations Close & Campaigning Begins In-Reply-To: <80b0c748-6cde-1e2e-da33-c6fa16eed5d8@redhat.com> References: <171332c2258.e47af13f24342.3352020126460654343@ghanshyammann.com> <80b0c748-6cde-1e2e-da33-c6fa16eed5d8@redhat.com> Message-ID: <17137719904.b4501dca64597.1492619151253275881@ghanshyammann.com> ---- On Wed, 01 Apr 2020 15:19:13 -0500 Douglas Mendizabal wrote ---- > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > On 3/31/20 7:37 PM, Ghanshyam Mann wrote: > [...] > > Just curious about Barbican PTL nomination[1]. Douglas is returning > > PTL and email id the same as what we have currently in the > > governance page[2]. Also saw Ussuri cycle nomination also with the > > same email id[3]. > > > > did he disabled/changed his OSF profile or some automatic disable > > happened? Though running as PTL even not voting in elections should > > be considered as an active member of the foundation. 
> > It appears that my OSF membership was automatically disabled, as I did > not expect my nomination to be rejected. > > In retrospect, I should have submitted my nomination a bit earlier > because the deadline has now passed, and I am not sure how to proceed. > > I've fixed my membership status with the OSF to once again be a > member, and I do still want to continue to shepherd the Barbican > project through the next cycle. Thanks Douglas for stepping up for leading Barbican. This is unfortunate that nomination did not get in for membership things but you do not need to worry. As next step, TC will appoint the PTL for the leaderless project and your name is in the list for Barbican. We will update you once TC finalizes the appointments. -gmann > > - - Douglas > -----BEGIN PGP SIGNATURE----- > > iQIzBAEBCAAdFiEEwcapj5oGTj2zd3XogB6WFOq/OrcFAl6E90AACgkQgB6WFOq/ > OrcyWA//TwI1FpLzEyviyoU0VWSaUDnsxXeOWMZ6FUUjck55XjS6orrWquAQwDoe > UCUXItt0IuVKT8g2hcqyJBM0asIryKi1zfNSpLjtapiD+jatfjF9EpgHHZBlpUYw > XQJ5fabBC8UOpmB3FB/28aSTm9McmUc6dMNONlAJCTKPpc662X/ZNXLaDxYjQnxN > +fjhsIGcF0W+uSKjYjkxxg0Ey/V1u+uq46Hbf1/a+9aRQ/IHleFJgeEDCuWRW7xm > cL5KS4+If2YM8JyUczqK+H/xfUPSkwTY6JE3VbCG6Wr0y9QD3Kq+cCqYAgoeiRaN > ZYKSWLIyJKXwFc4usSiaEYSZwFKaFGRig3Y7VV5o/90KeNRrwerLn6L1yLryQLCF > VTjdx2Rz83Ba8j6OpRdTAkg5+EqaS3iJXRIq9BAdq1FcI+D2oZlS2SSq98YeiPcP > 5Wa7q81JI0q7DuGGSgJe9vFXcpRu5htkX5fNUBgd+rcRCsdUAG/vD5GEsXWwuC0w > G9LDdR91VHsK7Y14xXVcde6QNoKj4XAu/a7oKcloYlxg1qgdCurKErrDx/e6TQp5 > jnYQ2pSj6/S4MxaPKLvRmUmcj3qm4LoWUU2LXU9PBuOAvwxiDSa3gzqhkKusKHRb > 9UL03Z1jILKZUFX88KMp3T3kGehRQYPwKJKPMkCNCKLyZ5VTZNs= > =WmMw > -----END PGP SIGNATURE----- > > > From smooney at redhat.com Wed Apr 1 21:25:47 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 01 Apr 2020 22:25:47 +0100 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? 
In-Reply-To: References: Message-ID: <49b4357ae50f9eb3df8d24baae969eb3022b1a09.camel@redhat.com> On Wed, 2020-04-01 at 14:32 -0400, Artom Lifshitz wrote: > On Wed, Apr 1, 2020 at 1:06 PM Dmitry Tantsur wrote: > > Finally, I believe that our independence (can I call it “Irexit” please?) given the political overtones and baggage related to Irexit https://en.wikipedia.org/wiki/Irish_Freedom_Party I would avoid that like the plague, both in real life and in this proposal, unless you want to advocate for an alt-right, populist, xenophobic political movement that will piss off most Irish people. > > No, but I will accept "smelting" or "oxidation" ;) > > From hao7.liu at midea.com Wed Apr 1 06:48:34 2020 From: hao7.liu at midea.com (hao7.liu at midea.com) Date: Wed, 1 Apr 2020 14:48:34 +0800 Subject: =?GB2312?B?ob5vY3Rhdmlhob9GYWlsZWQgdG8gbG9hZCBDQSBDZXJ0aWZpY2F0ZSAvZXRjL29jdGF2aWEvY2VydHMvc2VydmVyX2NhLmNlcnQucGVt?= Message-ID: <2020040114483393995311@midea.com> OS version:CentOS7.6 openstack version:Train When I deployed my OpenStack with Octavia and created a LB, the worker reported these error logs: 2020-04-01 14:40:41.842 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.865 164881 INFO octavia.controller.worker.v1.tasks.database_tasks [-] Created Amphora in DB with id 191958e3-2577-4a8a-a1ff-b8f048056b72 2020-04-01 14:40:41.869 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'SUCCESS' from state 'RUNNING' with result '191958e3-2577-4a8a-a1ff-b8f048056b72' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:183 2020-04-01
14:40:41.874 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. 2020-04-01 14:40:41.946 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'FAILURE' from state 'RUNNING' 13 predecessors (most recent first): Atom 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': u'191958e3-2577-4a8a-a1ff-b8f048056b72'} |__Flow 'BACKUP-octavia-create-amp-for-lb-subflow' |__Atom 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'flavor': {u'loadbalancer_topology': u'ACTIVE_STANDBY'}, 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} |__Flow 'BACKUP-octavia-get-amphora-for-lb-subflow' |__Flow 'BACKUP-octavia-plug-net-subflow' |__Flow 'octavia-create-loadbalancer-flow' |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': None} |__Atom 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 
'requires': {'vip': , 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } |__Atom 'reload-lb-before-allocate-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } |__Atom 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} |__Flow 'octavia-create-loadbalancer-flow': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) 2020-04-01 14:40:41.946 164881 ERROR 
octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker .format(CONF.certificates.ca_certificate) 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker 2020-04-01 14:40:41.969 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.972 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:41.975 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.975 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting create amphora in DB 
for amp id 191958e3-2577-4a8a-a1ff-b8f048056b72 2020-04-01 14:40:41.992 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:41.995 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.996 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the load balancer d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. 2020-04-01 14:40:42.005 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.006 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'FAILURE' from state 'RUNNING': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker .format(CONF.certificates.ca_certificate) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker 2020-04-01 14:40:42.013 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.014 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.017 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.018 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting create amphora in DB for amp id 1ecbc19a-2644-4f3a-a9fc-bf6ace1655e3 2020-04-01 14:40:42.034 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.038 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.038 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the load balancer 
d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. 2020-04-01 14:40:42.047 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.052 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.054 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.059 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.062 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.066 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.068 164881 WARNING 
octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.073 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.073 164881 WARNING octavia.controller.worker.v1.tasks.network_tasks [-] Deallocating vip 172.20.250.184 2020-04-01 14:40:42.199 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security group b2430a12-2c07-4ca9-a381-3af79f702715 from port a52f2cfa-765b-4664-b4ad-c2a11dd870de 2020-04-01 14:40:43.189 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security group b2430a12-2c07-4ca9-a381-3af79f702715 2020-04-01 14:40:43.994 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:43.999 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:44.002 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:44.007 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 
'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:44.017 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:44.028 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (aab75b85-a8f1-486f-99e8-5c81e21aa3f3) transitioned into state 'REVERTED' from state 'RUNNING' 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server [-] Exception during message handling: WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 274, in dispatch 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/queue/v1/endpoints.py", line 45, in create_load_balancer 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server self.worker.create_load_balancer(load_balancer_id, flavor) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 292, in wrapped_f 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.call(f, *args, **kw) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 358, in call 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server do = self.iter(retry_state=retry_state) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 319, in iter 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return fut.result() 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 422, in result 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.__get_result() 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 361, in call 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = fn(*args, **kwargs) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/controller_worker.py", line 344, in create_load_balancer 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server create_lb_tf.run() 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 247, in run 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server for _state in self.run_iter(timeout=timeout): 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server failure.Failure.reraise_if_any(er_failures) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 341, in reraise_if_any 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server raise exc.WrappedFailure(failures) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server hao7.liu at midea.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hao7.liu at midea.com Wed Apr 1 06:51:34 2020 From: hao7.liu at midea.com (hao7.liu at midea.com) Date: Wed, 1 Apr 2020 14:51:34 +0800 Subject: Re: [octavia] Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem References: <2020040114483393995311@midea.com> Message-ID: <2020040114513476847214@midea.com> OS version: CentOS 7.6, Ubuntu 18.04 OpenStack version: Train When I create an amphora image, there are always errors, such as: ./diskimage-create.sh -i ubuntu -d bionic -r 123456 -s 5 -o amphora-x64-haproxy-ubuntu-1804-0401 2020-04-01 05:34:13.189 | Ignoring actdiag: markers 'python_version == "3.7"' don't match your environment 2020-04-01 05:34:13.192 | Ignoring sphinxcontrib-applehelp: markers 'python_version == "3.6"' don't match your environment 2020-04-01 05:34:13.194 | Ignoring sphinxcontrib-applehelp: markers 'python_version == "3.7"' don't match your environment 2020-04-01 05:34:13.197 | Ignoring scikit-learn: markers 'python_version == "3.6"' don't match your environment 2020-04-01 05:34:13.199 | Ignoring scikit-learn: markers 'python_version == "3.7"' don't match your environment 2020-04-01 05:34:13.203 | Processing /opt/amphora-agent 2020-04-01 05:34:14.758 | ERROR: Package 'octavia' requires a different Python: 2.7.5 not in '>=3.6' 2020-04-01 05:34:14.823 | Unmount /tmp/dib_build.EjDukNCf/mnt/tmp/yum 2020-04-01 05:34:14.867 | Unmount /tmp/dib_build.EjDukNCf/mnt/tmp/pip 2020-04-01 05:34:14.887 | Unmount /tmp/dib_build.EjDukNCf/mnt/tmp/in_target.d 2020-04-01 05:34:14.915 | Unmount /tmp/dib_build.EjDukNCf/mnt/sys 2020-04-01 05:34:14.935 | Unmount /tmp/dib_build.EjDukNCf/mnt/proc 2020-04-01 05:34:14.963 | Unmount /tmp/dib_build.EjDukNCf/mnt/dev/pts 2020-04-01 05:34:14.991 | Unmount /tmp/dib_build.EjDukNCf/mnt/dev 2020-04-01 05:34:15.721 | INFO 
diskimage_builder.block_device.blockdevice [-] State already cleaned - no way to do anything here root at ip-172-31-53-210:/apps/octavia/diskimage-create# 2020-04-01 05:47:47.398 | Successfully uninstalled pip-9.0.1 2020-04-01 05:47:48.444 | Successfully installed pip-20.0.2 setuptools-44.1.0 wheel-0.34.2 2020-04-01 05:47:51.309 | Collecting virtualenv 2020-04-01 05:47:51.966 | Downloading virtualenv-20.0.15-py2.py3-none-any.whl (4.6 MB) 2020-04-01 05:49:29.260 | ERROR: Exception: 2020-04-01 05:49:29.261 | Traceback (most recent call last): 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 425, in _error_catcher 2020-04-01 05:49:29.261 | yield 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 507, in read 2020-04-01 05:49:29.261 | data = self._fp.read(amt) if not fp_closed else b"" 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/cachecontrol/filewrapper.py", line 62, in read 2020-04-01 05:49:29.261 | data = self.__fp.read(amt) 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/http/client.py", line 459, in read 2020-04-01 05:49:29.261 | n = self.readinto(b) 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/http/client.py", line 503, in readinto 2020-04-01 05:49:29.261 | n = self.fp.readinto(b) 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/socket.py", line 586, in readinto 2020-04-01 05:49:29.261 | return self._sock.recv_into(b) 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/ssl.py", line 1012, in recv_into 2020-04-01 05:49:29.261 | return self.read(nbytes, buffer) 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/ssl.py", line 874, in read 2020-04-01 05:49:29.261 | return self._sslobj.read(len, buffer) 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/ssl.py", line 631, in read 2020-04-01 05:49:29.261 | v = self._sslobj.read(len, buffer) 2020-04-01 05:49:29.261 | socket.timeout: The read operation 
timed out 2020-04-01 05:49:29.261 | 2020-04-01 05:49:29.261 | During handling of the above exception, another exception occurred: 2020-04-01 05:49:29.261 | 2020-04-01 05:49:29.261 | Traceback (most recent call last): 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 186, in _main 2020-04-01 05:49:29.261 | status = self.run(options, args) 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 331, in run 2020-04-01 05:49:29.262 | resolver.resolve(requirement_set) 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 177, in resolve 2020-04-01 05:49:29.262 | discovered_reqs.extend(self._resolve_one(requirement_set, req)) 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 333, in _resolve_one 2020-04-01 05:49:29.262 | abstract_dist = self._get_abstract_dist_for(req_to_install) 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 282, in _get_abstract_dist_for 2020-04-01 05:49:29.262 | abstract_dist = self.preparer.prepare_linked_requirement(req) 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 482, in prepare_linked_requirement 2020-04-01 05:49:29.262 | hashes=hashes, 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 287, in unpack_url 2020-04-01 05:49:29.262 | hashes=hashes, 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 159, in unpack_http_url 2020-04-01 05:49:29.262 | link, downloader, temp_dir.path, hashes 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 303, in _download_http_url 2020-04-01 05:49:29.262 | for chunk in 
download.chunks: 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/ui.py", line 160, in iter 2020-04-01 05:49:29.262 | for x in it: 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/network/utils.py", line 39, in response_chunks 2020-04-01 05:49:29.262 | decode_content=False, 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 564, in stream 2020-04-01 05:49:29.262 | data = self.read(amt=amt, decode_content=decode_content) 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 529, in read 2020-04-01 05:49:29.262 | raise IncompleteRead(self._fp_bytes_read, self.length_remaining) 2020-04-01 05:49:29.262 | File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__ 2020-04-01 05:49:29.262 | self.gen.throw(type, value, traceback) 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 430, in _error_catcher 2020-04-01 05:49:29.262 | raise ReadTimeoutError(self._pool, None, "Read timed out.") 2020-04-01 05:49:29.262 | pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. 2020-04-01 05:49:29.424 | Unmount /tmp/dib_build.QPWJUysz/mnt/var/cache/apt/archives 2020-04-01 05:49:29.459 | Unmount /tmp/dib_build.QPWJUysz/mnt/tmp/pip 2020-04-01 05:49:29.490 | Unmount /tmp/dib_build.QPWJUysz/mnt/tmp/in_target.d 2020-04-01 05:49:29.522 | Unmount /tmp/dib_build.QPWJUysz/mnt/sys 2020-04-01 05:49:29.546 | Unmount /tmp/dib_build.QPWJUysz/mnt/proc 2020-04-01 05:49:29.573 | Unmount /tmp/dib_build.QPWJUysz/mnt/dev/pts 2020-04-01 05:49:29.607 | Unmount /tmp/dib_build.QPWJUysz/mnt/dev 2020-04-01 05:49:30.562 | INFO diskimage_builder.block_device.blockdevice [-] State already cleaned - no way to do anything here and many other errors. 
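[Editor's note] The two build failures above point in different directions. The "Package 'octavia' requires a different Python: 2.7.5 not in '>=3.6'" error means the Train amphora-agent is being installed with Python 2 inside the image chroot, while the pip ReadTimeoutError is a transient network failure reaching files.pythonhosted.org. As a hedged sketch (both variables are standard diskimage-builder/pip environment knobs, but whether they fully resolve these particular builds is an assumption), one might retry the build like this:

```shell
# Sketch, not a verified fix: force Python 3 inside the image chroot and give
# pip a longer network timeout, then re-run ./diskimage-create.sh as before.
export DIB_PYTHON_VERSION=3      # diskimage-builder: use python3 in the chroot
export PIP_DEFAULT_TIMEOUT=120   # pip: raise the 15-second default socket timeout
```

Since the pip timeout is transient, simply re-running the build (or pointing pip at a nearby mirror via PIP_INDEX_URL) may also be enough.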
hao7.liu at midea.com From: hao7.liu at midea.com Sent: 2020-04-01 14:48 To: openstack-discuss at lists.openstack.org Subject: [octavia] Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem OS version: CentOS 7.6 OpenStack version: Train When I deployed OpenStack with Octavia and created a load balancer, the worker reported these error logs: 2020-04-01 14:40:41.842 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.865 164881 INFO octavia.controller.worker.v1.tasks.database_tasks [-] Created Amphora in DB with id 191958e3-2577-4a8a-a1ff-b8f048056b72 2020-04-01 14:40:41.869 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'SUCCESS' from state 'RUNNING' with result '191958e3-2577-4a8a-a1ff-b8f048056b72' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:183 2020-04-01 14:40:41.874 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. 
2020-04-01 14:40:41.946 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'FAILURE' from state 'RUNNING' 13 predecessors (most recent first): Atom 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': u'191958e3-2577-4a8a-a1ff-b8f048056b72'} |__Flow 'BACKUP-octavia-create-amp-for-lb-subflow' |__Atom 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'flavor': {u'loadbalancer_topology': u'ACTIVE_STANDBY'}, 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} |__Flow 'BACKUP-octavia-get-amphora-for-lb-subflow' |__Flow 'BACKUP-octavia-plug-net-subflow' |__Flow 'octavia-create-loadbalancer-flow' |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': None} |__Atom 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': , 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } |__Atom 'reload-lb-before-allocate-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } |__Atom 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 
'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} |__Flow 'octavia-create-loadbalancer-flow': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker 
.format(CONF.certificates.ca_certificate) 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker 2020-04-01 14:40:41.969 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.972 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:41.975 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.975 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting create amphora in DB for amp id 191958e3-2577-4a8a-a1ff-b8f048056b72 2020-04-01 14:40:41.992 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:41.995 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into 
state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:41.996 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the load balancer d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. 2020-04-01 14:40:42.005 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.006 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'FAILURE' from state 'RUNNING': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker .format(CONF.certificates.ca_certificate) 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker 2020-04-01 14:40:42.013 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.014 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.017 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.018 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting create amphora in DB for amp id 1ecbc19a-2644-4f3a-a9fc-bf6ace1655e3 2020-04-01 14:40:42.034 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.038 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.038 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the load balancer 
d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. 2020-04-01 14:40:42.047 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.052 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.054 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.059 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.062 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.066 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.068 164881 WARNING 
octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:42.073 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:42.073 164881 WARNING octavia.controller.worker.v1.tasks.network_tasks [-] Deallocating vip 172.20.250.184 2020-04-01 14:40:42.199 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security group b2430a12-2c07-4ca9-a381-3af79f702715 from port a52f2cfa-765b-4664-b4ad-c2a11dd870de 2020-04-01 14:40:43.189 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security group b2430a12-2c07-4ca9-a381-3af79f702715 2020-04-01 14:40:43.994 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:43.999 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:44.002 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:44.007 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 
'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 2020-04-01 14:40:44.017 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' 2020-04-01 14:40:44.028 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (aab75b85-a8f1-486f-99e8-5c81e21aa3f3) transitioned into state 'REVERTED' from state 'RUNNING' 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server [-] Exception during message handling: WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 274, in dispatch 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/queue/v1/endpoints.py", line 45, in create_load_balancer 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server self.worker.create_load_balancer(load_balancer_id, flavor) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 292, in wrapped_f 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.call(f, *args, **kw) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 358, in call 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server do = self.iter(retry_state=retry_state) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 319, in iter 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return fut.result() 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 422, in result 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.__get_result() 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 361, in call 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = fn(*args, **kwargs) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/controller_worker.py", line 344, in create_load_balancer 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server create_lb_tf.run() 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 247, in run 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server for _state in self.run_iter(timeout=timeout): 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server failure.Failure.reraise_if_any(er_failures) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 341, in reraise_if_any 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server raise exc.WrappedFailure(failures) 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server hao7.liu at midea.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Apr 1 23:10:54 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 1 Apr 2020 23:10:54 +0000 Subject: [tripleo] meeting notes from the last two weeks In-Reply-To: References: Message-ID: <20200401231053.djhidfvnoouh7vbg@yuggoth.org> On 2020-03-31 08:34:37 -0600 (-0600), Wesley Hayutin wrote: > I need to see why the openstack meeting bot is not in the #tripleo > channel. A record was not automatically kept.. so here is the raw > txt from the last two meetings.. I'll get the bot fixed asap. [...] The bot's been there: http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2020-03-31.log.html#t2020-03-31T13:55:11 Looks like it never saw you issue a "#startmeeting tripleo" to initiate the meeting. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sam47priya at gmail.com Thu Apr 2 00:49:08 2020 From: sam47priya at gmail.com (Sam P) Date: Thu, 2 Apr 2020 09:49:08 +0900 Subject: [all][TC][PTL][election] Nominations Close & Campaigning Begins In-Reply-To: <17137719904.b4501dca64597.1492619151253275881@ghanshyammann.com> References: <171332c2258.e47af13f24342.3352020126460654343@ghanshyammann.com> <80b0c748-6cde-1e2e-da33-c6fa16eed5d8@redhat.com> <17137719904.b4501dca64597.1492619151253275881@ghanshyammann.com> Message-ID: Hi All, Really sorry, I missed the PTL nomination deadline. I can lead Masakari for the next cycle. --- Regards, Sampath On Thu, Apr 2, 2020 at 5:36 AM Ghanshyam Mann wrote: > > ---- On Wed, 01 Apr 2020 15:19:13 -0500 Douglas Mendizabal wrote ---- > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA256 > > > > On 3/31/20 7:37 PM, Ghanshyam Mann wrote: > > [...] 
> > > Just curious about Barbican PTL nomination[1]. Douglas is returning > > > PTL and email id the same as what we have currently in the > > > governance page[2]. Also saw Ussuri cycle nomination also with the > > > same email id[3]. > > > > > > did he disabled/changed his OSF profile or some automatic disable > > > happened? Though running as PTL even not voting in elections should > > > be considered as an active member of the foundation. > > > > It appears that my OSF membership was automatically disabled, as I did > > not expect my nomination to be rejected. > > > > In retrospect, I should have submitted my nomination a bit earlier > > because the deadline has now passed, and I am not sure how to proceed. > > > > I've fixed my membership status with the OSF to once again be a > > member, and I do still want to continue to shepherd the Barbican > > project through the next cycle. > > Thanks Douglas for stepping up for leading Barbican. This is unfortunate that nomination > did not get in for membership things but you do not need to worry. > > As next step, TC will appoint the PTL for the leaderless project and your name is > in the list for Barbican. We will update you once TC finalizes the appointments. 
> > -gmann > > > > > > - - Douglas > > -----BEGIN PGP SIGNATURE----- > > > > iQIzBAEBCAAdFiEEwcapj5oGTj2zd3XogB6WFOq/OrcFAl6E90AACgkQgB6WFOq/ > > OrcyWA//TwI1FpLzEyviyoU0VWSaUDnsxXeOWMZ6FUUjck55XjS6orrWquAQwDoe > > UCUXItt0IuVKT8g2hcqyJBM0asIryKi1zfNSpLjtapiD+jatfjF9EpgHHZBlpUYw > > XQJ5fabBC8UOpmB3FB/28aSTm9McmUc6dMNONlAJCTKPpc662X/ZNXLaDxYjQnxN > > +fjhsIGcF0W+uSKjYjkxxg0Ey/V1u+uq46Hbf1/a+9aRQ/IHleFJgeEDCuWRW7xm > > cL5KS4+If2YM8JyUczqK+H/xfUPSkwTY6JE3VbCG6Wr0y9QD3Kq+cCqYAgoeiRaN > > ZYKSWLIyJKXwFc4usSiaEYSZwFKaFGRig3Y7VV5o/90KeNRrwerLn6L1yLryQLCF > > VTjdx2Rz83Ba8j6OpRdTAkg5+EqaS3iJXRIq9BAdq1FcI+D2oZlS2SSq98YeiPcP > > 5Wa7q81JI0q7DuGGSgJe9vFXcpRu5htkX5fNUBgd+rcRCsdUAG/vD5GEsXWwuC0w > > G9LDdR91VHsK7Y14xXVcde6QNoKj4XAu/a7oKcloYlxg1qgdCurKErrDx/e6TQp5 > > jnYQ2pSj6/S4MxaPKLvRmUmcj3qm4LoWUU2LXU9PBuOAvwxiDSa3gzqhkKusKHRb > > 9UL03Z1jILKZUFX88KMp3T3kGehRQYPwKJKPMkCNCKLyZ5VTZNs= > > =WmMw > > -----END PGP SIGNATURE----- > > > > > > > From flux.adam at gmail.com Thu Apr 2 01:38:59 2020 From: flux.adam at gmail.com (Adam Harwell) Date: Thu, 2 Apr 2020 10:38:59 +0900 Subject: [election][octavia][ptl] Announcing my PTL candidacy for Octavia In-Reply-To: References: Message-ID: Four more years! Four more years! On Wed, Apr 1, 2020, 09:09 Michael Johnson wrote: > My fellow OpenStack community, > > I would like to nominate myself for Octavia PTL during the Victoria > cycle. I have previously led the team during the Stein release, and I > would like to continue helping our team provide network load balancing > services for OpenStack. > > Looking forward to Victoria I expect the team to finish some major new > features, such as the Jobboard transition and introducing HTTP/2 > support. > > Thank you for your support and your consideration for Victoria, > > Michael Johnson (johnsom) > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Thu Apr 2 01:54:07 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 1 Apr 2020 19:54:07 -0600 Subject: [tripleo] meeting notes from the last two weeks In-Reply-To: <20200401231053.djhidfvnoouh7vbg@yuggoth.org> References: <20200401231053.djhidfvnoouh7vbg@yuggoth.org> Message-ID: On Wed, Apr 1, 2020 at 5:11 PM Jeremy Stanley wrote: > On 2020-03-31 08:34:37 -0600 (-0600), Wesley Hayutin wrote: > > I need to see why the openstack meeting bot is not in the #tripleo > > channel. A record was not automatically kept.. so here is the raw > > txt from the last two meetings.. I'll get the bot fixed asap. > [...] > > The bot's been there: > > > http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2020-03-31.log.html#t2020-03-31T13:55:11 > > Looks like it never saw you issue a "#startmeeting tripleo" to > initiate the meeting. > -- > Jeremy Stanley > Thanks Jeremy! You are correct, after I checked my notes.. I have a slash in front of the pound e.g. /#startmeeting tripleo. Cut and paste fail in my template for the meeting. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Apr 2 02:21:21 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 01 Apr 2020 21:21:21 -0500 Subject: [all][TC][PTL][election] Nominations Close & Campaigning Begins In-Reply-To: References: <171332c2258.e47af13f24342.3352020126460654343@ghanshyammann.com> <80b0c748-6cde-1e2e-da33-c6fa16eed5d8@redhat.com> <17137719904.b4501dca64597.1492619151253275881@ghanshyammann.com> Message-ID: <17138b161ed.ef308e6167388.9167423726710369919@ghanshyammann.com> ---- On Wed, 01 Apr 2020 19:49:08 -0500 Sam P wrote ---- > Hi All, > > Really sorry, I missed the deadline for PTL Nomination periods. > I can lead the Masakari for next cycle. ACK. Thanks Sam for the response and for leading the next cycle as well.
-gmann > > --- Regards, > Sampath > > On Thu, Apr 2, 2020 at 5:36 AM Ghanshyam Mann wrote: > > > > ---- On Wed, 01 Apr 2020 15:19:13 -0500 Douglas Mendizabal wrote ---- > > > -----BEGIN PGP SIGNED MESSAGE----- > > > Hash: SHA256 > > > > > > On 3/31/20 7:37 PM, Ghanshyam Mann wrote: > > > [...] > > > > Just curious about Barbican PTL nomination[1]. Douglas is returning > > > > PTL and email id the same as what we have currently in the > > > > governance page[2]. Also saw Ussuri cycle nomination also with the > > > > same email id[3]. > > > > > > > > did he disabled/changed his OSF profile or some automatic disable > > > > happened? Though running as PTL even not voting in elections should > > > > be considered as an active member of the foundation. > > > > > > It appears that my OSF membership was automatically disabled, as I did > > > not expect my nomination to be rejected. > > > > > > In retrospect, I should have submitted my nomination a bit earlier > > > because the deadline has now passed, and I am not sure how to proceed. > > > > > > I've fixed my membership status with the OSF to once again be a > > > member, and I do still want to continue to shepherd the Barbican > > > project through the next cycle. > > > > Thanks Douglas for stepping up for leading Barbican. This is unfortunate that nomination > > did not get in for membership things but you do not need to worry. > > > > As next step, TC will appoint the PTL for the leaderless project and your name is > > in the list for Barbican. We will update you once TC finalizes the appointments. 
> > > > -gmann > > > > > > > > > > - - Douglas > > > -----BEGIN PGP SIGNATURE----- > > > > > > iQIzBAEBCAAdFiEEwcapj5oGTj2zd3XogB6WFOq/OrcFAl6E90AACgkQgB6WFOq/ > > > OrcyWA//TwI1FpLzEyviyoU0VWSaUDnsxXeOWMZ6FUUjck55XjS6orrWquAQwDoe > > > UCUXItt0IuVKT8g2hcqyJBM0asIryKi1zfNSpLjtapiD+jatfjF9EpgHHZBlpUYw > > > XQJ5fabBC8UOpmB3FB/28aSTm9McmUc6dMNONlAJCTKPpc662X/ZNXLaDxYjQnxN > > > +fjhsIGcF0W+uSKjYjkxxg0Ey/V1u+uq46Hbf1/a+9aRQ/IHleFJgeEDCuWRW7xm > > > cL5KS4+If2YM8JyUczqK+H/xfUPSkwTY6JE3VbCG6Wr0y9QD3Kq+cCqYAgoeiRaN > > > ZYKSWLIyJKXwFc4usSiaEYSZwFKaFGRig3Y7VV5o/90KeNRrwerLn6L1yLryQLCF > > > VTjdx2Rz83Ba8j6OpRdTAkg5+EqaS3iJXRIq9BAdq1FcI+D2oZlS2SSq98YeiPcP > > > 5Wa7q81JI0q7DuGGSgJe9vFXcpRu5htkX5fNUBgd+rcRCsdUAG/vD5GEsXWwuC0w > > > G9LDdR91VHsK7Y14xXVcde6QNoKj4XAu/a7oKcloYlxg1qgdCurKErrDx/e6TQp5 > > > jnYQ2pSj6/S4MxaPKLvRmUmcj3qm4LoWUU2LXU9PBuOAvwxiDSa3gzqhkKusKHRb > > > 9UL03Z1jILKZUFX88KMp3T3kGehRQYPwKJKPMkCNCKLyZ5VTZNs= > > > =WmMw > > > -----END PGP SIGNATURE----- > > > > > > > > > > > > > From gmann at ghanshyammann.com Thu Apr 2 02:49:14 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 01 Apr 2020 21:49:14 -0500 Subject: [goals][Drop Python 2.7 Support] Week R-6 Update Message-ID: <17138caea16.1062fb0e367449.7456167229285935355@ghanshyammann.com> Hello Everyone, Below is the progress on "Drop Python 2.7 Support" at end of R-6 week. Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule Highlights: ======== * This is very close to mark as complete. * We merged many of charm repo patches and few more projects. * Few ansible repo failing on centos7 and waiting for migration on centos8 * I tried to reach out to team for pending patches. Project wise status and need reviews: ============================ Phase-1 status: All the OpenStack services have dropped the python2.7. 
Phase-2 status:
* Pending Tempest plugins:
** cyborg-tempest-plugin: https://review.opendev.org/#/c/704076/
** kuryr-tempest-plugin: https://review.opendev.org/#/c/704072/
* Pending pythonclient:
** python-barbicanclient: https://review.opendev.org/#/c/699096/2
*** gate is already broken waiting for gate to be fixed.
** python-zaqarclient: https://review.opendev.org/#/c/692011/4
** python-tripleoclient: https://review.opendev.org/#/c/703344
* Few more repo patches need to merge:
** masakari-specs: https://review.opendev.org/#/c/698982/
** cyborg-specs: https://review.opendev.org/#/c/698824/
** nova-powervm: https://review.opendev.org/#/c/700683/
** paunch: https://review.opendev.org/#/c/703344/
* Started pushing the required updates on deployment projects.
** Completed or no updates required:
*** Openstack-Chef - not required
*** Packaging-Rpm - Done
*** Puppet Openstack - Done
*** Tripleo - except python client, all is done.
** In progress:
*** Openstack Charms - Most of them merged, few failing on func job. debugging.
*** Openstackansible - In-progress. centos7 jobs are failing on few projects.
** Waiting from projects team to know the status:
*** Openstack-Helm (Helm charts for OpenStack services)
* Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open

Phase-3 status: This is audit and requirement repo work which is not started yet. I will start the audit in parallel to finishing the pending things listed above.

How you can help:
==============
- Review the patches. Push the patches if I missed any repo.

-gmann

From skaplons at redhat.com Thu Apr 2 07:20:22 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 2 Apr 2020 09:20:22 +0200 Subject: Inline comments from Zuul In-Reply-To: <7FBF7A19-F0A4-41BC-832E-F9ADC5B4177C@inaugust.com> References: <7FBF7A19-F0A4-41BC-832E-F9ADC5B4177C@inaugust.com> Message-ID: <2D37BD91-FAFE-4FCD-B502-E9FED6338E41@redhat.com> Hi, Thx. For e.g.
pep8 job it looks super useful as we can now see pep8 issues directly commented in PS, like e.g. at https://review.opendev.org/#/c/716773/1/neutron/tests/fullstack/test_securitygroup.py :) > On 1 Apr 2020, at 17:51, Monty Taylor wrote: > > Hey everybody, > > Yesterday mnaser finished up a long-standing TODO list item we had of leveraging Zuul’s ability to leave inline comments on changes by parsing out things like linter output and dropping them on changes. This is now live. > > We’ve run in to a few gotchas (turns out there’s a lot of people doing a lot of different things) - all of which we’ve either fixed or have fixes in flight for. Notably there is a usage pattern of running pylint but only partially caring about the results, which turns inline comments from pylint output into complete noise. We’ve turned off the inline comments in openstack-tox-pylint: > > https://review.opendev.org/#/c/716599/ > https://review.opendev.org/#/c/716600/ > > although if your project uses it and would like inline comments from it, they can be re-enabled in your project. Similarly the same flag can be used to disable inline comments if your project decides they don't want them for some reason. > > Work is under way to add parsing for Sphinx output and golangci-lint output. > > If anybody runs in to any issues - like the results are too noisy or something is breaking where it shouldn’t, please let us know and we’ll get on it as quickly as possible. > > Thanks! > Monty > — Slawek Kaplonski Senior software engineer Red Hat From aj at suse.com Thu Apr 2 07:34:08 2020 From: aj at suse.com (Andreas Jaeger) Date: Thu, 2 Apr 2020 09:34:08 +0200 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> Message-ID: <5cf322eb-4d8e-ab69-a1dd-3836b280a2cd@suse.com> On 01.04.20 15:32, Thierry Carrez wrote: > [...]
> I'd like to make the following suggestion: > > - aggressively delete all non-openstack things from the openstack org, > keeping only official, active repositories Please double check your list, you missed all the repos listed in https://opendev.org/openstack/governance/src/branch/master/reference/sigs-repos.yaml#L22 - those should be treated as official as well, Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From dtantsur at redhat.com Thu Apr 2 07:51:19 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 2 Apr 2020 09:51:19 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: <49b4357ae50f9eb3df8d24baae969eb3022b1a09.camel@redhat.com> References: <49b4357ae50f9eb3df8d24baae969eb3022b1a09.camel@redhat.com> Message-ID: On Wed, Apr 1, 2020 at 11:25 PM Sean Mooney wrote: > On Wed, 2020-04-01 at 14:32 -0400, Artom Lifshitz wrote: > > On Wed, Apr 1, 2020 at 1:06 PM Dmitry Tantsur > wrote: > > > Finally, I believe that our independence (can I call it “Irexit” > please?) > given the political overtones and baggage related to Irexit > https://en.wikipedia.org/wiki/Irish_Freedom_Party > i would avoid that like the plague both in real life and in this proposal > unless you want to advocate for an alt-right, populist, xenophobic political > movement > that will piss off most irish people. > Ugh, please accept my apologies, I did not realize there was something under this name already. No offense intended for anyone. Dmitry > > > > No, but I will accept "smelting" or "oxidation" ;) > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thierry at openstack.org Thu Apr 2 08:24:22 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 2 Apr 2020 10:24:22 +0200 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <5cf322eb-4d8e-ab69-a1dd-3836b280a2cd@suse.com> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <5cf322eb-4d8e-ab69-a1dd-3836b280a2cd@suse.com> Message-ID: <2447dc82-c2dc-048c-072f-eaa3109b5ba4@openstack.org> Andreas Jaeger wrote: > On 01.04.20 15:32, Thierry Carrez wrote: >> [...] >> I'd like to make the following suggestion: >> >> - aggressively delete all non-openstack things from the openstack org, >> keeping only official, active repositories > > Please double check your list, you missed all the repos listed in > https://opendev.org/openstack/governance/src/branch/master/reference/sigs-repos.yaml#L22 > - those should be treated as official as well, Good catch! I'll fix it before activating... the goal of the early analysis was to expose the variety of cases. -- Thierry Carrez (ttx) From radoslaw.piliszek at gmail.com Thu Apr 2 08:29:24 2020 From: radoslaw.piliszek at gmail.com (Radosław Piliszek) Date: Thu, 2 Apr 2020 10:29:24 +0200 Subject: [all][dev][qa] cirros 0.5.1 In-Reply-To: References: Message-ID: It's April already so I think a progress report might be beneficial to not lose the topic. The change switching DevStack to CirrOS 0.5.1 [1] is still paused. This is mostly because I would really prefer to hit CI early in the Victoria cycle rather than late in Ussuri. Please let me know if you beg to differ. ;-) That said, thank you Dmitry, Iury and Maciej for taking the time to test it on Ironic [2] and Neutron [3] sides. Re Neutron - glad to see Xenial go away with this. Finally! :-) Neutron looks overall happy with 0.5.1. Ironic in general too, but metalsmith's job seems to have issues with available storage space. As for merged progress, the CI images now have CirrOS 0.5.1 cached for x86_64 [4] and aarch64 [5].
Also, we already switched Kolla Ansible to test against CirrOS 0.5.1 (for better aarch64 support). Even if we do the switch in Victoria, it's best to test early, so please let me know if you have questions and/or need help with switching your CI. [1] https://review.opendev.org/711492 [2] https://review.opendev.org/712728 [3] https://review.opendev.org/711425 [4] https://review.opendev.org/714475 [5] https://review.opendev.org/714481 PS: I just noticed I wrote 'cinder' in one place in the original message. Please forgive me the confusion. -yoctozepto On Thu, 12 Mar 2020 at 12:55 Radosław Piliszek wrote: > > Hiya Folks, > > as you might have noticed, cinder 0.5.1 has been released. > This build seems to be passing the current devstack gate. [1] > Big thanks to hrw and smoser for letting cirros 0.5.1 happen (and > cirros having bright future yet again). > Also thanks to mordred for cleaning up SDK testing to pass. :-) > > I think it would be nice to merge this in Ussuri still, preferably before April. > On the other hand, we all know that devstack gate is not super > comprehensive and hence I would like to ask teams whose tests depend > on interaction with guest OS to test their gates on this patch (or > help me help you do that). > I deliberately marked it W-1 to avoid merging too early. > > Let the discussion begin. :-) > > [1] https://review.opendev.org/711492 > > -yoctozepto From dtantsur at redhat.com Thu Apr 2 08:40:06 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 2 Apr 2020 10:40:06 +0200 Subject: [all][dev][qa] cirros 0.5.1 In-Reply-To: References: Message-ID: On Thu, Apr 2, 2020 at 10:31 AM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > It's April already so I think a progress report might be beneficial to > not lose the topic. > > The change switching DevStack to CirrOS 0.5.1 [1] is still paused. > This is mostly because I would really prefer to hit CI early in the > Victoria cycle rather than late in Ussuri.
> Please let me know if you beg to differ. ;-) > That said, thank you Dmitry, Iury and Maciej for taking the time to > test it on Ironic [2] and Neutron [3] sides. > Re Neutron - glad to see Xenial go away with this. Finally! :-) > > Neutron looks overall happy with 0.5.1. > Ironic in general too, but metalsmith's job seems to have issues with > available storage space. > I hope that https://review.opendev.org/716894 will solve this issue. > > As for merged progress, the CI images now have CirrOS 0.5.1 cached for > x86_64 [4] and aarch64 [5]. > Also, we already switched Kolla Ansible to test against CirrOS 0.5.1 > (for better aarch64 support). > > Even if we do the switch in Victoria, it's best to test early, so > please let me know if you have questions and/or need help with > switching your CI. > > [1] https://review.opendev.org/711492 > [2] https://review.opendev.org/712728 > [3] https://review.opendev.org/711425 > [4] https://review.opendev.org/714475 > [5] https://review.opendev.org/714481 > > PS: I just noticed I wrote 'cinder' in one place in the original > message. Please forgive me the confusion. > > -yoctozepto > > On Thu, 12 Mar 2020 at 12:55 Radosław Piliszek > wrote: > > > Hiya Folks, > > > > as you might have noticed, cinder 0.5.1 has been released. > > This build seems to be passing the current devstack gate. [1] > > Big thanks to hrw and smoser for letting cirros 0.5.1 happen (and > > cirros having bright future yet again). > > Also thanks to mordred for cleaning up SDK testing to pass. :-) > > > > I think it would be nice to merge this in Ussuri still, preferably > before April. > > On the other hand, we all know that devstack gate is not super > > comprehensive and hence I would like to ask teams whose tests depend > > on interaction with guest OS to test their gates on this patch (or > > help me help you do that). > > I deliberately marked it W-1 to avoid merging too early. > > > > Let the discussion begin.
:-) > > > > [1] https://review.opendev.org/711492 > > > > -yoctozepto > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Apr 2 09:00:33 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 2 Apr 2020 11:00:33 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: Dmitry Tantsur wrote: > [...] > TL;DR I’m proposing to make Ironic a top-level project under opendev.org > and the OpenStack Foundation, following the same > model as Zuul. I don’t propose severing current relationships with other > OpenStack projects, nor making substantial changes in how the project is > operated. > [...] I agree that in the current situation, you have to explain that OpenStack is a collection of tools and it's OK if you install just one, while a lot of people assume "OpenStack" means "lots of services". So I think I understand the problem you're trying to solve. The main risk I see here is the slippery slope. Other OpenStack projects can be run standalone (Cinder, Swift...), so under that same reason could also move to be their own top-level project. At which point we end up with a collection of projects under the OSF, very much like we currently have a collection of projects under the TC. The only difference will be that the integration between the various components (currently the role of the TC) will no longer be under the responsibility of anyone. I'm not really sure that with less integration, we'll be collectively better off as a result. So I think we need to dig deeper in the strategy for Ironic, the adoption obstacles we are trying to remove, and discuss the options. Being set up as a separate top-level project is one option. But I feel like changing governance (i.e. 
no longer be under the TC) has a lot less impact to change the perception than, say, creating a separate ironic product website that explains Ironic completely outside of the OpenStack context (which we could do without changing governance). The release management issue is, I think, mostly a red herring. As swift has proven in the past, you can totally have a product strategy with the cycle-with-intermediary model. You can even maintain your own extra branches (think stable/2.0) if you want. The only extra limitations AFAIK in the integrated release are that (1) you also maintain stable branches at openstack common release points (like stable/train), and (2) that the openstack release management team has an opportunity to check the release numbering for semver sanity. -- Thierry Carrez (ttx) From dtantsur at redhat.com Thu Apr 2 09:38:10 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 2 Apr 2020 11:38:10 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: Hi, On Thu, Apr 2, 2020 at 11:05 AM Thierry Carrez wrote: > Dmitry Tantsur wrote: > > [...] > > TL;DR I’m proposing to make Ironic a top-level project under opendev.org > > and the OpenStack Foundation, following the same > > model as Zuul. I don’t propose severing current relationships with other > > OpenStack projects, nor making substantial changes in how the project is > > operated. > > [...] > > I agree that in the current situation, you have to explain that > OpenStack is a collection of tools and it's OK if you install just one, > while a lot of people assume "OpenStack" means "lots of services". So I > think I understand the problem you're trying to solve. > Exactly. Going a bit philosophical, I wonder if we actually want people to see OpenStack as a collection of tools. 
In my view, it may improve perception of OpenStack if we start more clearly defining what it is and is not, including promoting OpenStack as a set of services to build an IaaS solution. Including drawing borders with weird citizens like Ironic. I may be horribly wrong here, of course. > > The main risk I see here is the slippery slope. Other OpenStack projects > can be run standalone (Cinder, Swift...), so under that same reason > could also move to be their own top-level project. I'm surprised that Swift hasn't done that, to be honest :) > At which point we end > up with a collection of projects under the OSF, very much like we > currently have a collection of projects under the TC. The only > difference will be that the integration between the various components > (currently the role of the TC) will no longer be under the > responsibility of anyone. I'm not really sure that with less > integration, we'll be collectively better off as a result. > Cinder specifically is a much more inherent part of OpenStack than Ironic is. Supporting volumes in a cloud is much more natural than supporting bare metal machines. The fact that it can be used standalone is secondary here (and, honestly, I wish more services could be used standalone - I even see a case for standalone Nova!). I don't quite agree that the integration right now is the responsibility of the TC. For me it comes from the customer demand and will die out if such demand diminishes, with or without TC. In Ironic we don't integrate with Nova, Neutron, Glance, Cinder, Swift and Keystone because TC makes us. We do it because our users want us to. We will continue while they do. If at some point, say, Swift consumers stop caring about the rest of OpenStack, I don't think any TC will or any official status will hold the integration together for too long. > > So I think we need to dig deeper in the strategy for Ironic, the > adoption obstacles we are trying to remove, and discuss the options. > Please count me in! 
> Being set up as a separate top-level project is one option. But I feel > like changing governance (i.e. no longer be under the TC) has a lot less > impact to change the perception than, say, creating a separate ironic > product website that explains Ironic completely outside of the OpenStack > context (which we could do without changing governance). > This is a great idea, and I agree that we should do it regardless of this decision. However, if we do this and also the release management changes you're talking about below, what will be left of our participation in OpenStack? Dmitry > > The release management issue is, I think, mostly a red herring. As swift > has proven in the past, you can totally have a product strategy with the > cycle-with-intermediary model. You can even maintain your own extra > branches (think stable/2.0) if you want. The only extra limitations > AFAIK in the integrated release are that (1) you also maintain stable > branches at openstack common release points (like stable/train), and (2) > that the openstack release management team has an opportunity to check > the release numbering for semver sanity. > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Thu Apr 2 09:40:25 2020 From: elfosardo at gmail.com (Riccardo Pittau) Date: Thu, 2 Apr 2020 11:40:25 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: TL;DR thanks Dmitry, I'm glad to see this concretize after all the talks :) On Wed, Apr 1, 2020 at 7:08 PM Dmitry Tantsur wrote: > > Hi everyone! > > This topic should not come as a huge surprise for many, since it has been raised numerous times in the past years. I have a feeling that the end of Ussuri, now that we’ve re-acquired our PTL and are on the verge of selecting new TC members, may be a good time to propose it for a formal discussion.
> > TL;DR I’m proposing to make Ironic a top-level project under opendev.org and the OpenStack Foundation, following the same model as Zuul. I don’t propose severing current relationships with other OpenStack projects, nor making substantial changes in how the project is operated. > > (And no, it’s not an April 1st joke) > > Background > ========= > > Ironic was born as a Nova plugin, but has grown way beyond this single case since then. The first commit in Bifrost dates to February 2015. During these 5 years (hey, we forgot to celebrate!) it has developed into a commonly used data center management tool - and still based on standalone Ironic! The Metal3 project uses standalone Ironic as its hardware management backend. We haven’t been “just” a component of OpenStack for a while now, I think it’s time to officially recognize it. I can definitely confirm, being deeply involved in Metal3 and Openshift on the hardware management side, having ironic as an "independent" product from the top will probably simplify the road ahead, and maybe save some of what is left of my sanity :) > > And before you ask: in no case do I suggest scaling down our invaluable integration with Nova. We’re observing a solid growth of deployments using Ironic as an addition to their OpenStack clouds, and this proposal doesn’t try to devalue this use case. The intention is to accept publicly and officially that it’s not the only or the main use case, but one of the main use cases. I don’t think it comes as a surprise to the Nova team. > > Okay, so why? > =========== > > The first and the main reason is the ambiguity in our positioning. We do see prospective operators and users confused by the perception that Ironic is a part of OpenStack, especially when it comes to the standalone use case. “But what if I don’t need OpenStack” is a question that I hear in most of these conversations.
Changing from “a part of OpenStack” to “a FOSS tool that can integrate with OpenStack” is critical for our project to keep growing into new fields. To me personally it feels in line with how OpenDev itself is reaching into new areas beyond just the traditional IaaS. The next OpenDev even will apparently have a bare metal management track, so why not a top-level project for it? Since I joined the ironic team this is probably *the* recurrent question from former colleagues and operators in general: how can I manage my infrastructure with ironic standalone? The need of integration with Openstack comes after that, it's definitely a plus and a convenience, but not the first thought. > > Another reason is release cadence. We have repeatedly expressed the desire to release Ironic and its sub-projects more often than we do now. Granted, *technically* we can release often even now. We can even abandon the current release model and switch to “independent”, but it doesn’t entirely solve the issue at hand. First, we don’t want to lose the notion of stable branches. One way or another, we need to support consumers with bug fix releases. Second, to become truly “independent” we’ll need to remove any tight coupling with any projects that do integrated releases. Which is, essentially, what I’m proposing here. > > Finally, I believe that our independence (can I call it “Irexit” please?) has already happened in reality, we just shy away from recognizing it. Look: > 1. All integration points with other OpenStack projects are optional. > 2. We can work fully standalone and even provide a project for that. > 3. Many new features (RAID, BIOS to name a few) are exposed to standalone users much earlier than to those going through Nova. Again can definitely confirm the latest two points, we saw a lot of features being included in Metal3 as "finished product", from hardware management perspective, than generally available in a current Openstack distribution. > 4. 
We even have our own mini-scheduler (although its intention is not and has not been to replace the Placement service). > 5. We make releases more often than the “core” OpenStack projects (but see above). > > What we will do > ============ > > This proposal involves in the short term: > * Creating a new git namespace: opendev.org/ironic > * Creating a new website (name TBD, bare metal puns are welcome). > * If we can have https://docs.opendev.org/ironic/, it may be just fine though. > * Keeping the same governance model, only adjusted to the necessary extent. > * Keeping the same policies (reviews, CI, stable). > * Defining a new release cadence and stable branch support schedule. > > In the long term we will consider (not necessary do): > * Reshaping our CI to rely less on devstack and grenade (only use them for jobs involving OpenStack). > * Reducing or removing reliance on oslo libraries. > * Stopping using rabbitmq for messaging (we’ve already made it optional). > * Integrating with non-OpenStack services (kubernetes?) and providing lighter alternatives (think, built-in authentication). This is unavoidable and much needed! > > What we will NOT do > ================ > > At least this proposal does NOT involve: > * Stopping maintaining the Ironic virt driver in Nova. > * Stopping running voting CI jobs with OpenStack services. > * Dropping optional integration with OpenStack services. > * Leaving OpenDev completely. > > What do you think? > =============== > > Please let us know what you think about this proposal. Any hints on how to proceed with it, in case we reach a consensus, are also welcome. 
> > Cheers, > Dmitry From mark at stackhpc.com Thu Apr 2 10:01:44 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 2 Apr 2020 11:01:44 +0100 Subject: [kolla][uc] Kolla SIG In-Reply-To: References: Message-ID: On Tue, 24 Mar 2020 at 16:24, Mark Goddard wrote: > > On Fri, 6 Mar 2020 at 16:39, Mark Goddard wrote: > > > > Hi, > > > > I'd like to propose the creation of a Special Interest Group (SIG) [0] > > for Kolla. > > > > The main aim of the group would be to improve communication between > > operators and developers. > > > > The SIG would host regular virtual project onboarding, project update, > > and feedback sessions, ideally via video calls. This should remove the > > necessity of being physically present at Open Infra Summits for > > participation in the project. I like to think of this as the fifth > > open [1] (name TBD). > > > > I propose that in addition to the above sessions, the SIG should host > > more informal discussions, probably every 2-4 weeks with the aim of > > meeting other community members, discussing successes and failures, > > sharing knowledge, and generally getting to know each other a bit > > better. These could be via video call, IRC, or a mix. > > > > Finally - I propose that we build and maintain a list of Kolla users, > > including details of their environments and areas of interest and > > expertise. Of course this would be opt-in. This would help us to > > connect with subject matter experts and interested parties to help > > answer queries in IRC, or when making changes to a specific area. > > > > This is all up for discussion, and subject to sufficient interest. If > > you are interested, please add your name and email address to the > > Etherpad [2], along with any comments, thoughts or suggestions. 
> > > > [0] https://governance.openstack.org/sigs/ > > [1] https://www.openstack.org/four-opens/ > > [2] https://etherpad.openstack.org/p/kolla-sig > > > > Cheers, > > Mark > > We have had a good number of people express interest in this group. > Based on feedback it will not be a SIG, but an informal group > affiliated with the Kolla project. > > Let's try to schedule a slot for some meetings. I've created a Doodle > poll [1] with hour-long slots between 12:00 UTC and 17:00 UTC, for > next week and the week after. I suggest we start with meetings every > two weeks while we build momentum, but we should probably drop to once > per month eventually. > > [1] https://doodle.com/poll/9g7czxmdngd5zz4t The winning slot is 15:00 UTC - 16:00 UTC starting on 9th April, repeating every two weeks. I propose we use Meet for the video call, but please get in touch if this does not work for you. Meeting link: https://meet.google.com/hph-pynx-vsy I look forward to speaking to you. Mark > > Cheers, > Mark From jean-philippe at evrard.me Thu Apr 2 10:14:13 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 02 Apr 2020 12:14:13 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> Hello, My opinion (a little bit conservative, for once) is inline... On Wed, 2020-04-01 at 19:03 +0200, Dmitry Tantsur wrote: > The first and the main reason is the ambiguity in our positioning. We > do see prospective operators and users confused by the perception > that Ironic is a part of OpenStack, especially when it comes to the > standalone use case. > “But what if I don’t need OpenStack” is a question that I hear in > most of these conversations. Changing from “a part of OpenStack” to > “a FOSS tool that can integrate with OpenStack” is critical for our > project to keep growing into new fields. 
I think the name "Ironic" associated with the term "bare metal" maps to OpenStack. You are right. It's in the heads of the people who were involved, in the search engines, etc. However, I am not 100% sure it maps with "needs Nova" or "cannot be standalone". If you consider OpenStack taints this "standalone" part of Ironic, do you think that putting it as a top project of the **OpenStack Foundation ** will help? I don't think so. People will still see it's an OpenStack _related_ technology, with a history of being an OpenStack project, which is now a standalone project inside the OpenStack foundation. At best, it confuses people who are not really aware of all these details. If you really want to leave OpenStack for standalone purposes, I would encourage trying a more radical approach, like gnocchi, which completely split out. Sadly it didn't turn out fine for them, but at least the difference was clear. It was out. Again, I don't think that being in the "openstack" namespace prevents standalone. In fact that's what I would like to see more with our current openstack projects: Be relevant as standalone. Swift and Ironic are very good examples. They make sense in OpenStack, as standalone services, that happen to work well together in an ecosystem for IaaS. The whole point of the OpenStack name, for me, is the coordinated testing and releasing. "Those independent bits are tested together!". Can't we work on the branding, naming, and message without the move? Why the hassle of moving things? Does that really bring value to your team? Before forging my final (personal) opinion, I would like more information than just gut feelings. > To me personally it feels in line with how OpenDev itself is reaching > into new areas beyond just the traditional IaaS. The next OpenDev > event will apparently have a bare metal management track, so why not a > top-level project for it? I like the OpenDev idea, but I cannot shake this off the OSF.
In other words, I am not sure how far it goes "beyond the traditional IaaS" in my mind, because the OSF (and OpenDev) have missions after all. Would you care to explain it for me? How does that map with the ironic reflection? I am confused, Ironic _for me_ is an infrastructure tool... > Another reason is release cadence. We have repeatedly expressed the > desire to release Ironic and its sub-projects more often than we do > now. Granted, *technically* we can release often even now. We can > even abandon the current release model and switch to “independent”, > but it doesn’t entirely solve the issue at hand. > First, we don’t want to lose the notion of stable branches. Independent doesn't mean _not branching_. In the old times, before cycle-trailing existed, OpenStack-Ansible was independent, we manually branched, and used the release tooling to do official releases. It's pretty close to what you are looking for, IMO. > One way or another, we need to support consumers with bug fix > releases. Second, to become truly “independent” we’ll need to remove > any tight coupling with any projects that do integrated releases. > Which is, essentially, what I’m proposing here. That's for me the simplest change you can do. When you move to independent, you still benefit from the release tooling and the help of your current reviewer set. Please note that I am still writing an idea in our ideas framework, proposing a change in the release cycles (that conversation again! but with a little twist), which I guess you might be interested in. Regards, Jean-Philippe Evrard -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jean-philippe at evrard.me Thu Apr 2 10:19:31 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 02 Apr 2020 12:19:31 +0200 Subject: [all] Curating the openstack org on GitHub In-Reply-To: <2c5b28fb-7fef-a547-d00d-703a0803d10c@openstack.org> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> <52258b9f-05b9-9a6a-b394-89627840940a@ham.ie> <0c993967-f1c7-4a11-cccb-9e7257f50f93@nemebean.com> <28424a20-21ed-6608-1adf-38ceaaeb9100@gmx.com> <2c5b28fb-7fef-a547-d00d-703a0803d10c@openstack.org> Message-ID: <1b3095a544c07eff311bd8d70ea7a57babc1a975.camel@evrard.me> On Wed, 2020-04-01 at 18:36 +0200, Thierry Carrez wrote: > Sean McGinnis wrote: > > [...] We would just: > > > > 1. Stop mirroring retired repos from our gitea to GitHub > > 2. Manually do a repo transfer from openstack/ to openstack-attic/ > > on > > GitHub > > 3. Set the openstack-attic/repo to Archived > > > > Then anyone that still tries to clone from the old openstack > > namespaces > > GitHub location will still get the files, but from that point on we > > don't need to worry about mirroring or ongoing maintenance. It's > > basically just a historical record. And if someone wants to for > > whatever > > reason, they can fork that repo and do whatever they want to do > > with it. > > Assuming Matt is right and org-to-org transfer does not generate > manual > confirmation if they have a shared owner, that's definitely a > possibility. > > I prefer openstack-archive because openstack-attic actually exists > on > opendev so having it in two places but containing different things > is > likely to be confusing. I'd rather transfer openstack-attic to > openstack-archive as well. > LGTM. Thanks for the work on the cleanup. It's welcomed, or should I dare to say ... even necessary! 
Regards, Jean-Philippe Evrard From jean-philippe at evrard.me Thu Apr 2 10:26:44 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 02 Apr 2020 12:26:44 +0200 Subject: [tc][election] Simple question for the TC candidates Message-ID: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Hello, I read your nominations, and as usual I will ask what you will _technically_ do during your mandate: what do you _actively_ want to change in OpenStack? This can be a change in governance, in the projects, in the current structure... it can be really anything. I am just hoping to see practical OpenStack-wide changes here. It doesn't need to be a fully refined idea, but something that can be worked upon. Thanks for your time. Regards, Jean-Philippe Evrard From etingof at gmail.com Thu Apr 2 10:36:35 2020 From: etingof at gmail.com (Ilya Etingof) Date: Thu, 2 Apr 2020 12:36:35 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: > So I think we need to dig deeper in the strategy for Ironic, the > adoption obstacles we are trying to remove, and discuss the options. > Being set up as a separate top-level project is one option. But I feel > like changing governance (i.e. no longer be under the TC) has a lot less > impact to change the perception than, say, creating a separate ironic > product website that explains Ironic completely outside of the OpenStack > context (which we could do without changing governance). An Ironic product website might be highly beneficial anyway. Looking at the functionally similar tool [1], those guys have an impressive web presence. I can imagine prospective enterprise users would be gravitating toward MaaS just because of that. 1. https://maas.io From dtantsur at redhat.com Thu Apr 2 10:38:02 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 2 Apr 2020 12:38:02 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project?
In-Reply-To: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> Message-ID: Hi, On Thu, Apr 2, 2020 at 12:17 PM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Hello, > > My opinion (a little bit conservative, for once) is inline... > > On Wed, 2020-04-01 at 19:03 +0200, Dmitry Tantsur wrote: > > > The first and the main reason is the ambiguity in our positioning. We do > see prospective operators and users confused by the perception that Ironic > is a part of OpenStack, especially when it comes to the standalone use > case. > “But what if I don’t need OpenStack” is a question that I hear in most of > these conversations. Changing from “a part of OpenStack” to “a FOSS tool > that can integrate with OpenStack” is critical for our project to keep > growing into new fields. > > > I think the name "Ironic" associated with the term "bare metal" maps to > OpenStack. You are right. > It's in the heads of the people who were involved, in the search engines, > etc. > > However, I am not 100% sure it maps with "needs Nova" or "cannot be > standalone". > People do get surprised when they hear that Ironic can be used standalone, yes. "A part of OpenStack" maps to "installed inside OpenStack" rather than "is developed on the OpenStack platform". > > If you consider OpenStack taints this "standalone" part of Ironic, do you > think that putting it as a top project of the **OpenStack Foundation ** > will help? I don't think so. People will still see it's an OpenStack > _related_ technology, with a history of being an OpenStack project, which > is now a standalone project inside the OpenStack foundation. At best, it > confuses people which are not really aware of all these details. > Time to rename the Foundation? :) How is the same problem solved for Zuul or Kata Containers? 
> > If you really want to leave OpenStack for standalone purposes, I would > encourage trying a more radical approach, like gnocchi, which completely > split out. Sadly it didn't turn out fine for them, but at least the > difference was clear. It was out. > As an aside, I don't think gnocchi fell victim to its split, but rather shared the overall fate of the Telemetry project. I also think your suggestion goes against the idea of OpenDev, which to me is to embrace a vast collection of Open Infrastructure projects, related to OpenStack or not. If you say that anything going to OpenDev will be seen as an OpenStack project, it defeats the purpose of OpenDev. > > Again, I don't think that being in the "openstack" namespace prevents > standalone. > In fact that's what I would like to see more with our current openstack > projects: Be relevant as standalone. Swift and Ironic are very good > examples. They make sense in OpenStack, as standalone services, that happen > to work well together in an ecosystem for IaaS. The whole point of the > OpenStack name, for me, is the coordinated testing and releasing. "Those > independent bits are tested together!". > > Can't we work on the branding, naming, and message without the move? Why > the hassle of moving things? Does that really bring value to your team? > Before forging my final (personal) opinion, I would like more information > than just gut feelings. > It's not "just gut feelings", it's the summary of numerous conversations that Julia and I have to hold when advocating for Ironic outside of the Nova context. We do this "Ironic does not imply OpenStack" explanation over and over, often enough unsuccessfully. And then some people just don't like OpenStack... Now, I do agree that there are steps that can be taken before we go all nuclear. We can definitely work on our own website, we can reduce reliance on oslo, start releasing independently, and so on.
I'm wondering what will be left of our participation in OpenStack in the end. Thierry has suggested the role of the TC in ensuring integration. I'm of the opinion that if all stakeholders in Ironic lose interest in Ironic as part of OpenStack, no power will prevent the integration from slowly falling apart. > > To me personally it feels in line with how OpenDev itself is reaching into > new areas beyond just the traditional IaaS. The next OpenDev event will > apparently have a bare metal management track, so why not a top-level > project for it? > > > I like the OpenDev idea, but I cannot shake this off the OSF. In other > words, I am not sure how far it goes "beyond the traditional IaaS" in my > mind, because the OSF (and OpenDev) have missions after all. Would you care > to explain it for me? How does that map with the ironic reflection? I am > confused, Ironic _for me_ is an infrastructure tool... > I'm referring to a very narrow sense of Nova+company. I.e. a solution for providing virtual machines booting from virtual volumes on virtual networks. Ironic does not clearly fit there, nor does, say, Zuul. > > Another reason is release cadence. We have repeatedly expressed the desire > to release Ironic and its sub-projects more often than we do now. Granted, > *technically* we can release often even now. We can even abandon the > current release model and switch to “independent”, but it doesn’t entirely > solve the issue at hand. > First, we don’t want to lose the notion of stable branches. > > > Independent doesn't mean _not branching_. In the old times, before > cycle-trailing existed, OpenStack-Ansible was independent, we manually > branched, and used the release tooling to do official releases. It's pretty > close to what you are looking for, IMO. > Good point, noted. > > One way or another, we need to support consumers with bug fix releases.
> Second, to become truly “independent” we’ll need to remove any tight > coupling with any projects that do integrated releases. Which is, > essentially, what I’m proposing here. > > > That's for me the simplest change you can do. When you move to > independent, you still benefit from the release tooling and the help of your > current reviewer set. > > Please note that I am still writing an idea in our ideas framework, > proposing a change in the release cycles (that conversation again! but with > a little twist), which I guess you might be interested in. > Please let me know when it's ready, I really am. Dmitry > > Regards, > Jean-Philippe Evrard > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Apr 2 11:19:55 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 02 Apr 2020 12:19:55 +0100 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <49b4357ae50f9eb3df8d24baae969eb3022b1a09.camel@redhat.com> Message-ID: <0e9206fe94470a34266a55cb46c5b42120b4e477.camel@redhat.com> On Thu, 2020-04-02 at 09:51 +0200, Dmitry Tantsur wrote: > On Wed, Apr 1, 2020 at 11:25 PM Sean Mooney wrote: > > > On Wed, 2020-04-01 at 14:32 -0400, Artom Lifshitz wrote: > > > On Wed, Apr 1, 2020 at 1:06 PM Dmitry Tantsur > > > > wrote: > > > > Finally, I believe that our independence (can I call it “Irexit” > > > > please?) > > Given the political overtones and baggage related to Irexit > > https://en.wikipedia.org/wiki/Irish_Freedom_Party > > I would avoid that like the plague, both in real life and in this proposal, > > unless you want to advocate for an alt-right, populist, xenophobic political > > movement > > that will piss off most Irish people. > > > > Ugh, please accept my apologies, I did not realize there was something > under this name already. > > No offense intended for anyone.
No offense taken, I just didn't want to see that name come back to bite you in case you were not aware of the other context it's used in. > > Dmitry > > > > > > No, but I will accept "smelting" or "oxidation" ;) > > > > > > > > > > From smooney at redhat.com Thu Apr 2 11:40:45 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 02 Apr 2020 12:40:45 +0100 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: <0cc620adf7685f977fd18324ab1cc77e6c15b4f9.camel@redhat.com> On Thu, 2020-04-02 at 11:40 +0200, Riccardo Pittau wrote: > TL;DR thanks Dmitry, I'm glad to see this concretize after all the talks :) > > On Wed, Apr 1, 2020 at 7:08 PM Dmitry Tantsur wrote: > > > > Hi everyone! > > > > This topic should not come as a huge surprise for many, since it has been raised numerous times in the past years. I > > have a feeling that the end of Ussuri, now that we’ve re-acquired our PTL and are on the verge of selecting new TC > > members, may be a good time to propose it for a formal discussion. > > > > TL;DR I’m proposing to make Ironic a top-level project under opendev.org and the OpenStack Foundation, following the > > same model as Zuul. I don’t propose severing current relationships with other OpenStack projects, nor making > > substantial changes in how the project is operated. > > > > (And no, it’s not an April 1st joke) > > > > Background > > ========= > > > > Ironic was born as a Nova plugin, but has grown way beyond this single case since then. The first commit in Bifrost > > dates to February 2015. During these 5 years (hey, we forgot to celebrate!) it has developed into a commonly used > > data center management tool - and still based on standalone Ironic! The Metal3 project uses standalone Ironic as its > > hardware management backend. We haven’t been “just” a component of OpenStack for a while now, I think it’s time to > > officially recognize it.
> > I can definitely confirm, being deeply involved in Metal3 and OpenShift > on the hardware management side, having ironic as an "independent" > product from the top will probably simplify the road ahead, and maybe > save some of what is left of my sanity :) Given that Ironic can already change its release cycle to independent and release as often as it likes, and given it is already deployable standalone with Bifrost, I don't think it will actually make any difference to OpenShift integration if it's a separate top-level project. Also, we generally don't use the term product upstream. Products are sold and have support implications; projects are collaborations of like-minded folks trying to solve common problems and helping each other out. That said, I'm not against the idea of Ironic being used standalone. I have used Bifrost for personal use in the past, and love the Kayobe/Kolla-Ansible approach of using standalone Ironic via Bifrost to manage your hardware and then Kolla-Ansible to deploy your cloud. Doing the same thing with Metal3 is also great. I'm not sure a governance change will help, but I agree that the separate marketing page for Ironic and related services to better cater to that use case, which is mentioned elsewhere in this mail, is a good idea. > > > > > And before you ask: in no case do I suggest scaling down our invaluable integration with Nova. We’re observing a > > solid growth of deployments using Ironic as an addition to their OpenStack clouds, and this proposal doesn’t try to > > devalue this use case. The intention is to accept publicly and officially that it’s not the only or the main use > > case, but one of the main use cases. I don’t think it comes as a surprise to the Nova team. > > > > Okay, so why? > > =========== > > > > The first and the main reason is the ambiguity in our positioning.
We do see prospective operators and users > > confused by the perception that Ironic is a part of OpenStack, especially when it comes to the standalone use case. > > “But what if I don’t need OpenStack” is a question that I hear in most of these conversations. Changing from “a part > > of OpenStack” to “a FOSS tool that can integrate with OpenStack” is critical for our project to keep growing into > > new fields. To me personally it feels in line with how OpenDev itself is reaching into new areas beyond just the > > traditional IaaS. The next OpenDev event will apparently have a bare metal management track, so why not a top-level > > project for it? > > Since I joined the ironic team this is probably *the* recurrent question from > former colleagues and operators in general: > how can I manage my infrastructure with ironic standalone? > The need for integration with OpenStack comes after that, it's definitely > a plus and a convenience, but not the first thought. > > > > > > Another reason is release cadence. We have repeatedly expressed the desire to release Ironic and its sub-projects > > more often than we do now. Granted, *technically* we can release often even now. We can even abandon the current > > release model and switch to “independent”, but it doesn’t entirely solve the issue at hand. First, we don’t want to > > lose the notion of stable branches. One way or another, we need to support consumers with bug fix releases. Second, > > to become truly “independent” we’ll need to remove any tight coupling with any projects that do integrated releases. > > Which is, essentially, what I’m proposing here. > > > > Finally, I believe that our independence (can I call it “Irexit” please?) has already happened in reality, we just > > shy away from recognizing it. Look: > > 1. All integration points with other OpenStack projects are optional. > > 2. We can work fully standalone and even provide a project for that. > > 3.
Many new features (RAID, BIOS to name a few) are exposed to standalone users much earlier than to those going > > through Nova. > > Again, I can definitely confirm the latter two points: we saw a lot of features > included in Metal3 as a "finished product", from a hardware management > perspective, before they were generally available in a current OpenStack distribution. Given that we have not had a lot of specs or blueprints proposed to consume new Ironic features from Nova, is that because no one cares about those features in a Nova context, because no one has had time to work on it, or because the Nova team does not know about them? From my perspective as someone who works on Nova and takes an active interest in reviewing the nova-specs repo, I have not had much visibility into what features were added for Metal3 that are not supported by Nova. I suspect that a lot of them are just not communicated well. For example, I know many Red Hat folks have contributed to Metal3, but I don't think many of them have reached out to the compute DFG to tell us what new features they are adding, or to ask us to support them in Nova. So I'm not sure this is because it's harder to integrate in Nova so much as that no one has done the work. It is harder to integrate with Nova, but that is beside the point. > > > 4. We even have our own mini-scheduler (although its intention is not and has not been to replace the Placement > > service). > > 5. We make releases more often than the “core” OpenStack projects (but see above). > > > > What we will do > > ============ > > > > This proposal involves in the short term: > > * Creating a new git namespace: opendev.org/ironic > > * Creating a new website (name TBD, bare metal puns are welcome). > > * If we can have https://docs.opendev.org/ironic/, it may be just fine though. > > * Keeping the same governance model, only adjusted to the necessary extent. > > * Keeping the same policies (reviews, CI, stable). > > * Defining a new release cadence and stable branch support schedule.
> > > > In the long term we will consider (not necessarily do): > > * Reshaping our CI to rely less on devstack and grenade (only use them for jobs involving OpenStack). > > * Reducing or removing reliance on oslo libraries. > > * Stopping using rabbitmq for messaging (we’ve already made it optional). > > * Integrating with non-OpenStack services (kubernetes?) and providing lighter alternatives (think, built-in > > authentication). > > This is unavoidable and much needed! Given that Kuryr exists, I don't think there is anything preventing OpenStack services from integrating with non-OpenStack services today. Nova and other services have plugins for auth. Granted, the only production one is Keystone, but I don't see any reason that Ironic could not have a plugin for native Kubernetes auth or some other lightweight alternative, although since Kubernetes already supports integrating with Keystone, using Keystone auth is probably still the better approach. Of all the services in OpenStack, Keystone is probably one of the lightest-weight there is. Placement might be slightly lighter, but neither is exactly cumbersome to deploy standalone and consume with Ironic/Bifrost. > > > > > What we will NOT do > > ================ > > > > At least this proposal does NOT involve: > > * Stopping maintaining the Ironic virt driver in Nova. > > * Stopping running voting CI jobs with OpenStack services. > > * Dropping optional integration with OpenStack services. > > * Leaving OpenDev completely. > > > > What do you think? > > =============== > > > > Please let us know what you think about this proposal. Any hints on how to proceed with it, in case we reach a > > consensus, are also welcome. > > > > Cheers, > > Dmitry > > From smooney at redhat.com Thu Apr 2 11:55:06 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 02 Apr 2020 12:55:06 +0100 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project?
In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> Message-ID: <089b9ff7c5b61d84582147f3bd08a6930e9cf8cc.camel@redhat.com> On Thu, 2020-04-02 at 12:38 +0200, Dmitry Tantsur wrote: > > confused, Ironic _for me_ is an infrastructure tool... > > > > I'm referring to a very narrow sense of Nova+company. I.e. a solution for > providing virtual machines booting from virtual volumes on virtual > networks. Ironic does not clearly fit there, nor does, say, Zuul. Well, that is mischaracterizing OpenStack as a VM-centric IaaS platform. The very fact that nova-baremetal dates back to Folsom, and that we have many bare-metal-only clouds, means it is false to assume that the goal of OpenStack, and Nova specifically, is only to cater to VMs. We have had several container backends like nova-lxd and nova-docker over the years; I for one was always a fan of Nova libvirt/lxc, and we have other projects like Zun that integrate with Kata Containers. So I think Ironic fits perfectly in the view of a cloud. If it did not, then it would not be a good fit in Kubernetes/OpenShift either, which would suggest that Metal3 is not a useful thing. Given the interest Metal3 has been getting, I think we can agree that is not the case, and that bare metal clouds are a thing and a good synergy of technologies. OpenStack is a cloud platform, and as much as I like Nova, it is not what defines OpenStack; it is the collection of services and communities that we have built that does, and Nova and Ironic are both today part of that. That does not prevent any of the components from being used on their own if the service chooses to support that.
> > > > > > Another reason is release cadence From thierry at openstack.org Thu Apr 2 12:07:04 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 2 Apr 2020 14:07:04 +0200 Subject: [all][summary] Curating the openstack org on GitHub In-Reply-To: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> Message-ID: OK, so to summarize, the now-proposed plan is to: 0. Create an openstack-archive organization on GitHub before some org-squatter steals it [DONE] 1. Build a list of official openstack repositories, not forgetting to include SIG, board and UC-owned ones 2. Remove openstack namespace-wide mirroring, replace it with repo-specific jobs for official repositories 3. Move the (no-longer replicated) non-official repositories from the openstack org to the openstack-archive one, mark them "Archived" if they are not already 4. Move the repositories from openstack-attic and openstack-dev organizations on GitHub to openstack-archive as well I will start proceeding on those tasks unless there are new objections posted before end of week. Open questions: - We have a bunch of stale repositories under the openstack-infra organization on GitHub, should we also move them to openstack-archive ? - After the repo transfer, should we destroy the no-longer used openstack-attic and openstack-dev (and openstack-infra) organizations, or are they somehow needed for the automagic redirection to happen ? -- Thierry Carrez (ttx) From gr at ham.ie Thu Apr 2 12:39:13 2020 From: gr at ham.ie (Graham Hayes) Date: Thu, 2 Apr 2020 13:39:13 +0100 Subject: [all][summary] Curating the openstack org on GitHub In-Reply-To: References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> Message-ID: On 02/04/2020 13:07, Thierry Carrez wrote: > OK, so to summarize, the now-proposed plan is to: > > 0. Create an openstack-archive organization on GitHub before some > org-squatter steals it [DONE] > > 1.
Build a list of official openstack repositories, not forgetting to > include SIG, board and UC-owned ones > > 2. Remove openstack namespace-wide mirroring, replace it with > repo-specific jobs for official repositories > 3. Move the (no-longer replicated) non-official repositories from the > openstack org to the openstack-archive one, mark them "Archived" if they > are not already > > 4. Move the repositories from openstack-attic and openstack-dev > organizations on GitHub to openstack-archive as well > > I will start proceeding on those tasks unless there are new objections > posted before end of week. > > Open questions: > > - We have a bunch of stale repositories under the openstack-infra > organization on GitHub, should we also move them to openstack-archive ? > > - After the repo transfer, should we destroy the no-longer used > openstack-attic and openstack-dev (and openstack-infra) organizations, > or are they somehow needed for the automagic redirection to happen ? > If only to avoid someone squatting on the org, I think we should keep them. Not sure if we need them for the redirects, but I think it is safer to keep an empty org. - Graham From amotoki at gmail.com Thu Apr 2 12:41:03 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 2 Apr 2020 21:41:03 +0900 Subject: [OpenStack-I18n] [I18n][PTL][election] Nominations were over - no PTL candidacy In-Reply-To: <6175f5cf-94e6-045d-6291-1effd140d6d8@gmail.com> References: <6175f5cf-94e6-045d-6291-1effd140d6d8@gmail.com> Message-ID: On Wed, Apr 1, 2020 at 10:31 PM Ian Y. Choi wrote: > > Hello, > > I would like to share with I18n team members + OpenStackers who are > interested in I18n that there were no PTL nominations for upcoming > Victoria release cycle. 
> I am still serving as an appointed PTL since I cannot run for I18n PTL > as an election official, but I must honestly say that my contributions > have decreased due to my personal life > (my baby was born ~50 days ago - so busy but really happy as a father). First of all, Congrats! Enjoy your new life stage! > The TC started to discuss leaderless projects [1], and it seems that > becoming a SIG [2] is being discussed, as the Docs team > previously moved to the Technical Writing SIG. > I would like to kindly ask for opinions from the I18n team - unless there > are strong opinions otherwise, I will support becoming a SIG and try to > move forward with the next possible steps. I wonder what will change if we move to a SIG. That's the big question for me. We need to clarify the scope of the i18n team (or SIG). In my understanding, the roles the i18n team usually covers are: (1) to coordinate and help individual translation teams, including handling translation failures detected during translation imports (2) to maintain the translation platform and the scripts which import translations (3) to encourage translation support in individual projects (mainly in horizon and horizon plugins) Which of these will the SIG care about, and who will be responsible for what? (1) and (2) still need to be covered. Regarding (3), it is already handled by individual project teams like horizon and their bug tracking systems; the i18n team is not involved, so it can be out of scope for the SIG. Thanks, Akihiro > > > With many thanks, > > /Ian > > [1] https://etherpad.openstack.org/p/victoria-leaderless > [2] https://governance.openstack.org/sigs/ > > -------- Forwarded Message -------- > Subject: [all][TC][PTL][election] Nominations Close & Campaigning Begins > Date: Tue, 31 Mar 2020 16:46:39 -0700 > From: Kendall Nelson > To: OpenStack Discuss > > > > The PTL and TC Nomination periods are now over. The official candidate lists > are available on the election website[0][1].
> > --PTL Election Details-- > > There are 16 projects without candidates, so according to this > resolution[2], the TC will have to decide how the following projects > will proceed: Adjutant Barbican Cloudkitty Congress I18n Infrastructure > Loci Masakari Oslo Packaging_Rpm Placement Rally Swift Tacker Tricircle > Zaqar > > There are no projects with more than one candidate so we won't need to > hold any runoffs! Congratulations to our new and returning PTLs! [0] > > --TC Election Details-- > > The official candidate list is available on the election website[1]. > > There will be a TC election following the campaigning period, which > has now begun and runs until Apr 07, 2020 23:45 UTC, when polling will > begin. > > You are encouraged to ask questions to the candidates on the ML to help > inform your decisions when polling begins. > > Thank you, > > -Kendall (diablo_rojo) & the Election Officials > > [0] https://governance.openstack.org/election/#victoria-ptl-candidates > [1] https://governance.openstack.org/election/#victoria-tc-candidates > [2] > https://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html > > > _______________________________________________ > OpenStack-I18n mailing list > OpenStack-I18n at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n From fungi at yuggoth.org Thu Apr 2 13:05:22 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 2 Apr 2020 13:05:22 +0000 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> Message-ID: <20200402130522.v6gum4lxdmer2vvl@yuggoth.org> On 2020-04-02 12:38:02 +0200 (+0200), Dmitry Tantsur wrote: [...] > I also think your suggestion goes against the idea of OpenDev, > which to me is to embrace a vast collection of Open Infrastructure > projects, related to OpenStack or not.
If you say that anything > going to OpenDev will be seen as an OpenStack project, it defeats > the purpose of OpenDev. [...] I'll refrain from jumping into the rest of this, but please be aware that OpenDev's scope is not just limited to Open Infrastructure projects. The OpenDev team maintains a collaboration platform for any open source projects that are interested in collectively maintaining the development tooling on which their communities rely. OpenDev's collaboratory *is* a form of "infrastructure" but you don't need to be an infrastructure-focused project to be hosted within it. To quote the first two sentences from the current text on the https://opendev.org/ Web site: OpenDev is a space for collaborative Open Source software development. OpenDev’s mission is to provide project hosting, continuous integration tooling, and virtual collaboration spaces for Open Source software projects. It's also the case that not all the OSF's official "Open Infrastructure projects" are hosted in OpenDev's collaboratory (most are though, yes). -- Jeremy Stanley From justin.ferrieu at objectif-libre.com Thu Apr 2 10:14:54 2020 From: justin.ferrieu at objectif-libre.com (Justin Ferrieu) Date: Thu, 02 Apr 2020 12:14:54 +0200 Subject: [election][cloudkitty][ptl] Announcing my PTL candidacy for CloudKitty (Justin Ferrieu) Message-ID: Hi all, I would like to nominate myself to be the CloudKitty PTL for the Victoria cycle. The project is quite stable now and just needs overall maintenance and to meet the community goals. I feel ready to do that. Thank you a lot.
Best regards, ---- Justin Ferrieu Mail : justin.ferrieu at objectif-libre.com Consultant Cloud Objectif Libre URL : www.objectif-libre.com [1] Au service de votre Cloud Twitter : @objectiflibre Suivez les actualités OpenStack en français en vous abonnant à la Pause OpenStack http://olib.re/pause-openstack Links: ------ [1] http://www.objectif-libre.com From allison at openstack.org Thu Apr 2 14:42:11 2020 From: allison at openstack.org (Allison Price) Date: Thu, 2 Apr 2020 09:42:11 -0500 Subject: OpenStack Foundation Community Meetings In-Reply-To: <2B7FE5C9-910E-4B47-AA8D-B86E59E60039@openstack.org> References: <1633187D-8928-4671-B126-CB9CAC98377B@openstack.org> <2B7FE5C9-910E-4B47-AA8D-B86E59E60039@openstack.org> Message-ID: <881796BF-F6B6-4B18-8AAF-EE301B0C5D66@openstack.org> Hi everyone, Attached are the slides we will be covering at 10am CT / 1500 UTC today. We will be covering OSF project updates—including OpenStack—as well as updates on OSF events. See you soon! Allison > On Apr 1, 2020, at 9:54 AM, Allison Price wrote: > > Hi everyone - > > In case you didn’t see the post on the Foundation mailing list, we have a community meeting tomorrow where we will be talking about updates to OSF events as well as project updates and how you can get involved in the 10th year of OpenStack campaign. > > One meeting will be in English and one in Mandarin. > > Bring your questions and see you then! > > Cheers, > Allison > > > > >> Begin forwarded message: >> >> From: Allison Price > >> Subject: OpenStack Foundation Community Meetings >> Date: March 26, 2020 at 10:21:15 AM CDT >> To: foundation at lists.openstack.org >> >> Hi everyone, >> >> Next week we are going to have two community meetings to discuss the OpenStack 10th anniversary planning, current community projects, and an update on OSF events.
>> Please join if you would like to hear updates or if you have questions for the OpenStack Foundation team. >> >> Join us: >> Thursday, April 2 at 10am CT / 1500 UTC  >> Friday, April 3 in Mandarin at 10am China Standard Time >> >> If you are unable to join one of the above times, I will share a recording with the mailing list after the meetings. >> >> Cheers, >> Allison >> >> >> Allison Price >> OpenStack Foundation >> allison at openstack.org >> >> >> >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: April 2020 Community Update.pdf Type: application/pdf Size: 1501541 bytes Desc: not available URL: From christophe.sauthier at objectif-libre.com Thu Apr 2 15:27:34 2020 From: christophe.sauthier at objectif-libre.com (Christophe Sauthier) Date: Thu, 02 Apr 2020 11:27:34 -0400 Subject: [election][cloudkitty][ptl] Announcing my PTL candidacy for CloudKitty (Justin Ferrieu) In-Reply-To: References: Message-ID: Thanks for stepping up, Justin! I fully endorse you! Christophe ---- Christophe Sauthier Directeur Général Objectif Libre : Au service de votre Cloud +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com https://www.objectif-libre.com | @objectiflibre Recevez la Pause Cloud Et DevOps : https://olib.re/abo-pause On 2020-04-02 06:14, Justin Ferrieu wrote: > Hi all, > > I would like to nominate myself to be the CloudKitty PTL for the > Victoria cycle. > > The project is quite stable now and just needs overall maintenance > and to meet the community goals. > > I feel ready to do that. > > Thank you a lot.
> > Best regards, > > ---- > Justin Ferrieu Mail : > justin.ferrieu at objectif-libre.com > Consultant Cloud > Objectif Libre URL : www.objectif-libre.com > [1] > Au service de votre Cloud Twitter : @objectiflibre > > Suivez les actualités OpenStack en français en vous abonnant à la > Pause OpenStack > http://olib.re/pause-openstack > > > Links: > ------ > [1] http://www.objectif-libre.com From johnsomor at gmail.com Thu Apr 2 17:39:25 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 2 Apr 2020 10:39:25 -0700 Subject: Re: 【octavia】Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem In-Reply-To: <2020040114483393995311@midea.com> References: <2020040114483393995311@midea.com> Message-ID: Hi, The certificate error you are reporting is a configuration error. Please see the Octavia Certificate Configuration Guide, https://docs.openstack.org/octavia/latest/admin/guides/certificates.html for information on how to set up and configure your control plane certificates. If you walk step by step through the guide, you should find the configuration issue that you are facing.
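For reference, the kind of server CA material the worker is failing to load can be sketched roughly as below. This is a deliberately simplified, single-root sketch with illustrative paths, CN, validity, and a placeholder passphrase — it is not the guide's exact procedure (the guide builds a full dual intermediate CA layout plus client certificates), so follow the guide itself for a real deployment:

```shell
# Working directory for the server CA (illustrative; the resulting files
# would normally be installed under /etc/octavia/certs).
CERT_DIR=./certs
mkdir -p "$CERT_DIR"

# 1. Generate an encrypted private key for the server CA.
#    "not-secure-change-me" is a placeholder passphrase for this sketch.
openssl genrsa -aes256 -passout pass:not-secure-change-me \
    -out "$CERT_DIR/server_ca.key.pem" 4096

# 2. Self-sign a server CA certificate with that key; this is the file
#    octavia-worker loads via [certificates] ca_certificate.
openssl req -x509 -new -key "$CERT_DIR/server_ca.key.pem" \
    -passin pass:not-secure-change-me \
    -subj "/CN=OctaviaServerRootCA" -days 3650 \
    -out "$CERT_DIR/server_ca.cert.pem"
```

The matching octavia.conf `[certificates]` entries would then point at these files (`ca_certificate`, `ca_private_key`, `ca_private_key_passphrase`); the error in the log usually means one of those paths does not exist, is unreadable by the octavia user, or the passphrase does not match the key.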
Michael On Wed, Apr 1, 2020 at 4:00 PM hao7.liu at midea.com wrote: > > OS version:CentOS7.6 > openstack version:Train > when I deployed my OpenStack with Octavia and created an LB, the worker reported these error logs: > > 2020-04-01 14:40:41.842 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.865 164881 INFO octavia.controller.worker.v1.tasks.database_tasks [-] Created Amphora in DB with id 191958e3-2577-4a8a-a1ff-b8f048056b72 > 2020-04-01 14:40:41.869 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'SUCCESS' from state 'RUNNING' with result '191958e3-2577-4a8a-a1ff-b8f048056b72' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:183 > 2020-04-01 14:40:41.874 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. > 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config.
> 2020-04-01 14:40:41.946 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'FAILURE' from state 'RUNNING' > 13 predecessors (most recent first): > Atom 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': u'191958e3-2577-4a8a-a1ff-b8f048056b72'} > |__Flow 'BACKUP-octavia-create-amp-for-lb-subflow' > |__Atom 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'flavor': {u'loadbalancer_topology': u'ACTIVE_STANDBY'}, 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} > |__Flow 'BACKUP-octavia-get-amphora-for-lb-subflow' > |__Flow 'BACKUP-octavia-plug-net-subflow' > |__Flow 'octavia-create-loadbalancer-flow' > |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } > |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': None} > |__Atom 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': , 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } > |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } > |__Atom 'reload-lb-before-allocate-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } > |__Atom 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 
'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} > |__Flow 'octavia-create-loadbalancer-flow': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert > 2020-04-01 14:40:41.946 164881 ERROR 
octavia.controller.worker.v1.controller_worker .format(CONF.certificates.ca_certificate) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker > 2020-04-01 14:40:41.969 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.972 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:41.975 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.975 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting create amphora in DB for amp id 191958e3-2577-4a8a-a1ff-b8f048056b72 > 2020-04-01 14:40:41.992 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:41.995 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 
'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.996 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the load balancer d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. > 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. > 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. > 2020-04-01 14:40:42.005 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.006 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'FAILURE' from state 'RUNNING': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
> 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker .format(CONF.certificates.ca_certificate) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
> 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker > 2020-04-01 14:40:42.013 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.014 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.017 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.018 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting create amphora in DB for amp id 1ecbc19a-2644-4f3a-a9fc-bf6ace1655e3 > 2020-04-01 14:40:42.034 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.038 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.038 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the 
load balancer d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. > 2020-04-01 14:40:42.047 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.052 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.054 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.059 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.062 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.066 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.068 164881 
WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.073 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.073 164881 WARNING octavia.controller.worker.v1.tasks.network_tasks [-] Deallocating vip 172.20.250.184 > 2020-04-01 14:40:42.199 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security group b2430a12-2c07-4ca9-a381-3af79f702715 from port a52f2cfa-765b-4664-b4ad-c2a11dd870de > 2020-04-01 14:40:43.189 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security group b2430a12-2c07-4ca9-a381-3af79f702715 > 2020-04-01 14:40:43.994 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:43.999 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:44.002 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:44.007 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 
'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:44.017 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:44.028 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (aab75b85-a8f1-486f-99e8-5c81e21aa3f3) transitioned into state 'REVERTED' from state 'RUNNING' > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server [-] Exception during message handling: WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
> 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server Traceback (most recent call last): > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 274, in dispatch > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/queue/v1/endpoints.py", line 45, in create_load_balancer > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server self.worker.create_load_balancer(load_balancer_id, flavor) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 292, in wrapped_f > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.call(f, *args, **kw) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 358, in call > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server do = self.iter(retry_state=retry_state) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 319, in iter > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return fut.result() > 2020-04-01 14:40:44.029 164881 ERROR 
oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 422, in result > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.__get_result() > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 361, in call > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = fn(*args, **kwargs) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/controller_worker.py", line 344, in create_load_balancer > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server create_lb_tf.run() > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 247, in run > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server for _state in self.run_iter(timeout=timeout): > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server failure.Failure.reraise_if_any(er_failures) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 341, in reraise_if_any > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server raise exc.WrappedFailure(failures) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
> 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server > > > ________________________________ > hao7.liu at midea.com From johnsomor at gmail.com Thu Apr 2 17:45:11 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 2 Apr 2020 10:45:11 -0700 Subject: Re: 【octavia】Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem In-Reply-To: <2020040114513476847214@midea.com> References: <2020040114483393995311@midea.com> <2020040114513476847214@midea.com> Message-ID: Hi there. For the first issue, try building your image for the Train release by adding the "-g stable/train" parameter. For the other errors, those indicate that the Python packages failed to download. Maybe the internet connection dropped during the image build? Michael On Wed, Apr 1, 2020 at 4:01 PM hao7.liu at midea.com wrote: > > OS version: CentOS 7.6, Ubuntu 18.04 > OpenStack version: Train > When I create an amphora image, there are always many errors, such as: > > ./diskimage-create.sh -i ubuntu -d bionic -r 123456 -s 5 -o amphora-x64-haproxy-ubuntu-1804-0401 > > 2020-04-01 05:34:13.189 | Ignoring actdiag: markers 'python_version == "3.7"' don't match your environment > 2020-04-01 05:34:13.192 | Ignoring sphinxcontrib-applehelp: markers 'python_version == "3.6"' don't match your environment > 2020-04-01 05:34:13.194 | Ignoring sphinxcontrib-applehelp: markers 'python_version == "3.7"' don't match your environment > 2020-04-01 05:34:13.197 | Ignoring scikit-learn: markers 'python_version == "3.6"' don't match your environment > 2020-04-01 05:34:13.199 | Ignoring scikit-learn: markers 'python_version == "3.7"' don't match your environment > 2020-04-01 05:34:13.203 | Processing /opt/amphora-agent > 2020-04-01 05:34:14.758 | ERROR: Package 'octavia' requires a different Python: 2.7.5 not in '>=3.6' > 2020-04-01 05:34:14.823 | Unmount /tmp/dib_build.EjDukNCf/mnt/tmp/yum > 2020-04-01 05:34:14.867 | Unmount
/tmp/dib_build.EjDukNCf/mnt/tmp/pip > 2020-04-01 05:34:14.887 | Unmount /tmp/dib_build.EjDukNCf/mnt/tmp/in_target.d > 2020-04-01 05:34:14.915 | Unmount /tmp/dib_build.EjDukNCf/mnt/sys > 2020-04-01 05:34:14.935 | Unmount /tmp/dib_build.EjDukNCf/mnt/proc > 2020-04-01 05:34:14.963 | Unmount /tmp/dib_build.EjDukNCf/mnt/dev/pts > 2020-04-01 05:34:14.991 | Unmount /tmp/dib_build.EjDukNCf/mnt/dev > 2020-04-01 05:34:15.721 | INFO diskimage_builder.block_device.blockdevice [-] State already cleaned - no way to do anything here > root at ip-172-31-53-210:/apps/octavia/diskimage-create# > > > > 2020-04-01 05:47:47.398 | Successfully uninstalled pip-9.0.1 > 2020-04-01 05:47:48.444 | Successfully installed pip-20.0.2 setuptools-44.1.0 wheel-0.34.2 > 2020-04-01 05:47:51.309 | Collecting virtualenv > 2020-04-01 05:47:51.966 | Downloading virtualenv-20.0.15-py2.py3-none-any.whl (4.6 MB) > 2020-04-01 05:49:29.260 | ERROR: Exception: > 2020-04-01 05:49:29.261 | Traceback (most recent call last): > 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 425, in _error_catcher > 2020-04-01 05:49:29.261 | yield > 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 507, in read > 2020-04-01 05:49:29.261 | data = self._fp.read(amt) if not fp_closed else b"" > 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/cachecontrol/filewrapper.py", line 62, in read > 2020-04-01 05:49:29.261 | data = self.__fp.read(amt) > 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/http/client.py", line 459, in read > 2020-04-01 05:49:29.261 | n = self.readinto(b) > 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/http/client.py", line 503, in readinto > 2020-04-01 05:49:29.261 | n = self.fp.readinto(b) > 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/socket.py", line 586, in readinto > 2020-04-01 05:49:29.261 | return self._sock.recv_into(b) > 2020-04-01 
05:49:29.261 | File "/usr/lib/python3.6/ssl.py", line 1012, in recv_into > 2020-04-01 05:49:29.261 | return self.read(nbytes, buffer) > 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/ssl.py", line 874, in read > 2020-04-01 05:49:29.261 | return self._sslobj.read(len, buffer) > 2020-04-01 05:49:29.261 | File "/usr/lib/python3.6/ssl.py", line 631, in read > 2020-04-01 05:49:29.261 | v = self._sslobj.read(len, buffer) > 2020-04-01 05:49:29.261 | socket.timeout: The read operation timed out > 2020-04-01 05:49:29.261 | > 2020-04-01 05:49:29.261 | During handling of the above exception, another exception occurred: > 2020-04-01 05:49:29.261 | > 2020-04-01 05:49:29.261 | Traceback (most recent call last): > 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 186, in _main > 2020-04-01 05:49:29.261 | status = self.run(options, args) > 2020-04-01 05:49:29.261 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 331, in run > 2020-04-01 05:49:29.262 | resolver.resolve(requirement_set) > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 177, in resolve > 2020-04-01 05:49:29.262 | discovered_reqs.extend(self._resolve_one(requirement_set, req)) > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 333, in _resolve_one > 2020-04-01 05:49:29.262 | abstract_dist = self._get_abstract_dist_for(req_to_install) > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 282, in _get_abstract_dist_for > 2020-04-01 05:49:29.262 | abstract_dist = self.preparer.prepare_linked_requirement(req) > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 482, in prepare_linked_requirement > 2020-04-01 05:49:29.262 | hashes=hashes, > 2020-04-01 05:49:29.262 | File 
"/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 287, in unpack_url > 2020-04-01 05:49:29.262 | hashes=hashes, > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 159, in unpack_http_url > 2020-04-01 05:49:29.262 | link, downloader, temp_dir.path, hashes > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 303, in _download_http_url > 2020-04-01 05:49:29.262 | for chunk in download.chunks: > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/ui.py", line 160, in iter > 2020-04-01 05:49:29.262 | for x in it: > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_internal/network/utils.py", line 39, in response_chunks > 2020-04-01 05:49:29.262 | decode_content=False, > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 564, in stream > 2020-04-01 05:49:29.262 | data = self.read(amt=amt, decode_content=decode_content) > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 529, in read > 2020-04-01 05:49:29.262 | raise IncompleteRead(self._fp_bytes_read, self.length_remaining) > 2020-04-01 05:49:29.262 | File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__ > 2020-04-01 05:49:29.262 | self.gen.throw(type, value, traceback) > 2020-04-01 05:49:29.262 | File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 430, in _error_catcher > 2020-04-01 05:49:29.262 | raise ReadTimeoutError(self._pool, None, "Read timed out.") > 2020-04-01 05:49:29.262 | pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. 
> 2020-04-01 05:49:29.424 | Unmount /tmp/dib_build.QPWJUysz/mnt/var/cache/apt/archives > 2020-04-01 05:49:29.459 | Unmount /tmp/dib_build.QPWJUysz/mnt/tmp/pip > 2020-04-01 05:49:29.490 | Unmount /tmp/dib_build.QPWJUysz/mnt/tmp/in_target.d > 2020-04-01 05:49:29.522 | Unmount /tmp/dib_build.QPWJUysz/mnt/sys > 2020-04-01 05:49:29.546 | Unmount /tmp/dib_build.QPWJUysz/mnt/proc > 2020-04-01 05:49:29.573 | Unmount /tmp/dib_build.QPWJUysz/mnt/dev/pts > 2020-04-01 05:49:29.607 | Unmount /tmp/dib_build.QPWJUysz/mnt/dev > 2020-04-01 05:49:30.562 | INFO diskimage_builder.block_device.blockdevice [-] State already cleaned - no way to do anything here > > and many other errors. > > ________________________________ > hao7.liu at midea.com > > > From: hao7.liu at midea.com > Sent: 2020-04-01 14:48 > To: openstack-discuss at lists.openstack.org > Subject: 【octavia】Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem > OS version: CentOS 7.6 > OpenStack version: Train > When I deployed my OpenStack with Octavia and created a load balancer, the worker reported these error logs: > > 2020-04-01 14:40:41.842 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.865 164881 INFO octavia.controller.worker.v1.tasks.database_tasks [-] Created Amphora in DB with id 191958e3-2577-4a8a-a1ff-b8f048056b72 > 2020-04-01 14:40:41.869 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'SUCCESS' from state 'RUNNING' with result '191958e3-2577-4a8a-a1ff-b8f048056b72' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:183 > 2020-04-01 14:40:41.874 164881 DEBUG
octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. > 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. > 2020-04-01 14:40:41.946 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'FAILURE' from state 'RUNNING' > 13 predecessors (most recent first): > Atom 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': u'191958e3-2577-4a8a-a1ff-b8f048056b72'} > |__Flow 'BACKUP-octavia-create-amp-for-lb-subflow' > |__Atom 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'flavor': {u'loadbalancer_topology': u'ACTIVE_STANDBY'}, 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} > |__Flow 'BACKUP-octavia-get-amphora-for-lb-subflow' > |__Flow 'BACKUP-octavia-plug-net-subflow' > |__Flow 'octavia-create-loadbalancer-flow' > |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } > |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': None} > |__Atom 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 
'requires': {'vip': , 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } > |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } > |__Atom 'reload-lb-before-allocate-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } > |__Atom 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} > |__Flow 'octavia-create-loadbalancer-flow': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) > 2020-04-01 14:40:41.946 164881 ERROR 
octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker .format(CONF.certificates.ca_certificate) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker > 2020-04-01 14:40:41.969 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.972 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:41.975 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.975 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting 
create amphora in DB for amp id 191958e3-2577-4a8a-a1ff-b8f048056b72 > 2020-04-01 14:40:41.992 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:41.995 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.996 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the load balancer d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. > 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. > 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. > 2020-04-01 14:40:42.005 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.006 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'FAILURE' from state 'RUNNING': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
> 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker .format(CONF.certificates.ca_certificate) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
> 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker > 2020-04-01 14:40:42.013 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.014 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.017 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.018 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting create amphora in DB for amp id 1ecbc19a-2644-4f3a-a9fc-bf6ace1655e3 > 2020-04-01 14:40:42.034 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.038 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.038 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the 
load balancer d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. > 2020-04-01 14:40:42.047 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.052 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.054 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.059 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.062 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.066 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.068 164881 
WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.073 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.073 164881 WARNING octavia.controller.worker.v1.tasks.network_tasks [-] Deallocating vip 172.20.250.184 > 2020-04-01 14:40:42.199 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security group b2430a12-2c07-4ca9-a381-3af79f702715 from port a52f2cfa-765b-4664-b4ad-c2a11dd870de > 2020-04-01 14:40:43.189 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security group b2430a12-2c07-4ca9-a381-3af79f702715 > 2020-04-01 14:40:43.994 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:43.999 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:44.002 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:44.007 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 
'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:44.017 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:44.028 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (aab75b85-a8f1-486f-99e8-5c81e21aa3f3) transitioned into state 'REVERTED' from state 'RUNNING' > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server [-] Exception during message handling: WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
> 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server Traceback (most recent call last): > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 274, in dispatch > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/queue/v1/endpoints.py", line 45, in create_load_balancer > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server self.worker.create_load_balancer(load_balancer_id, flavor) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 292, in wrapped_f > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.call(f, *args, **kw) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 358, in call > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server do = self.iter(retry_state=retry_state) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 319, in iter > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return fut.result() > 2020-04-01 14:40:44.029 164881 ERROR 
oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 422, in result > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.__get_result() > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 361, in call > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = fn(*args, **kwargs) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/controller_worker.py", line 344, in create_load_balancer > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server create_lb_tf.run() > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 247, in run > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server for _state in self.run_iter(timeout=timeout): > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server failure.Failure.reraise_if_any(er_failures) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 341, in reraise_if_any > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server raise exc.WrappedFailure(failures) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
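For anyone hitting the same CertificateGenerationException quoted above, a quick way to rule out the usual file-level causes (missing file, bad permissions, or a file that is not a parseable PEM certificate) before digging into octavia.conf. This is a hedged sketch, not part of Octavia: `check_ca_cert` is our own helper name; only the `openssl x509` invocation and the /etc/octavia/certs path come from the thread.

```shell
# Hedged troubleshooting sketch (not part of Octavia): rule out the usual
# file-level causes of "Failed to load CA Certificate". check_ca_cert is
# our helper name; the path in the example is the one from the error log.
check_ca_cert() {
    local cert="$1"
    if [ ! -f "$cert" ]; then
        echo "missing: $cert"; return 1
    fi
    if [ ! -r "$cert" ]; then
        echo "unreadable: $cert"; return 1
    fi
    # openssl exits non-zero if the file is not a parseable PEM certificate
    if ! openssl x509 -in "$cert" -noout 2>/dev/null; then
        echo "not a valid PEM certificate: $cert"; return 1
    fi
    echo "ok: $cert"
}

# Example: check_ca_cert /etc/octavia/certs/server_ca.cert.pem
# The worker usually runs as the octavia service user, so readability is
# worth checking as that user too, e.g.:
#   sudo -u octavia head -c1 /etc/octavia/certs/server_ca.cert.pem
```

If the file itself checks out, the next suspects are the [certificates] ca_certificate / ca_private_key settings (visible as CONF.certificates.ca_certificate in the traceback) and the CA key passphrase in octavia.conf.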
> 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server > > > ________________________________ > hao7.liu at midea.com From cboylan at sapwetik.org Thu Apr 2 17:56:15 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 02 Apr 2020 10:56:15 -0700 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> Message-ID: On Thu, Apr 2, 2020, at 3:38 AM, Dmitry Tantsur wrote: > Snip because I wanted to respond to one specific point made below. > Now, I do agree that there are steps that can be taken before we go all > nuclear. We can definitely work on our own website, we can reduce > reliance on oslo, start releasing independently, and so on. I'm > wondering what will be left of our participation in OpenStack in the > end. Thierry has suggested the role of the TC in ensuring integration. > I'm of the opinion that if all stakeholders in Ironic lose interest in > Ironic as part of OpenStack, no power will prevent the integration from > slowly falling apart. Opinion from someone that has worked on OpenStack for a long time: I don't think using oslo, sticking to a 6 month release cadence, integration with Nova is what defines "OpenStack". The goal has been to build tools for API driven management of data center resources. When looked at in this way some of the other examples mentioned, Zuul and Gnocchi, don't quite fit. But within that goal even if we aren't using the same underlying libraries or releasing in tight synchronization the involved individuals and projects can learn from each other and help each other in significant ways. 
Taking Ironic as the example, I think one of the major ways Ironic can contribute to OpenStack is showing how you can evolve to do things like 1) operate in a standalone fashion to meet user demands 2) remove/refactor/replace existing dependencies like rabbitmq to improve operability and stability 3) rely less on devstack for testing and so on. Whether or not the proposed split happens isn't up to me, but I'm worried we think that using oslo, integrating with nova, and strict adherence to a 6 month release cycle is what defines OpenStack. What will be left is your participation in the community to not only make management of baremetal servers in the datacenter better, but also networking, and virtualization, and storage and so on. Clark From skaplons at redhat.com Thu Apr 2 19:53:42 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 2 Apr 2020 21:53:42 +0200 Subject: [neutron] Drivers meeting - agenda Message-ID: Hi, I thought it may be useful to send you the agenda for the next drivers meeting earlier, so that you have some time to go through the RFEs planned for discussion at our next meeting. For 3.04.2020 I have 3 RFEs to discuss: * https://bugs.launchpad.net/neutron/+bug/1592028 - [RFE] Support security-group-rule creation with address-groups * https://bugs.launchpad.net/neutron/+bug/1869129 - neutron accepts CIDR in security groups that are invalid in OVN - this was reported as an issue in OVN, but it may potentially be a security issue that will require API changes to fix; a patch has already been proposed: https://review.opendev.org/#/c/716806/ * https://bugs.launchpad.net/neutron/+bug/1870319 - [RFE] Network cascade deletion API call See you tomorrow at the drivers meeting.
— Slawek Kaplonski Senior software engineer Red Hat From juliaashleykreger at gmail.com Thu Apr 2 20:15:59 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 2 Apr 2020 13:15:59 -0700 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> Message-ID: On Thu, Apr 2, 2020 at 10:58 AM Clark Boylan wrote: > > On Thu, Apr 2, 2020, at 3:38 AM, Dmitry Tantsur wrote: > > > > Snip because I wanted to respond to one specific point made below. > > > Now, I do agree that there are steps that can be taken before we go all > > nuclear. We can definitely work on our own website, we can reduce > > reliance on oslo, start releasing independently, and so on. I'm > > wondering what will be left of our participation in OpenStack in the > > end. Thierry has suggested the role of the TC in ensuring integration. > > I'm of the opinion that if all stakeholders in Ironic lose interest in > > Ironic as part of OpenStack, no power will prevent the integration from > > slowly falling apart. > > Opinion from someone that has worked on OpenStack for a long time: I don't think using oslo, sticking to a 6 month release cadence, integration with Nova is what defines "OpenStack". The goal has been to build tools for API driven management of data center resources. When looked at in this way some of the other examples mentioned, Zuul and Gnocchi, don't quite fit. But within that goal even if we aren't using the same underlying libraries or releasing in tight synchronization the involved individuals and projects can learn from each other and help each other in significant ways. > You raise many good points, and I would hope that there would be a continuing cross-learning and collaboration. 
I feel like the idea of independence driven by trying to solve two distinct issues (perceptions (of, about, and related to OpenStack as related to Ironic), and human resistance to any different pattern of behavior (releases _AND_ consumption thereof)) has elicited a bit of a nuclear response and interpretation. > Taking Ironic as the example, I think one of the major ways Ironic can contribute to OpenStack is showing how you can evolve to do things like 1) operate in a standalone fashion to meet user demands 2) remove/refactor/replace existing dependencies like rabbitmq to improve operability and stability 3) rely less on devstack for testing and so on. > I think we already have. Although, I'm unsure if the developer community at large places any value on those things. In my experience, consumers of ironic software seem to. Has Ironic failed to really broadcast those things out? Likely, but I'm fairly sure we've mentioned some of these things in multiple forums (including Forums and project update). As a result I'm also unsure of what more we really CAN [effectively] do given the pre-conditions and resistance to change along with the existing team divisions and focuses. Is there a better way that reaches everyone? An additional way? I don't know, but would sure like to know of one. > Whether or not the proposed split happens isn't up to me, but I'm worried we think that using oslo, integrating with nova, and strict adherence to a 6 month release cycle is what defines OpenStack. What will be left is your participation in the community to not only make management of baremetal servers in the datacenter better, but also networking, and virtualization, and storage and so on. > > Clark > I believe it is ultimately up to the active contributors to the project as a whole in terms of splitting, and I guess that should be put to a vote at some point. Last time we polled among the cores about a "year"
ago upon revisiting an ask for us to consider working towards becoming a top level project, it was 50/50. Given everyone's comments, I'm fairly sure there would not be agreement to move forward. Maybe out of this, as a community, we could have a serious discussion of perceptions and headaches, but given our tendency to try and create process and tools for issues that are fundamentally related to humans... I am unsure. -Julia From nate.johnston at redhat.com Thu Apr 2 21:18:34 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Thu, 2 Apr 2020 17:18:34 -0400 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: <20200402211834.mxr7tsdoofpriase@firewall> Back in 2017 when Gnocchi moved out from OpenStack governance [1], in a review Dean Troyer wrote "This actually feels like a graduation, and I have a hunch it will not be the last one we see this year". I feel the same about Ironic moving beyond OpenStack governance. When OpenStack was riding high we created the big tent in order to accommodate new projects that desired entry. Now that OpenStack is later in its lifecycle I think we should be just as accommodating for projects that see a brighter future in separation. When you think of the top open source projects for bootstrapping bare metal like Foreman or Cobbler, a datacenter automation engineer's first thought isn't "oh, isn't that part of some other thing I'm not interested in?" I think Ironic should be free to take its place next to those other tools and be considered in its own right. Since this proposal represents the collective will of the Ironic team, based on the endorsement of their elected team lead, I support the decision. [1] https://review.opendev.org/#/c/447438/ On Wed, Apr 01, 2020 at 07:03:47PM +0200, Dmitry Tantsur wrote: > Hi everyone! > > This topic should not come as a huge surprise for many, since it has been raised numerous times in the past years.
I have a feeling that the end of > Ussuri, now that we’ve re-acquired our PTL and are on the verge of > selecting new TC members, may be a good time to propose it for a formal > discussion. > > TL;DR I’m proposing to make Ironic a top-level project under opendev.org > and the OpenStack Foundation, following the same model as Zuul. I don’t > propose severing current relationships with other OpenStack projects, nor > making substantial changes in how the project is operated. > > (And no, it’s not an April 1st joke) > > Background > ========= > > Ironic was born as a Nova plugin, but has grown way beyond this single case > since then. The first commit in Bifrost dates to February 2015. During > these 5 years (hey, we forgot to celebrate!) it has developed into a > commonly used data center management tool - and still based on standalone > Ironic! The Metal3 project uses standalone Ironic as its hardware > management backend. We haven’t been “just” a component of OpenStack for a > while now, I think it’s time to officially recognize it. > > And before you ask: in no case do I suggest scaling down our invaluable > integration with Nova. We’re observing a solid growth of deployments using > Ironic as an addition to their OpenStack clouds, and this proposal doesn’t > try to devalue this use case. The intention is to accept publicly and > officially that it’s not the only or the main use case, but one of the main > use cases. I don’t think it comes as a surprise to the Nova team. > > Okay, so why? > =========== > > The first and the main reason is the ambiguity in our positioning. We do > see prospective operators and users confused by the perception that Ironic > is a part of OpenStack, especially when it comes to the standalone use > case. “But what if I don’t need OpenStack” is a question that I hear in > most of these conversations. 
Changing from “a part of OpenStack” to “a FOSS > tool that can integrate with OpenStack” is critical for our project to keep > growing into new fields. To me personally it feels in line with how OpenDev > itself is reaching into new areas beyond just the traditional IaaS. The > next OpenDev event will apparently have a bare metal management track, so > why not a top-level project for it? > > Another reason is release cadence. We have repeatedly expressed the desire > to release Ironic and its sub-projects more often than we do now. Granted, > *technically* we can release often even now. We can even abandon the > current release model and switch to “independent”, but it doesn’t entirely > solve the issue at hand. First, we don’t want to lose the notion of stable > branches. One way or another, we need to support consumers with bug fix > releases. Second, to become truly “independent” we’ll need to remove any > tight coupling with any projects that do integrated releases. Which is, > essentially, what I’m proposing here. > > Finally, I believe that our independence (can I call it “Irexit” please?) > has already happened in reality, we just shy away from recognizing it. Look: > 1. All integration points with other OpenStack projects are optional. > 2. We can work fully standalone and even provide a project for that. > 3. Many new features (RAID, BIOS to name a few) are exposed to standalone > users much earlier than to those going through Nova. > 4. We even have our own mini-scheduler (although its intention is not and > has not been to replace the Placement service). > 5. We make releases more often than the “core” OpenStack projects (but see > above). > > What we will do > ============ > > This proposal involves in the short term: > * Creating a new git namespace: opendev.org/ironic > * Creating a new website (name TBD, bare metal puns are welcome). > * If we can have https://docs.opendev.org/ironic/, it may be just fine > though.
> * Keeping the same governance model, only adjusted to the necessary extent. > * Keeping the same policies (reviews, CI, stable). > * Defining a new release cadence and stable branch support schedule. > > In the long term we will consider (not necessarily do): > * Reshaping our CI to rely less on devstack and grenade (only use them for > jobs involving OpenStack). > * Reducing or removing reliance on oslo libraries. > * Stopping using rabbitmq for messaging (we’ve already made it optional). > * Integrating with non-OpenStack services (kubernetes?) and providing > lighter alternatives (think, built-in authentication). > > What we will NOT do > ================ > > At least this proposal does NOT involve: > * Stopping maintaining the Ironic virt driver in Nova. > * Stopping running voting CI jobs with OpenStack services. > * Dropping optional integration with OpenStack services. > * Leaving OpenDev completely. > > What do you think? > =============== > > Please let us know what you think about this proposal. Any hints on how to > proceed with it, in case we reach a consensus, are also welcome. > > Cheers, > Dmitry From anlin.kong at gmail.com Fri Apr 3 00:46:20 2020 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 3 Apr 2020 13:46:20 +1300 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: <20200402211834.mxr7tsdoofpriase@firewall> References: <20200402211834.mxr7tsdoofpriase@firewall> Message-ID: I see we are talking about another "Gnocchi". When Gnocchi moved out of OpenStack, people said they could run Gnocchi in standalone mode without installing the other OpenStack services; then the default dependency of some other projects (Ceilometer, Panko, etc.) was changed to Gnocchi. As a result, they are all dead (or almost dead).
Another example: a long time ago in one OpenStack project there was a demand for secret management. People said Barbican was not mature and not production-ready yet, so we shouldn't depend on Barbican but could make it optional; as a result, Barbican was never adopted in the project in real deployments. I have been involved in the OpenStack community since 2013. I have seen people come and leave, and projects created and die; by now, there are only a few projects alive and actively maintained. IMHO, as a community, we should try our best to integrate projects with each other; no project can live well without the others' help, and projects rarely stand or fall alone. Well, I'm not part of the TC, and I'm not the person or team that can decide how the Ironic project goes in this situation. But as a developer who is trying very hard to maintain several OpenStack projects, that's what I'm thinking. My 0.02. - Best regards, Lingxian Kong Catalyst Cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Fri Apr 3 07:30:38 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 03 Apr 2020 09:30:38 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> Message-ID: <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Hello, On Thu, 2020-04-02 at 12:38 +0200, Dmitry Tantsur wrote: > Hi, > > (snipped) > > > People do get surprised when they hear that Ironic can be used > standalone, yes. "A part of OpenStack" maps to "installed inside > OpenStack" rather than "is developed on the OpenStack platform". That's indeed what we need to change. > > > If you consider OpenStack taints this "standalone" part of Ironic, > > do you think that putting it as a top project of the **OpenStack > > Foundation ** will help? I don't think so.
People will still see > > it's an OpenStack _related_ technology, with a history of being an > > OpenStack project, which is now a standalone project inside the > > OpenStack foundation. At best, it confuses people who are not > > really aware of all these details. > > > > Time to rename the Foundation? :) How is the same problem solved for > Zuul or Kata Containers? The first made me smile :) I would say that Kata and Zuul have a different history. To my eyes, Kata started as completely separate. How Zuul will eventually manage to detach itself from the OpenStack name could be interesting. Please note that I have received the same questions (Do I need OpenStack? Is it a part of OpenStack?) when questioned about Zuul, at some events. (snipped) > As an aside, I don't think gnocchi fell victim to their split, but > rather shared the overall fate of the Telemetry project. I don't disagree. > I also think your suggestion goes against the idea of OpenDev, which > to me is to embrace a vast collection of Open Infrastructure > projects, related to OpenStack or not. If you say that anything going > to OpenDev will be seen as an OpenStack project, it defeats the > purpose of OpenDev. I wrongly worded this then. This is not my intent. OpenDev is a good name/good branding IMO. It feels detached from OpenStack. I can totally see many projects being successful there without appearing to be attached to the OpenStack name. For people searching a little bit, it wouldn't take long to see that OpenStack was behind OpenDev, and therefore people can still attach the name if they want. I think that what matters is to be explicit in the project message. (snipped) > > Can't we work on the branding, naming, and message without the > > move? Why the hassle of moving things? Does that really bring value > > to your team? Before forging my final (personal) opinion, I would > > like more information than just gut feelings.
> > > It's not "just gut feelings", it's the summary of numerous > conversations that Julia and I have to hold when advocating for > Ironic outside of the Nova context. We do this "Ironic does not imply > OpenStack" explanation over and over, often enough unsuccessfully. Let me rephrase this: Do you have feedback from people not active in the project who would be happy to step up/in if Ironic was not an OpenStack project anymore? What could be changed from the OpenStack side to change that mindset? > And then some people just don't like OpenStack... I don't disagree, sadly. I know it's a hard task, but I prefer tackling this. Make OpenStack (more) likeable. To me, that seems a better goal in itself. But that's maybe me :) > Now, I do agree that there are steps that can be taken before we go > all nuclear. We can definitely work on our own website, we can reduce > reliance on oslo, start releasing independently, and so on. I'm > wondering what will be left of our participation in OpenStack in the > end. Thierry has suggested the role of the TC in ensuring > integration. I'm of the opinion that if all stakeholders in Ironic > lose interest in Ironic as part of OpenStack, no power will prevent > the integration from slowly falling apart. I don't see it that way. I see this as an opportunity to make OpenStack more clear, more reachable, more interesting. For me, Ironic, Cinder, Manila (to only name a few), are very good examples of datacenter/IaaS software that could be completely independent in their consumption, and additionally released together. For me, the strength of OpenStack was always in the fact it had multiple small projects that work well together, compared to a single big blob of software which does everything. We just didn't bank enough on the standalone case, IMHO. But I am sure we are aligned there... Wouldn't the next steps be instead to make it easier to consume standalone? Also, how is the reliance on oslo a problem?
Do you want to use another library in the python ecosystem instead? If so, what about phasing out that part of oslo, so we don't have to maintain it? Just curious. > (snipped) I'm referring to a very narrow sense of Nova+company. I.e. > a solution for providing virtual machines booting from virtual > volumes on virtual networks. Ironic does not clearly fit there, nor > does, say, Zuul. Got it. That's not my understanding of what OpenStack is, but I concede that I might have a different view than most. > > (snipped) Please note that I am still writing an idea in our ideas > > framework, proposing a change in the release cycles (that > > conversation again! but with a little twist), which I guess you > > might be interested in. > > > > Please let me know when it's ready, I really am. Will do! Regards, JP From jean-philippe at evrard.me Fri Apr 3 07:32:36 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 03 Apr 2020 09:32:36 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> Message-ID: <95285e29fedea8f2800daf510c22c92a523dbd62.camel@evrard.me> On Thu, 2020-04-02 at 10:56 -0700, Clark Boylan wrote: > Taking Ironic as the example, I think one of the major ways Ironic > can contribute to OpenStack is showing how you can evolve to do > things like 1) operate in a standalone fashion to meet user demands > 2) remove/refactor/replace existing dependencies like rabbitmq to > improve operability and stability 3) rely less on devstack for > testing and so on. Just FYI, I feel the same. I wouldn't stop at those 3 items, as I think project identity and branding will need a lot of work. I just think this work is to be done, regardless of where it's hosted.
Regards, JP From jean-philippe at evrard.me Fri Apr 3 07:43:04 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 03 Apr 2020 09:43:04 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> Message-ID: <5d808e6503e222c6c421b75f425f7e051f9f5d3b.camel@evrard.me> On Thu, 2020-04-02 at 13:15 -0700, Julia Kreger wrote: > > I believe it is ultimately up to the active contributors to the > project as a whole in terms of splitting, and I guess that should be > put to a vote at some point. Last time we polled among the cores > about > a "year" ago upon revisiting an ask for us to consider working > towards > becoming a top level project, it was 50/50. Given everyone's > comments, > I'm fairly sure there would not be agreement to move forward. I wholeheartedly agree with the fact it's up to the active contributors. I just hope that if/when the move is done, it's a positive effort, and everyone had the required elements to decide. I can't tell for the other answers in this thread, but for me, I want to ask the difficult questions to make sure we have done everything that's right :) Let me rephrase that: If the Ironic community wants to split, let it be! I am not against the split by itself. I won't ask to stay if everybody wants to leave the boat. My intent is not to be a warden, because OpenStack isn't a prison :) I just love playing Devil's advocate. In this case, I am particularly interested, because I just genuinely care, and I am trying to think at the ecosystem level, not only at a project level. I try to understand how I can help Ironic grow, while helping the other projects too. I hope I am not the only one responding with that kind of view. It seems Ironic has a few ideas on how to move its standalone identity forward that don't seem to require splitting out of OpenStack...
I am just questioning whether the split needs to happen now, or could be phased in later. > Maybe out of this, as a community, we could have a serious discussion > of perceptions and headaches, but given our tendency to try and > create > process and tools for issues that are fundamentally related to > humans... I am unsure. That's totally fair. I think we need to simplify OpenStack further. Be the bazaar, not the Cathedral. Regards, JP From thierry at openstack.org Fri Apr 3 08:58:25 2020 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 3 Apr 2020 10:58:25 +0200 Subject: [OpenStack-I18n] [I18n][PTL][election] Nominations were over - no PTL candidacy In-Reply-To: References: <6175f5cf-94e6-045d-6291-1effd140d6d8@gmail.com> Message-ID: Akihiro Motoki wrote: > [...] > I wonder what will change if we move to SIG. > That's a big question which hits me. We need to clarify what the > scope of the i18n team (or SIG) is. The main difference between a SIG and a Project Team is that SIGs have fewer constraints. Project Teams are typically used to produce a part of the OpenStack release, and so we require some level of accountability (know who is empowered to sign off releases, know how to contact for embargoed security issues). That is why we currently require that a PTL is determined every 6 months. SIGs on the other hand are just a group of people sharing a common interest. There might be group leads, but no real need for a final call to be made. It's just a way to pool resources toward a common goal. Historically we've considered translations as a "part" of the OpenStack release, and so I18n is currently a project team. That said, I18n is arguably a special interest: it does not really need PTLs to be designated, and the release is OK even if some translations are not complete. It's 'best effort' work, so it does not require the heavy type of accountability that we require from project teams.
So in summary: making I18n a SIG would remove the need to designate a PTL every 6 months, and just continue work as usual. -- Thierry Carrez (ttx) From pshchelokovskyy at mirantis.com Fri Apr 3 09:41:46 2020 From: pshchelokovskyy at mirantis.com (Pavlo Shchelokovskyy) Date: Fri, 3 Apr 2020 12:41:46 +0300 Subject: [goals][Drop Python 2.7 Support] Week R-6 Update In-Reply-To: <17138caea16.1062fb0e367449.7456167229285935355@ghanshyammann.com> References: <17138caea16.1062fb0e367449.7456167229285935355@ghanshyammann.com> Message-ID: On Thu, Apr 2, 2020 at 5:53 AM Ghanshyam Mann wrote: > Hello Everyone, > > Below is the progress on "Drop Python 2.7 Support" at the end of week R-6. > > Schedule: > https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule > > > Highlights: > ======== > * This is very close to being marked complete. > * We merged many of the charm repo patches and a few more projects. > * A few ansible repos are failing on centos7 and waiting for the migration to centos8. > * I tried to reach out to teams about pending patches. > > Project wise status and need reviews: > ============================ > Phase-1 status: > All the OpenStack services have dropped python2.7. > > Phase-2 status: > > * Pending Tempest plugins: > ** cyborg-tempest-plugin: https://review.opendev.org/#/c/704076/ > ** kuryr-tempest-plugin: https://review.opendev.org/#/c/704072/ > > * Pending pythonclient: > ** python-barbicanclient: https://review.opendev.org/#/c/699096/2 > *** the gate is already broken; waiting for it to be fixed.
> ** python-zaqarclient: https://review.opendev.org/#/c/692011/4 > ** python-tripleoclient: https://review.opendev.org/#/c/703344 > > * A few more repo patches need to merge: > ** masakari-specs: https://review.opendev.org/#/c/698982/ > ** cyborg-specs: https://review.opendev.org/#/c/698824/ > ** nova-powervm: https://review.opendev.org/#/c/700683/ > ** paunch: https://review.opendev.org/#/c/703344/ > > * Started pushing the required updates on deployment projects. > > ** Completed or no updates required: > *** Openstack-Chef - not required > *** Packaging-Rpm - Done > *** Puppet Openstack - Done > *** Tripleo - except python client, all is done. > > ** In progress: > *** Openstack Charms - Most of them merged, a few failing on the func job. > debugging. > *** Openstackansible - In-progress. centos7 jobs are failing on a few > projects. > > ** Waiting from projects team to know the status: > *** Openstack-Helm (Helm charts for OpenStack services) > Disclaimer: I am not a part of the openstack-helm team/community in the full sense of it, so these are my personal thoughts on OpenStack-Helm vs Py3. OpenStack-Helm is a bit odd here: - it does not have stable branches - it has a single (very old) tag, 0.1.0, from 3 years ago - it still supports (as in deploys on CI) Ocata Given that the community goal of "Python3 first" was targeted for and completed in Stein, I don't think that openstack-helm should drop support of Python2 in their startup/readiness/liveness and other scripts unless they are willing to drop support of OpenStack releases earlier than Stein and demand images built on Py3 exclusively - and AFAICT they don't. Best regards, Pavlo.
Push the patches if I missed any repo. > > -gmann > > From mjozefcz at redhat.com Fri Apr 3 10:07:53 2020 From: mjozefcz at redhat.com (Maciej Jozefczyk) Date: Fri, 3 Apr 2020 12:07:53 +0200 Subject: [packaging] New package: ovn-octavia-provider Message-ID: Hello, During this cycle the Neutron team merged the networking-ovn code into Neutron [1]. As a consequence, we needed to create a new project called "OVN Octavia provider", which is a provider driver for Octavia. Before that, the driver was placed in the networking-ovn tree. Neutron cannot rely on octavia-lib, which is why this decision was made. The code move to Neutron took some time, and we released the OVN Octavia provider driver yesterday, under version 0.1.0 [2]. If you're working on distro packaging, please be aware that the driver is now delivered by a new package. The RPM spec in RDO is ongoing: [3]. We are sorry for this late announcement. [1] https://blueprints.launchpad.net/neutron/+spec/neutron-ovn-merge [2] https://review.opendev.org/#/c/710200/ [3] https://review.rdoproject.org/r/#/c/25972/ -- Best regards, Maciej Józefczyk From frode.nordahl at canonical.com Fri Apr 3 11:48:20 2020 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Fri, 3 Apr 2020 13:48:20 +0200 Subject: [packaging] New package: ovn-octavia-provider In-Reply-To: References: Message-ID: Hello Maciej, Thank you for the heads up about the new project. Just wanted to share that in Ubuntu we have been tracking the merge of ``networking-ovn`` into Neutron closely this cycle. We decided to prepare packaging for the new ``ovn-octavia-provider`` source repository back in February [0], and if all pans out as planned we will ship it with the upcoming Ubuntu Focal Fossa (LTS) release as well as making it available in the Bionic/Ussuri Ubuntu Cloud Archive.
0: https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/ovn-octavia-provider -- Frode Nordahl On Fri, Apr 3, 2020 at 12:11 PM Maciej Jozefczyk wrote: > > Hello, > > During this cycle Neutron team merged networking-ovn code to Neutron [1]. As a consequence of it we needed to create a new project called: "OVN Octavia provider", which is a provider driver for Octavia. Before it the driver was placed in networking-ovn tree. > Neutron cannot rely on Octavia-lib, so that is why this decision was made. > > The code move process to Neutron took some time and we released the OVN Octavia provider driver yesterday - under version 0.1.0. [2]. > > If you're working on distro packaging - please be aware that the driver is now delivered by a new package. RPM spec in RDO is ongoing: [3]. > > > We are sorry for this late announcement. > > > [1] https://blueprints.launchpad.net/neutron/+spec/neutron-ovn-merge > [2] https://review.opendev.org/#/c/710200/ > [3] https://review.rdoproject.org/r/#/c/25972/ > > -- > Best regards, > Maciej Józefczyk From tpb at dyncloud.net Fri Apr 3 11:59:26 2020 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 3 Apr 2020 07:59:26 -0400 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: <20200403115926.izy4xn3cq4xwwc2j@barron.net> On 03/04/20 09:30 +0200, Jean-Philippe Evrard wrote: >Hello, > >On Thu, 2020-04-02 at 12:38 +0200, Dmitry Tantsur wrote: > (snipped) I'm referring to a very narrow sense of Nova+company. I.e. >> a solution for providing virtual machines booting from virtual >> volumes on virtual networks. Ironic does not clearly fit there, nor >> does, say, Zuul. > >Got it. That's not my understanding of what OpenStack is, but I concede >that I might have a different view than most. 
+1000 and I hope that we're all moving beyond this narrow understanding of what OpenStack is. I think of myself as working on OpenStack because I work on self-service storage cloud infrastructure with hard multi-tenant separation, in a community committed to a design and development process grounded in The Four Opens [1]. None of that requires that the consumers of this self-service storage infrastructure be virtual machines booting from virtual volumes on virtual networks. Sure, Nova VMs consume Manila shares. But so do bare metal machines and container workloads (via CSI plugins) themselves running on VMs and running on bare metal, where the hosts themselves may or may not be part of an OpenStack cloud. It would be interesting to see the question at hand about Ironic framed in the context of the recent OpenStack Technical Vision [2]. Is there an Ironic Vision that does not really align with it? -- Tom Barron [1] https://governance.openstack.org/tc/reference/opens.html [2] https://governance.openstack.org/tc/reference/technical-vision.html From zigo at debian.org Fri Apr 3 13:45:19 2020 From: zigo at debian.org (Thomas Goirand) Date: Fri, 3 Apr 2020 15:45:19 +0200 Subject: =?UTF-8?B?UmU6IOWbnuWkjTog44CQb2N0YXZpYeOAkUZhaWxlZCB0byBsb2FkIENB?= =?UTF-8?Q?_Certificate_/etc/octavia/certs/server=5fca=2ecert=2epem?= In-Reply-To: <2020040114513476847214@midea.com> References: <2020040114483393995311@midea.com> <2020040114513476847214@midea.com> Message-ID: On 4/1/20 8:51 AM, hao7.liu at midea.com wrote: > OS version:CentOS7.6, ubuntu1804 > openstack version:Train > when i create an amphora image, always may errors, such as: > > ./diskimage-create.sh -i ubuntu -d bionic -r 123456 -s 5 -o > amphora-x64-haproxy-ubuntu-1804-0401 Hi, I found it particularly difficult to create the certs, so I created a script to do it all: https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/train/utils/usr/bin/oci-octavia-certs Maybe you should give it a
try? I also found that the image provided by upstream has many problems, the biggest of them being that, when there is a lot of traffic on your load balancer, haproxy logs a lot in /var/log/haproxy.log, and the default of the haproxy package is to keep 52 weeks of logs, rotated weekly. That's nearly one year of logs. If there's a lot of traffic on your load balancer, it will quickly fill up the small HDD of the amphora, especially if you leave the default of 2GB (I strongly recommend 4GB instead of 2). I also found it very problematic that most of the files made with diskimage-builder end up not being part of any package. They are just there, floating around, with no package owning them. In the Debian package, I made sure that as much as possible, everything is held by the octavia-agent package. As a result, the setup script becomes super minimalist. Using my own tool (openstack-debian-images, used to create the official Debian OpenStack images), I made a very simple script to build Octavia images. This isn't using diskimage-builder. Upstream isn't happy about it, because they can't have their hands on it, and I'd have to rebase my change whenever they do one. But ... there's no way I'm going to keep such a dirty setup as they propose. So many things just belong in packaging, and not in such an image script. I also find that the DIB elements are kind of over-engineered. Getting the needed files into the package was not easy. The resulting build script can be found here: https://salsa.debian.org/openstack-team/debian/openstack-debian-images/-/tree/debian/train/contrib%2Foctavia The amphora-build script is what should be launched; the other script is where it all resides: the tweak of /etc/logrotate.d/haproxy and the tweak of logrotate.timer (so that logrotate starts every hour, not just every day).
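For illustration, the two tweaks described above could look roughly like the sketch below. This is not the actual Debian packaging (that lives in the linked repository); the specific logrotate options are assumptions, and the files are staged under a local directory so the sketch runs without root. In a real image they would land under /etc.

```shell
# Sketch of the two tweaks described above, staged under ./amphora-overlay
# so it is runnable without root; in a real image these land under /etc.
mkdir -p amphora-overlay/etc/logrotate.d
mkdir -p amphora-overlay/etc/systemd/system/logrotate.timer.d

# Keep a handful of compressed rotations instead of 52, rotating hourly.
# Note: "hourly" only takes effect if logrotate itself runs hourly, which
# is exactly why the timer override below is needed.
cat > amphora-overlay/etc/logrotate.d/haproxy <<'EOF'
/var/log/haproxy.log {
    hourly
    rotate 4
    missingok
    compress
}
EOF

# systemd drop-in overriding logrotate.timer to fire hourly, not daily.
# The empty OnCalendar= first clears the unit's original schedule.
cat > amphora-overlay/etc/systemd/system/logrotate.timer.d/hourly.conf <<'EOF'
[Timer]
OnCalendar=
OnCalendar=hourly
EOF
```

After copying these into the image, `systemctl daemon-reload` would pick up the drop-in.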
If you want the built image, I have just pushed a copy for you here: http://shade.infomaniak.ch/octavia-amphora/ Note that I'm working toward having all of this shipped as an official Debian image, generated automatically with the other images. I hope this helps, Cheers, Thomas Goirand From sean.mcginnis at gmx.com Fri Apr 3 14:52:33 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 3 Apr 2020 09:52:33 -0500 Subject: [release] Release countdown for week R-5, April 6 - 10 Message-ID: <20200403145233.GA411080@sm-workstation> Development Focus ----------------- We are getting close to the end of the Ussuri cycle! Next week on April 9 is the ussuri-3 milestone, also known as feature freeze. It's time to wrap up feature work in the services and their client libraries, and defer features that won't make it to the Ussuri cycle. General Information ------------------- This coming week is the deadline for client libraries: their last feature release needs to happen before "Client library freeze" on April 9. Only bugfix releases will be allowed beyond this point. When requesting those library releases, you can also include the stable/ussuri branching request with the review (as an example, see the "branches" section here: https://opendev.org/openstack/releases/src/branch/master/deliverables/pike/os-brick.yaml#n2) April 9 is also the deadline for feature work in all OpenStack deliverables following the cycle-with-rc model. To help those projects produce a first release candidate in time, only bugfixes should be allowed in the master branch beyond this point. Any feature work past that deadline has to be approved by the team PTL. Finally, feature freeze is also the deadline for submitting a first version of your cycle-highlights. Cycle highlights are the raw data that helps shape what is communicated in press releases and other release activity at the end of the cycle, avoiding direct contacts from marketing folks.
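As a rough sketch of the branching request mentioned earlier: a deliverable file in openstack/releases that also asks for the stable/ussuri branch looks something like the following. The project name, team, version, and hash below are made up; see the linked os-brick.yaml for a real example.

```shell
# Hypothetical deliverable file for openstack/releases; the project name,
# version, and hash are made up -- see the linked os-brick.yaml for a real one.
cat > my-library.yaml <<'EOF'
---
launchpad: my-library
release-model: cycle-with-intermediary
team: my-team
type: library
repository-settings:
  openstack/my-library: {}
releases:
  - version: 2.1.0
    projects:
      - repo: openstack/my-library
        hash: 0123456789abcdef0123456789abcdef01234567
branches:
  - name: stable/ussuri
    location: 2.1.0
EOF
```

The "branches" section at the bottom is what turns a plain release request into a combined release-and-branch request.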
See https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights for more details. Upcoming Deadlines & Dates -------------------------- Ussuri-3 milestone (feature freeze): April 9 (R-5 week) Cycle Highlights Due: April 9 (R-5 week) RC1 deadline: April 23 (R-3 week) Final RC deadline: May 7 (R-1 week) Final Ussuri release: May 13 From hao7.liu at midea.com Fri Apr 3 00:40:57 2020 From: hao7.liu at midea.com (hao7.liu at midea.com) Date: Fri, 3 Apr 2020 08:40:57 +0800 Subject: =?UTF-8?B?UmU6IFJlOiDjgJBvY3Rhdmlh44CRRmFpbGVkIHRvIGxvYWQgQ0EgQ2VydGlmaWNhdGUgL2V0Yy9vY3RhdmlhL2NlcnRzL3NlcnZlcl9jYS5jZXJ0LnBlbQ==?= References: <2020040114483393995311@midea.com>, Message-ID: <2020040308405732776115@midea.com> Yeah, this error has been resolved by following the guide. Thank you. hao7.liu at midea.com From: Michael Johnson Sent: 2020-04-03 01:39 To: hao7.liu at midea.com Cc: openstack-discuss at lists.openstack.org Subject: Re: 【octavia】Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem Hi, The certificate error you are reporting is a configuration error. Please see the Octavia Certificate Configuration Guide, https://docs.openstack.org/octavia/latest/admin/guides/certificates.html for information on how to set up and configure your control plane certificates. If you walk step by step through the guide, you should find the configuration issue that you are facing.
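This is not the guide's exact procedure, but a quick way to reproduce and sanity-check the failing step locally is to create a throwaway CA (the CN below is made up) and then confirm the PEM actually parses, which is precisely what the worker could not do with /etc/octavia/certs/server_ca.cert.pem:

```shell
# A runnable sanity check, not the official guide's procedure: create a
# throwaway self-signed CA (the CN is made up), then verify the PEM loads.
openssl req -x509 -newkey rsa:2048 -nodes -days 7 \
    -subj "/CN=throwaway-server-ca" \
    -keyout server_ca.key.pem -out server_ca.cert.pem

# If this prints a subject line, the file is a loadable PEM certificate; an
# "unable to load certificate" error here means the PEM is truncated,
# DER-encoded, or simply the wrong file.
openssl x509 -in server_ca.cert.pem -noout -subject
```

Running the second command against the real /etc/octavia/certs/server_ca.cert.pem (as the octavia user, to also catch permission problems) should show whether the file itself is the issue.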
Michael On Wed, Apr 1, 2020 at 4:00 PM hao7.liu at midea.com wrote: > > OS version:CentOS7.6 > openstack version:Train > when i deployed my openstack with octavia,and create a lb,the worker report error logs: > > 2020-04-01 14:40:41.842 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.865 164881 INFO octavia.controller.worker.v1.tasks.database_tasks [-] Created Amphora in DB with id 191958e3-2577-4a8a-a1ff-b8f048056b72 > 2020-04-01 14:40:41.869 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'SUCCESS' from state 'RUNNING' with result '191958e3-2577-4a8a-a1ff-b8f048056b72' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:183 > 2020-04-01 14:40:41.874 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. > 2020-04-01 14:40:41.927 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. 
> 2020-04-01 14:40:41.946 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'FAILURE' from state 'RUNNING' > 13 predecessors (most recent first): > Atom 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {}, 'provides': u'191958e3-2577-4a8a-a1ff-b8f048056b72'} > |__Flow 'BACKUP-octavia-create-amp-for-lb-subflow' > |__Atom 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'flavor': {u'loadbalancer_topology': u'ACTIVE_STANDBY'}, 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} > |__Flow 'BACKUP-octavia-get-amphora-for-lb-subflow' > |__Flow 'BACKUP-octavia-plug-net-subflow' > |__Flow 'octavia-create-loadbalancer-flow' > |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } > |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': None} > |__Atom 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'vip': , 'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } > |__Atom 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': }, 'provides': } > |__Atom 'reload-lb-before-allocate-vip' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': } > |__Atom 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' {'intention': 
'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer_id': u'd7ca9fb7-eda3-4a17-a615-c6d7f31d32d8'}, 'provides': None} > |__Flow 'octavia-create-loadbalancer-flow': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert > 2020-04-01 14:40:41.946 164881 ERROR 
octavia.controller.worker.v1.controller_worker .format(CONF.certificates.ca_certificate) > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. > 2020-04-01 14:40:41.946 164881 ERROR octavia.controller.worker.v1.controller_worker > 2020-04-01 14:40:41.969 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.972 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7f312151-6f92-4ae7-9826-0fccc315ba43) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:41.975 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.975 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting create amphora in DB for amp id 191958e3-2577-4a8a-a1ff-b8f048056b72 > 2020-04-01 14:40:41.992 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (667607d7-6357-4bac-a498-725c370a2b34) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:41.995 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 
'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:41.996 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the load balancer d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. > 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally. > 2020-04-01 14:40:42.003 164881 INFO octavia.certificates.generator.local [-] Using CA Certificate from config. > 2020-04-01 14:40:42.005 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'BACKUP-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (97f157c5-8b35-476d-a3d9-586087ecf235) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.006 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'FAILURE' from state 'RUNNING': CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
> 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker Traceback (most recent call last): > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker result = task.execute(**arguments) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/tasks/cert_task.py", line 47, in execute > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker validity=CONF.certificates.cert_validity_time) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cert = cls.sign_cert(csr, validity, **kwargs) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 91, in sign_cert > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker cls._validate_cert(ca_cert, ca_key, ca_key_pass) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker File "/usr/lib/python2.7/site-packages/octavia/certificates/generator/local.py", line 53, in _validate_cert > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker .format(CONF.certificates.ca_certificate) > 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem. 
> 2020-04-01 14:40:42.006 164881 ERROR octavia.controller.worker.v1.controller_worker > 2020-04-01 14:40:42.013 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTING' from state 'FAILURE' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.014 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-generate-serverpem' (7abe1523-7802-48ad-a7c1-1d2f8f32f706) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.017 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.018 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting create amphora in DB for amp id 1ecbc19a-2644-4f3a-a9fc-bf6ace1655e3 > 2020-04-01 14:40:42.034 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-create-amp-for-lb-subflow-octavia-create-amphora-indb' (145e3ecd-816e-415e-90a4-b7b09ca09c60) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.038 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.038 164881 WARNING octavia.controller.worker.v1.tasks.database_tasks [-] Reverting Amphora allocation for the 
load balancer d7ca9fb7-eda3-4a17-a615-c6d7f31d32d8 in the database. > 2020-04-01 14:40:42.047 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'MASTER-octavia-get-amphora-for-lb-subflow-octavia-mapload-balancer-to-amphora' (a17713f7-52df-4d3b-8cd2-5e592ce29a6a) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.052 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.054 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.GetSubnetFromVIP' (b6e38bf6-57d3-4b99-8226-486e16606d72) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.059 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.062 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIPSecurityGroup' (47efda4a-4ab4-4618-ae0d-f0d145ca75b0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.066 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.068 164881 
WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.database_tasks.UpdateVIPAfterAllocation' (e24fb53e-195e-401d-b300-a798503d1f97) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:42.073 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:42.073 164881 WARNING octavia.controller.worker.v1.tasks.network_tasks [-] Deallocating vip 172.20.250.184 > 2020-04-01 14:40:42.199 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Removing security group b2430a12-2c07-4ca9-a381-3af79f702715 from port a52f2cfa-765b-4664-b4ad-c2a11dd870de > 2020-04-01 14:40:43.189 164881 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Deleted security group b2430a12-2c07-4ca9-a381-3af79f702715 > 2020-04-01 14:40:43.994 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.network_tasks.AllocateVIP' (11bbd801-d889-4499-ab7d-768d81153939) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:43.999 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:44.002 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'reload-lb-before-allocate-vip' (01c2a7f3-9114-41f3-a2c0-42601b2b48f0) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:44.007 164881 DEBUG octavia.controller.worker.v1.controller_worker [-] Task 
'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTING' from state 'SUCCESS' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:194 > 2020-04-01 14:40:44.017 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Task 'octavia.controller.worker.v1.tasks.lifecycle_tasks.LoadBalancerIDToErrorOnRevertTask' (2339e5d5-e545-4f1d-9147-4f5a7b2f9ce9) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None' > 2020-04-01 14:40:44.028 164881 WARNING octavia.controller.worker.v1.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (aab75b85-a8f1-486f-99e8-5c81e21aa3f3) transitioned into state 'REVERTED' from state 'RUNNING' > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server [-] Exception during message handling: WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
> 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server Traceback (most recent call last): > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 274, in dispatch > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/queue/v1/endpoints.py", line 45, in create_load_balancer > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server self.worker.create_load_balancer(load_balancer_id, flavor) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 292, in wrapped_f > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.call(f, *args, **kw) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 358, in call > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server do = self.iter(retry_state=retry_state) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 319, in iter > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return fut.result() > 2020-04-01 14:40:44.029 164881 ERROR 
oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 422, in result > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server return self.__get_result() > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 361, in call > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server result = fn(*args, **kwargs) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/octavia/controller/worker/v1/controller_worker.py", line 344, in create_load_balancer > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server create_lb_tf.run() > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 247, in run > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server for _state in self.run_iter(timeout=timeout): > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server failure.Failure.reraise_if_any(er_failures) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 341, in reraise_if_any > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server raise exc.WrappedFailure(failures) > 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server WrappedFailure: WrappedFailure: [Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem., Failure: octavia.common.exceptions.CertificateGenerationException: Could not sign the certificate request: Failed to load CA Certificate /etc/octavia/certs/server_ca.cert.pem.] 
> 2020-04-01 14:40:44.029 164881 ERROR oslo_messaging.rpc.server > > > ________________________________ > hao7.liu at midea.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Fri Apr 3 16:11:35 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Fri, 3 Apr 2020 12:11:35 -0400 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: Hi JP, Thanks for your great question. I admit that I struggled to answer it. Something that just came to my mind -- just right now, while trying to figure out an answer -- is that it would be really cool if we had some manner of documenting (repository tag?) which OpenStack services are able to run standalone or be reused outside of OpenStack. This is important information to highlight as standalone/reusable seems to be an important part of OpenStack's future. I could see it leading to a boost in new contributors too. Best, Jeremy On Thu, Apr 2, 2020 at 6:30 AM Jean-Philippe Evrard wrote: > > Hello, > > I read your nominations, and as usual I will ask what do you > _technically_ will do during your mandate, what do you _actively_ want > to change in OpenStack? > > This can be a change in governance, in the projects, in the current > structure... it can be really anything. I am just hoping to see > practical OpenStack-wide changes here. It doesn't need to be a fully > refined idea, but something that can be worked upon. > > Thanks for your time. 
> > Regards, > Jean-Philippe Evrard > > > > From mdulko at redhat.com Fri Apr 3 17:01:34 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Fri, 03 Apr 2020 19:01:34 +0200 Subject: [k8s][zun] Introduce a new feature in Ussuri - CRI integration In-Reply-To: References: <668362769fa2171e2c42d58ecb430968fd658b5f.camel@redhat.com> Message-ID: <48dc3f0315bacb9aab53fda3dc7b8d7023f96df9.camel@redhat.com> On Tue, 2020-03-24 at 09:16 -0400, Hongbin Lu wrote: > > > On Tue, Mar 24, 2020 at 7:28 AM wrote: > > On Mon, 2020-03-23 at 12:17 -0400, Hongbin Lu wrote: > > > > > > > > > On Mon, Mar 23, 2020 at 11:48 AM wrote: > > > > On Sun, 2020-03-22 at 13:28 -0400, Hongbin Lu wrote: > > > > > Hi all, > > > > > > > > > > As we are approaching the end of Ussuri cycle, I would like to take a > > > > > chance to introduce a new feature the Zun team implemented in this > > > > > cycle - CRI integration [1]. > > > > > > > > > > > Under the hook, a capsule is a podsandbox with one or more containers > > > > > in a CRI runtime (i.e. containerd). Compared to Docker, a CRI runtime > > > > > has a better support for the pod concept so we chose it to implement > > > > > capsule. A caveat is that CRI requires a CNI plugin for the > > > > > networking, so we need to implement a CNI plugin for Zun (called zun- > > > > > cni). The role of CNI plugin is similar as kuryr-libnetwork that we > > > > > are using for Docker except it implements a different networking > > > > > model (CNI). I summaries it as below: > > > > > > > > Hi, > > > > > > > > I noticed that Zun's CNI plugin [1] is basically a simplified version > > > > of kuryr-kubernetes code. While it's totally fine you've copied that, I > > > > wonder what modifications had been made to make it suit Zun? Is there a > > > > chance to converge this to make Zun use kuryr-kubernetes directly so > > > > that we won't develop two versions of that code in parallel? > > > > > > Right. 
I also investigated the possibilities of reusing the kuryr- > > > kubernetes codebase as well. Definitely, some code is common between the > > > two projects. If we can move the common code to a library (i.e. > > > kuryr-lib), Zun should be able to directly consume the code. In > > > particular, I am interested in directly consuming the CNI binding code > > > (kuryr_kubernetes/cni/binding/) and the VIF versioned object > > > (kuryr_kubernetes/objects). > > > > > > Most of the kuryr-kubernetes code is coupled with the "list-and- > > > watch" logic against the k8s API. Zun is not able to reuse those parts of the > > > code. However, I do advocate moving all the common code to kuryr-lib > > > so Zun can reuse it whenever it is appropriate. > > > > Uhm, moving more code into kuryr.lib is something the Kuryr team would like > > to avoid. Our tendency is rather to stop depending on it, as kuryr- > > kubernetes, being a CNI plugin, is normally consumed as a container image > > and having any dependencies is a burden there. > > Kuryr-lib is already a dependency for kuryr-kubernetes: > https://github.com/openstack/kuryr-kubernetes/blob/master/requirements.txt > . Do you mean kuryr-kubernetes is going to remove kuryr-lib as a > dependency? And I don't quite get the "container image" > justification. Could you explain more? Hi! Sorry for the late reply, I must have missed that email when reading the list. Our plan was to move the bits from kuryr-lib that we use directly into kuryr-kubernetes. This is basically the segmentation_type_drivers module and some utility functions. kuryr-kubernetes is built as part of the OpenShift (OKD) platform, and OKD's buildsystem doesn't handle dependencies too well. The problem is that if we'd start to depend more on kuryr-lib, we would probably need to fork it too as part of openshift's GitHub (note that we maintain a fork of kuryr-kubernetes at github.com/openshift/kuryr-kubernetes with the bits related to OKD builds).
> > That's why I was asking about modifications to kuryr-daemon code that > > Zun required - to see if we can modify kuryr-daemon to be pluggable > > enough to be consumed by Zun directly. > > In theory, you can refactor the code and make it pluggable. Suppose you are able to do that, I would still suggest to move the whole framework out as a library. That is a prerequisite for Zun (or any other projects) to consume it, right? Right, without a shared library it's not an ideal solution, but library is problematic for sure… Would it be possible for Zun to just run kuryr-daemon directly from a kuryr-kubernetes release? > > > > Thanks, > > > > Michał > > > > > > > > [1] https://github.com/openstack/zun/tree/master/zun/cni > > > > > > > > > +--------------+------------------------+---------------+ > > > > > | Concept | Container | Capsule (Pod) | > > > > > +--------------+------------------------+---------------+ > > > > > | API endpoint | /containers | /capsules | > > > > > | Engine | Docker | CRI runtime | > > > > > | Network | kuryr-libnetwork (CNM) | zun-cni (CNI) | > > > > > +--------------+------------------------+---------------+ > > > > > > > > > > Typically, a CRI runtime works well with Kata Container which > > > > > provides hypervisor-based isolation for neighboring containers in the > > > > > same node. As a result, it is secure to consolidate pods from > > > > > different tenants into a single node which increases the resource > > > > > utilization. For deployment, a typical stack looks like below: > > > > > > > > > > +----------------------------------------------+ > > > > > | k8s control plane | > > > > > +----------------------------------------------+ > > > > > | Virtual Kubelet (OpenStack provider) | > > > > > +----------------------------------------------+ > > > > > | OpenStack control plane (Zun, Neutron, etc.) 
| > > > > > +----------------------------------------------+ > > > > > | OpenStack data plane | > > > > > | (Zun compute agent, Neutron OVS agent, etc.) | > > > > > +----------------------------------------------+ > > > > > | Containerd (with CRI plugin) | > > > > > +----------------------------------------------+ > > > > > | Kata Container | > > > > > +----------------------------------------------+ > > > > > > > > > > In this stack, if a user creates a deployment or pod in k8s, the k8s > > > > > scheduler will schedule the pod to the virtual node registered by > > > > > Virtual Kubelet. Virtual Kubelet will pick up the pod and let the > > > > > configured cloud provider handle it. The cloud provider invokes the > > > > > Zun API to create a capsule. Upon receiving the API request to create > > > > > a capsule, the Zun scheduler will schedule the capsule to a compute node. > > > > > The Zun compute agent in that node will provision the capsule using a > > > > > CRI runtime (containerd in this example). The Zun-CRI runtime > > > > > communication is done via gRPC through a unix socket. The > > > > > CRI runtime will first create the pod in Kata Container (or runc as > > > > > an alternative), which realizes the pod using a lightweight VM. > > > > > Furthermore, the CRI runtime will use a CNI plugin, which is the zun- > > > > > cni binary, to set up the network. The zun-cni binary is a thin > > > > > executable that dispatches the CNI command to a daemon service called > > > > > zun-cni-daemon. The communication is via HTTP on localhost. The zun- > > > > > cni-daemon will look up the Neutron port information from the DB and > > > > > perform the port binding.
Using this feature together with Virtual Kubelet and Kata > > > > > Container, we can offer a "serverless Kubernetes pod" service which is > > > > > comparable to AWS EKS with Fargate. > > > > > > > > > > [1] https://blueprints.launchpad.net/zun/+spec/add-support-cri-runtime > > > > > [2] https://github.com/virtual-kubelet/virtual-kubelet > > > > > [3] https://github.com/virtual-kubelet/openstack-zun > > > > > [4] https://aws.amazon.com/about-aws/whats-new/2019/12/run-serverless-kubernetes-pods-using-amazon-eks-and-aws-fargate/ > > > > > [5] https://aws.amazon.com/blogs/aws/amazon-eks-on-aws-fargate-now-generally-available/ > > > > > > > > > > > > > > > > From kennelson11 at gmail.com Fri Apr 3 20:11:42 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 3 Apr 2020 13:11:42 -0700 Subject: [all][PTL][release] Call for Ussuri Cycle Highlights In-Reply-To: References: Message-ID: Hello Everyone! Wanted to bring this to the top of people's inboxes as we are getting close to the deadline! Next week, April 9th (same as Feature Freeze and m3) is the deadline for cycle highlights. If you add them later than that, they might not make it into the release marketing messaging. Can't wait to see what all the projects have accomplished! -Kendall (diablo_rojo) On Wed, Feb 26, 2020 at 12:51 PM Kendall Nelson wrote: > Hello Everyone! > > It's time to start thinking about calling out 'cycle-highlights' in your > deliverables! > > As PTLs, you probably get many pings towards the end of every release > cycle by various parties (marketing, management, journalists, etc) asking > for highlights of what is new and what significant changes are coming in > the new release. By putting them all in the same place it makes them easy > to reference because they get compiled into a pretty website like this from > the last few releases: Rocky[1], Stein[2], Train[3]. > > We don't need a fully fledged marketing message, just a few highlights > (3-4 ideally), from each project team.
Looking through your release notes > might be a good place to start. > > *The deadline for cycle highlights is the end of the R-5 week [4] on April > 10th.* > > How To Reminder: > ------------------------- > > Simply add them to the deliverables/train/$PROJECT.yaml in the > openstack/releases repo like this: > > cycle-highlights: > - Introduced new service to use unused host to mine bitcoin. > > The formatting options for this tag are the same as what you are probably > used to with Reno release notes. > > Also, you can check on the formatting of the output by either running > locally: > > tox -e docs > > And then checking the resulting doc/build/html/train/highlights.html file > or the output of the build-openstack-sphinx-docs job under html/train/ > highlights.html. > > Can't wait to see what you all have accomplished this release! > > Thanks :) > > -Kendall Nelson (diablo_rojo) > > [1] https://releases.openstack.org/rocky/highlights.html > [2] https://releases.openstack.org/stein/highlights.html > [3] https://releases.openstack.org/train/highlights.html > [4] https://releases.openstack.org/ussuri/schedule.html > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hongbin034 at gmail.com Sat Apr 4 00:29:09 2020 From: hongbin034 at gmail.com (Hongbin Lu) Date: Fri, 3 Apr 2020 20:29:09 -0400 Subject: [k8s][zun] Introduce a new feature in Ussuri - CRI integration In-Reply-To: <48dc3f0315bacb9aab53fda3dc7b8d7023f96df9.camel@redhat.com> References: <668362769fa2171e2c42d58ecb430968fd658b5f.camel@redhat.com> <48dc3f0315bacb9aab53fda3dc7b8d7023f96df9.camel@redhat.com> Message-ID: On Fri, Apr 3, 2020 at 1:01 PM wrote: > On Tue, 2020-03-24 at 09:16 -0400, Hongbin Lu wrote: > > > > > > On Tue, Mar 24, 2020 at 7:28 AM wrote: > > > On Mon, 2020-03-23 at 12:17 -0400, Hongbin Lu wrote: > > > > > > > > > > > > On Mon, Mar 23, 2020 at 11:48 AM wrote: > > > > > On Sun, 2020-03-22 at 13:28 -0400, Hongbin Lu wrote: > > > > > > Hi all, > > > > > > > > > > > > As we are approaching the end of the Ussuri cycle, I would like to > take the > > > > > > chance to introduce a new feature the Zun team implemented in > this > > > > > > cycle - CRI integration [1]. > > > > > > > > > > > > > > > Under the hood, a capsule is a pod sandbox with one or more > containers > > > > > > in a CRI runtime (e.g. containerd). Compared to Docker, a CRI > runtime > > > > > > has better support for the pod concept, so we chose it to > implement > > > > > > capsules. A caveat is that CRI requires a CNI plugin for the > > > > > > networking, so we need to implement a CNI plugin for Zun (called > zun- > > > > > > cni). The role of the CNI plugin is similar to that of kuryr-libnetwork, which > we > > > > > > are using for Docker, except it implements a different networking > > > > > > model (CNI). I summarize it below: > > > > > > > > > > Hi, > > > > > > > > > > I noticed that Zun's CNI plugin [1] is basically a simplified > version > > > > > of kuryr-kubernetes code. While it's totally fine you've copied > that, I > > > > > wonder what modifications have been made to make it suit Zun?
Is > there a > > > > > chance to converge this to make Zun use kuryr-kubernetes directly > so > > > > > that we won't develop two versions of that code in parallel? > > > > > > > > Right. I also investigated the possibilities of reusing the kuryr- > > > > kubernetes codebase as well. Definitely, some code is common between > > > > the two projects. If we can move the common code to a library (i.e. > > > > kuryr-lib), Zun should be able to directly consume the code. In > > > > particular, I am interested in directly consuming the CNI binding code > > > > (kuryr_kubernetes/cni/binding/) and the VIF versioned object > > > > (kuryr_kubernetes/objects). > > > > > > > > Most of the kuryr-kubernetes code is coupled with the "list-and- > > > > watch" logic against the k8s API. Zun is not able to reuse those parts of > > > > the code. However, I do advocate moving all the common code to > kuryr-lib > > > > so Zun can reuse it whenever it is appropriate. > > > > > > Uhm, moving more code into kuryr.lib is something the Kuryr team would like > > > to avoid. Our tendency is rather to stop depending on it, as kuryr- > > > kubernetes, being a CNI plugin, is normally consumed as a container image > > > and having any dependencies is a burden there. > > > > Kuryr-lib is already a dependency for kuryr-kubernetes: > > > https://github.com/openstack/kuryr-kubernetes/blob/master/requirements.txt > > . Do you mean kuryr-kubernetes is going to remove kuryr-lib as a > > dependency? And I don't quite get the "container image" > > justification. Could you explain more? > > Hi! Sorry for the late reply, I must have missed that email when reading > the list. > > Our plan was to move the bits from kuryr-lib that we use directly into > kuryr-kubernetes. This is basically the segmentation_type_drivers > module and some utility functions. > > kuryr-kubernetes is built as part of the OpenShift (OKD) platform, and OKD's > buildsystem doesn't handle dependencies too well.
The problem is that > if we'd start to depend more on kuryr-lib, we would probably need to > fork it too as part of openshift's GitHub (note that we maintain a fork of > kuryr-kubernetes at github.com/openshift/kuryr-kubernetes with the bits > related to OKD builds). > Right. In this case, adding a new library dependency is not ideal from your perspective because it increases the maintenance burden on your downstream. From Zun's perspective, this is not ideal because we are not able to directly reuse the common code (which is primarily the binding module). This is the reason we end up with two copies of some code. > > > > That's why I was asking about modifications to kuryr-daemon code that > > > Zun required - to see if we can modify kuryr-daemon to be pluggable > > > enough to be consumed by Zun directly. > > > > In theory, you can refactor the code and make it pluggable. Suppose you > are able to do that, I would still suggest to move the whole framework out > as a library. That is a prerequisite for Zun (or any other projects) to > consume it, right? > > Right, without a shared library it's not an ideal solution, but library > is problematic for sure… Would it be possible for Zun to just run > kuryr-daemon directly from a kuryr-kubernetes release? > First, I am not sure if it is possible from a technical perspective. Second, to address the problem, an alternative approach is to have kuryr-kubernetes run zun-cni-daemon directly from a Zun release. In theory, we can make Zun pluggable so other projects can extend it. The first approach (Zun depends on Kuryr-kubernetes) is not ideal for Zun because Zun will have a heavy dependency on an external service. The second approach (Kuryr-kubernetes depends on Zun) is not ideal for Kuryr-kubernetes for the same reason. I think there is a middle ground that can balance the benefits of both projects. For example, refactoring the common code into a shared library would be one such option.
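The thin-binary-plus-daemon split discussed in this thread — an executable that simply forwards each CNI invocation over localhost HTTP to a long-lived daemon — can be sketched roughly as follows. This is an illustrative sketch only: the port, endpoint, payload fields, and function names are assumptions made for the example, not Zun's (or Kuryr's) actual code.

```python
import json
import os
import sys
import urllib.request

# Hypothetical daemon endpoint; the real zun-cni-daemon address differs.
DAEMON_URL = "http://127.0.0.1:9036/cni"


def build_cni_request(env, netconf):
    """Collect one CNI invocation (env vars + stdin config) into a payload."""
    return {
        # The container runtime passes the verb (ADD/DEL/CHECK) and the
        # sandbox details to the plugin via CNI_* environment variables.
        "command": env.get("CNI_COMMAND"),
        "container_id": env.get("CNI_CONTAINERID"),
        "netns": env.get("CNI_NETNS"),
        "ifname": env.get("CNI_IFNAME"),
        # The network configuration arrives as JSON on stdin.
        "config": netconf,
    }


def main():
    netconf = json.load(sys.stdin)
    payload = build_cni_request(os.environ, netconf)
    req = urllib.request.Request(
        DAEMON_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # The daemon does the heavy lifting (e.g. port lookup and binding) and
    # returns a CNI result document, which the plugin echoes to stdout.
    with urllib.request.urlopen(req) as resp:
        sys.stdout.write(resp.read().decode())


# When installed as a CNI plugin, the runtime would execute main() here.
```

Keeping the executable this thin means all interesting state (DB and Neutron connections, caches) lives in the long-lived daemon, which is exactly the piece both projects duplicate today and why factoring that daemon framework into a shared library keeps coming up in this thread.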
> > > > > > Thanks, > > > > > Michał > > > > > > > > > > [1] https://github.com/openstack/zun/tree/master/zun/cni > > > > > > > > > > > +--------------+------------------------+---------------+ > > > > > > | Concept | Container | Capsule (Pod) | > > > > > > +--------------+------------------------+---------------+ > > > > > > | API endpoint | /containers | /capsules | > > > > > > | Engine | Docker | CRI runtime | > > > > > > | Network | kuryr-libnetwork (CNM) | zun-cni (CNI) | > > > > > > +--------------+------------------------+---------------+ > > > > > > > > > > > > Typically, a CRI runtime works well with Kata Container which > > > > > > provides hypervisor-based isolation for neighboring containers > in the > > > > > > same node. As a result, it is secure to consolidate pods from > > > > > > different tenants into a single node which increases the resource > > > > > > utilization. For deployment, a typical stack looks like below: > > > > > > > > > > > > +----------------------------------------------+ > > > > > > | k8s control plane | > > > > > > +----------------------------------------------+ > > > > > > | Virtual Kubelet (OpenStack provider) | > > > > > > +----------------------------------------------+ > > > > > > | OpenStack control plane (Zun, Neutron, etc.) | > > > > > > +----------------------------------------------+ > > > > > > | OpenStack data plane | > > > > > > | (Zun compute agent, Neutron OVS agent, etc.) | > > > > > > +----------------------------------------------+ > > > > > > | Containerd (with CRI plugin) | > > > > > > +----------------------------------------------+ > > > > > > | Kata Container | > > > > > > +----------------------------------------------+ > > > > > > > > > > > > In this stack, if a user creates a deployment or pod in k8s, the > k8s > > > > > > scheduler will schedule the pod to the virtual node registered by > > > > > > Virtual Kubelet. 
Virtual Kubelet will pick up the pod and let the > > > > > > configured cloud provider handle it. The cloud provider > invokes > > > > > > the Zun API to create a capsule. Upon receiving the API request to > create > > > > > > a capsule, the Zun scheduler will schedule the capsule to a compute > node. > > > > > > The Zun compute agent in that node will provision the capsule > using a > > > > > > CRI runtime (containerd in this example). The Zun-CRI runtime > > > > > > communication is done via gRPC through a unix socket. > The > > > > > > CRI runtime will first create the pod in Kata Container (or runc > as > > > > > > an alternative), which realizes the pod using a lightweight VM. > > > > > > Furthermore, the CRI runtime will use a CNI plugin, which is the > zun- > > > > > > cni binary, to set up the network. The zun-cni binary is a thin > > > > > > executable that dispatches the CNI command to a daemon service > called > > > > > > zun-cni-daemon. The communication is via HTTP on localhost. The > zun- > > > > > > cni-daemon will look up the Neutron port information from the DB and > > > > > > perform the port binding. > > > > > > > > > > > > In conclusion, starting from Ussuri, Zun adds support for CRI- > > > > > > compatible runtimes. Zun uses a CRI runtime to realize the concept > of > > > > > > pod. Using this feature together with Virtual Kubelet and Kata > > > > > > Container, we can offer a "serverless Kubernetes pod" service > which is > > > > > > comparable to AWS EKS with Fargate.
> > > > > > > > > > > > [1] > https://blueprints.launchpad.net/zun/+spec/add-support-cri-runtime > > > > > > [2] https://github.com/virtual-kubelet/virtual-kubelet > > > > > > [3] https://github.com/virtual-kubelet/openstack-zun > > > > > > [4] > https://aws.amazon.com/about-aws/whats-new/2019/12/run-serverless-kubernetes-pods-using-amazon-eks-and-aws-fargate/ > > > > > > [5] > https://aws.amazon.com/blogs/aws/amazon-eks-on-aws-fargate-now-generally-available/ > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tessa at plum.ovh Sat Apr 4 01:47:04 2020 From: tessa at plum.ovh (Tessa Plum) Date: Sat, 4 Apr 2020 09:47:04 +0800 Subject: choosing object storage Message-ID: <215ab035-fc25-c7ff-dc74-657c0343b3ae@plum.ovh> Greetings, Which object storage system should we choose for integration with an OpenStack environment: Ceph or Swift? Thank you. From sxmatch1986 at gmail.com Sat Apr 4 03:18:55 2020 From: sxmatch1986 at gmail.com (hao wang) Date: Sat, 4 Apr 2020 11:18:55 +0800 Subject: [tc][zaqar]Wanna be the PTL of Zaqar project to help in V Message-ID: Hi, all TC members, I'm Wang Hao; I have been the Zaqar PTL since 2018. So sorry I missed the community vote for projects. But I still really want to be the Zaqar PTL in V to push this interesting project forward. Indeed, contribution to the Zaqar project has declined for a while. But there are still some projects that have dependencies on it, like tripleo, heat, and manila, etc., and AFAIK, some Chinese cloud companies also use Zaqar as a messaging service. If Zaqar is removed from core projects, it will have adverse impacts on those projects and companies. Also, we still have blueprints and a list of work items that should be implemented in the V cycle. Finally, as a contributor to Zaqar since 2016, I really want to help this project going forward. Thank you! -------------- next part -------------- An HTML attachment was scrubbed...
URL: From donny at fortnebula.com Sat Apr 4 13:42:25 2020 From: donny at fortnebula.com (Donny Davis) Date: Sat, 4 Apr 2020 09:42:25 -0400 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: JP, If elected to the TC, I would like to see some focus on efficiency in our overall CI system at the OpenStack level. I have spent the last year working with the Infra team. The majority of my focus has been on improving the CI experience for our developers at my tiny scale, for the things I can change. The workload profiles for our CI jobs differ greatly. Some jobs eat every last drop of memory they get; some don't eat any. Some jobs require significant amounts of I/O and others barely touch the disk. I am sure this topic has been discussed before, but I think it's worth looking at again. We are armed with enough data to make some decisions on how to best use our resources. We need to first analyze the data we have on which jobs run best on which providers. If possible we could prefer that provider for a given workload. While this will not add up to a significant speedup in the beginning, it could give us the power to make the system as a whole run faster. If we can shave just 30 seconds off every job we run by preferring the fastest provider for that job, we could save significant amounts of CPU hours. This results in a better overall experience for our team of developers. This is not a light task, and it would take a lot of work. It's also not something that would happen overnight, or probably even in a single cycle. But it's worth the effort. Everything we do as a community is tested. Even minor improvements to the CI system that does the testing result in big gains at our scale. I would like to note that this isn't just an infra-side issue.
We need everyone to ensure their jobs are being conscious of the resources we do have. These resources aren't easy to come by and our infra team has done nothing short of amazing work to date. I don't have all of the answers to the technical questions, and I am not even sure how the community would feel about this. But I think a focus on the gate benefits everyone. Thanks Donny Davis On Thu, Apr 2, 2020 at 6:30 AM Jean-Philippe Evrard wrote: > Hello, > > I read your nominations, and as usual I will ask what do you > _technically_ will do during your mandate, what do you _actively_ want > to change in OpenStack? > > This can be a change in governance, in the projects, in the current > structure... it can be really anything. I am just hoping to see > practical OpenStack-wide changes here. It doesn't need to be a fully > refined idea, but something that can be worked upon. > > Thanks for your time. > > Regards, > Jean-Philippe Evrard > > > > > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sat Apr 4 17:53:08 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 4 Apr 2020 19:53:08 +0200 Subject: [all][PTL][release] Call for Ussuri Cycle Highlights In-Reply-To: References: Message-ID: Hi Kendall, we (Kolla) added our cycle highlights already not to miss the deadline but we are cycle-trailing and could be looking for a way to amend if we manage to squeeze more features in (note we are still well before feature freeze). Could you advise whether cycle-trailing projects could amend their cycle highlights later? -yoctozepto pt., 3 kwi 2020 o 22:21 Kendall Nelson napisał(a): > > Hello Everyone! > > Wanted to bring this to the top of people's inboxes as we are getting close to the deadline! > > Next week, April 9th (same as Feature Freeze and m3) is the deadline for cycle highlights. 
If you add them later than that, they might not make it into the release marketing messaging. > > Can't wait to see what all the projects have accomplished! > > -Kendall (diablo_rojo) > > On Wed, Feb 26, 2020 at 12:51 PM Kendall Nelson napisał(a): > > > > Hello Everyone! > > > > Wanted to bring this to the top of people's inboxes as we are getting close to the deadline! > > > > Next week, April 9th (same as Feature Freeze and m3) is the deadline for cycle highlights. If you add them later than that, they might not make it into the release marketing messaging. > > > > Can't wait to see what all the projects have accomplished! > > > > -Kendall (diablo_rojo) > > > > On Wed, Feb 26, 2020 at 12:51 PM Kendall Nelson wrote: > >> > >> Hello Everyone! > >> > >> It's time to start thinking about calling out 'cycle-highlights' in your deliverables! > >> > >> As PTLs, you probably get many pings towards the end of every release cycle by various parties (marketing, management, journalists, etc) asking for highlights of what is new and what significant changes are coming in the new release. By putting them all in the same place it makes them easy to reference because they get compiled into a pretty website like this from the last few releases: Rocky[1], Stein[2], Train[3]. >> >> We don't need a fully fledged marketing message, just a few highlights (3-4 ideally), from each project team. Looking through your release notes might be a good place to start. >> >> The deadline for cycle highlights is the end of the R-5 week [4] on April 10th. >> >> How To Reminder: >> ------------------------- >> >> Simply add them to the deliverables/train/$PROJECT.yaml in the openstack/releases repo like this: >> >> cycle-highlights: >> - Introduced new service to use unused host to mine bitcoin. >> >> The formatting options for this tag are the same as what you are probably used to with Reno release notes. >> >> Also, you can check on the formatting of the output by either running locally: >> >> tox -e docs >> >> And then checking the resulting doc/build/html/train/highlights.html file or the output of the build-openstack-sphinx-docs job under html/train/highlights.html. >> >> Can't wait to see what you all have accomplished this release!
>> >> Thanks :) >> >> -Kendall Nelson (diablo_rojo) >> >> [1] https://releases.openstack.org/rocky/highlights.html >> [2] https://releases.openstack.org/stein/highlights.html >> [3] https://releases.openstack.org/train/highlights.html >> [4] https://releases.openstack.org/ussuri/schedule.html >> From gmann at ghanshyammann.com Sun Apr 5 00:24:13 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 04 Apr 2020 19:24:13 -0500 Subject: [tc][zaqar]Wanna be the PTL of Zaqar project to help in V In-Reply-To: References: Message-ID: <17147b93ae3.dc781e05159111.3874772194444416998@ghanshyammann.com> ---- On Fri, 03 Apr 2020 22:18:55 -0500 hao wang wrote ---- > Hi, all TC members > I'm Wang Hao; I have been the Zaqar PTL since 2018. So sorry I missed the community vote for projects. > But I still really want to be the Zaqar PTL in V to push this interesting project forward. > Indeed, contribution to the Zaqar project has declined for a while. > But there are still some projects that have dependencies on it, like > tripleo, heat, and manila, etc., and AFAIK, some Chinese cloud companies also use Zaqar as a messaging service. If Zaqar is removed > from core projects, it will have adverse impacts on those projects and > companies. Also, we still have blueprints and a list of work items that > should be implemented in the V cycle. > > Finally, as a contributor to Zaqar since 2016, I really want to help this project going forward. > Thank you! Thanks, Wang, for continuing to lead it. I have proposed your appointment in governance; after your +1, the TC will start the discussion there. - https://review.opendev.org/#/c/717349/ -gmann From gmann at ghanshyammann.com Sun Apr 5 00:52:42 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 04 Apr 2020 19:52:42 -0500 Subject: [tc][election] campaign discussion: what we can improve in TC & how?
Message-ID: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> We are in the campaigning phase of the TC election, where we start the debate on a few topics. This is one of the topics on which I would like to start the debate. First off, I'd like to thank all the candidates for showing interest in joining, or continuing on, the TC. What do you think we should and must improve in the TC? This can be the TC's involvement in governance processes, or technical help for each project. A few questions are below, but feel free to add your own improvement points. - Do we place too many restrictions on projects and not give them a free hand? If yes, what can we improve, and how? - Is there too little interaction between the TC and projects? I am sure some projects/members do not even know what the TC is for. What is your idea to solve this? -gmann From gmann at ghanshyammann.com Sun Apr 5 01:09:08 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 04 Apr 2020 20:09:08 -0500 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? Message-ID: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> This topic is a very important and critical area for the OpenStack community to solve. I personally feel this and keep raising the issue wherever I get the opportunity. To develop or maintain any software, the very first thing we need is enough developer resources. Without enough developers (whether open or closed source), no software can survive. OpenStack's current situation on contributors is not the same as it was a few years back. Almost every project is facing a contributor shortage relative to its requirements and incoming requests. A few projects are already dead, or soon will be, if we do not solve the contributor shortage now. I know the TC is not directly responsible for solving this issue, but we should do something, or at least find out who can solve it.
What role do you think the TC can play in solving this? What platform or entity can the TC use to raise this issue? Or any new, crazy ideas? -gmann From kennelson11 at gmail.com Sun Apr 5 04:41:55 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Sat, 4 Apr 2020 21:41:55 -0700 Subject: [all][PTL][release] Call for Ussuri Cycle Highlights In-Reply-To: References: Message-ID: Yes, you can definitely add/edit them later! That said, whatever the state is at the deadline is what will be used for marketing for the release. -Kendall (diablo_rojo) On Sat, 4 Apr 2020, 10:53 am Radosław Piliszek, wrote: > Hi Kendall, > > we (Kolla) added our cycle highlights already not to miss the deadline > but we are cycle-trailing and could be looking for a way to amend if > we manage to squeeze more features in (note we are still well before > feature freeze). > > Could you advise whether cycle-trailing projects could amend their > cycle highlights later? > > -yoctozepto > > pt., 3 kwi 2020 o 22:21 Kendall Nelson napisał(a): > > > > Hello Everyone! > > > > Wanted to bring this to the top of people's inboxes as we are getting > close to the deadline! > > > > Next week, April 9th (same as Feature Freeze and m3) is the deadline for > cycle highlights. If you add them later than that, they might not make it > into the release marketing messaging. > > > > Can't wait to see what all the projects have accomplished! > > > > -Kendall (diablo_rojo) > > > > On Wed, Feb 26, 2020 at 12:51 PM Kendall Nelson > wrote: > >> > >> Hello Everyone! > >> > >> It's time to start thinking about calling out 'cycle-highlights' in your > deliverables! > >> > >> As PTLs, you probably get many pings towards the end of every release > cycle by various parties (marketing, management, journalists, etc) asking > for highlights of what is new and what significant changes are coming in > the new release.
By putting them all in the same place it makes them easy > to reference because they get compiled into a pretty website like this from > the last few releases: Rocky[1], Stein[2], Train[3]. > >> > >> We don't need a fully fledged marketing message, just a few highlights > (3-4 ideally), from each project team. Looking through your release notes > might be a good place to start. > >> > >> The deadline for cycle highlights is the end of the R-5 week [4] on > April 10th. > >> > >> How To Reminder: > >> ------------------------- > >> > >> Simply add them to the deliverables/train/$PROJECT.yaml in the > openstack/releases repo like this: > >> > >> cycle-highlights: > >> - Introduced new service to use unused host to mine bitcoin. > >> > >> The formatting options for this tag are the same as what you are > probably used to with Reno release notes. > >> > >> Also, you can check on the formatting of the output by either running > locally: > >> > >> tox -e docs > >> > >> And then checking the resulting doc/build/html/train/highlights.html > file or the output of the build-openstack-sphinx-docs job under > html/train/highlights.html. > >> > >> Can't wait to see what you all have accomplished this release! > >> > >> Thanks :) > >> > >> -Kendall Nelson (diablo_rojo) > >> > >> [1] https://releases.openstack.org/rocky/highlights.html > >> [2] https://releases.openstack.org/stein/highlights.html > >> [3] https://releases.openstack.org/train/highlights.html > >> [4] https://releases.openstack.org/ussuri/schedule.html > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgoncalves at redhat.com Sun Apr 5 11:19:58 2020 From: cgoncalves at redhat.com (Carlos Goncalves) Date: Sun, 5 Apr 2020 13:19:58 +0200 Subject: [octavia] dropping support for stable/queens In-Reply-To: References: Message-ID: Hey folks, Apologies for my late response in this thread. This topic was also discussed on IRC #openstack-lbaas on March 19 [1].
I had some initial reservations that were clarified as we discussed the topic further. I am supportive now of declaring the Queens release EOL. Carlos [1] http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23openstack-lbaas.2020-03-19.log.html On Fri, Mar 20, 2020 at 2:44 AM Michael Johnson wrote: > Hi Erik, > > The Queens branch entered Extended Maintenance (EM) on August 25th, > 2019 (later extended to October 25th, 2019 to accommodate release team > resources[1]) and was tagged EM on November 6th, 2019. > > This proposal is to exit Extended Maintenance and mark the Queens > release as End-of-Life(EOL). This is following the guidelines for > stable branch releases[2]. > > Currently the Octavia team is maintaining five branches and supporting > queens is becoming more difficult as the test jobs are still based on > the Ubuntu Xenial release (released April 21. 2016). The python 2.7 > end-of-life combined with the older OS version has made it hard to > keep reliable test jobs[3]. Backporting patches is also becoming more > difficult as the Octavia code base has rapidly evolved in the two > years since it was released. > > If you or others are willing to help us with fixing the queens jobs > and backporting patches, that would be great and we may be able to > leave the queens branch in Extended Maintenance. If not, Adam is > proposing we move it into EOL. > > Personally, I agree with Adam that it is time to mark the Queens > release as EOL. Since you can run newer releases of Octavia on older > clouds (as many do), the branch cannot release under EM, and patches > on that branch are few. 
> > Michael > > [1] > https://github.com/openstack/releases/commit/be797b61677b1eafc0162bd664f09db54287ac81#diff-8fa0a5ef8165646609d2f3d50646c597 > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > [3] > https://zuul.openstack.org/builds?project=openstack%2Foctavia&branch=stable%2Fqueens&pipeline=check&job_name=octavia-v2-dsvm-scenario > > On Thu, Mar 19, 2020 at 11:40 AM Erik McCormick > wrote: > > > > > > On Thu, Mar 19, 2020, 11:57 AM Adam Harwell wrote: > >> > >> I'm officially proposing dropping support for the queens stable branch > of Octavia. > >> Given a quorum of cores, we will move forward with marking it EOL and > all that entails. > > > > > > Adam, > > > > Why can't it go into EM Instead of EOL? > > > > Thanks > > Erik > > > >> > >> Thanks, > >> --Adam > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sun Apr 5 20:06:45 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 5 Apr 2020 22:06:45 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: Hello Jean-Philippe, > what do you _technically_ will do during your mandate, what do you > _actively_ want to change in OpenStack? > > This can be a change in governance, in the projects, in the current > structure... it can be really anything. I am just hoping to see > practical OpenStack-wide changes here. It doesn't need to be a fully > refined idea, but something that can be worked upon. I feel I answered this, at least partially, in my nomination notice. I will try to expand it: My opinion is we need to invest more in delivery of OpenStack deliverables (pun was not avoidable). 
I want to design and coordinate efforts around usage of deployment-ready prebuilt images so that they can be used both for CI, to avoid lengthy, repetitive processes, and for friendlier end-user consumption. I am thinking of something Kolla-esque, as LOCI does not seem that alive nowadays (and does not seem to get the level of testing Kolla has from Kolla Ansible and TripleO, as both deployment projects consume Kolla images). The final idea has not fully crystallized yet. Another point for designing around Kolla is that it provides both pre-requisites for OpenStack projects (think MariaDB, RabbitMQ) and services that work with OpenStack projects to deliver more value (e.g. collectd, Prometheus). On that matter, since Ironic's plans for independence (more on that in a separate message of mine) have stirred such a lively discussion, I think my idea aligns with the standalone goals of such projects. It would be easy to design deployment scenarios of subsets of OpenStack projects that work together to achieve some cloud-imagined goal other than plain IaaS. Want some baremetal provisioning? Here you go. Need a safe place for credentials? Be my guest! Since "kolla" is Greek for glue, I plan to use this wording for marketing of "gluing" the community around OpenStack projects. -yoctozepto From radoslaw.piliszek at gmail.com Sun Apr 5 20:07:21 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 5 Apr 2020 22:07:21 +0200 Subject: [tc][election] campaign discussion: what we can improve in TC & how? In-Reply-To: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> References: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> Message-ID: Hello Ghanshyam, > What you think we should and must improve in TC ? This can be > the involvement of TC in the process from the governance point of view > or technical help for each project.
> > - Do we have too much restriction on project sides and not giving them > a free hand? If yes, what we can improve and how? From my current point of view, OpenStack TC is very liberal. I base my opinion on some discussions of yours I read on ML and IRC and also the non-observability of TC influences in Kolla. :-) I think the current level of control is just right for many projects. But maybe not all. I guess this is a good question to ask all OpenStackers rather than just us. That said, I believe it is wise to consider this broad topic in the context of the recent Ironic thread on this ML. The not-so-well-defined goal of TC for the upcoming times would be to redefine OpenStack as something more (or maybe even "else") than open source platform for doing IaaS. OpenStack is, as the name gladly suggests, a stack, a pile of open source software projects, mostly in Python, sharing common quality standards (or at least trying to!) under TC guidance. It should be considered laudable to be part of OpenStack rather than seek a way to escape it. If it is not, then we might as well disband this and go home (btw, #stayhome). As for simpler matters, TC might assume and advertise its role as coordinator of cross-project efforts. And I don't mean current community goals. I am thinking: if someone sees that by using project X and project Y one could potentially achieve great thing Z, TC should be offering its guidance on how to best approach this, in coordination with cores from the relevant projects, and not in a way that enforces TC to always intervene. Note this idea aligns with possible upcoming TC-UC merger. > - Is there less interaction from TC with projects? I am sure few > projects/members even do not know even what TC is for? What's your > idea to solve this. I think this is partly because OpenStack core projects are considered very mature.
Continuing on the thought of control, quality and prestige associated with OpenStack, a good short-term goal would be to revisit the OpenStack projects and possibly restructure/deprecate some that need this - considering both integral usability as well as standalone. I don't think TC transparency needs 'fixing'. This is actually a good thing (TM) - as long as projects deliver quality we expect, that is. -yoctozepto From radoslaw.piliszek at gmail.com Sun Apr 5 20:07:53 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 5 Apr 2020 22:07:53 +0200 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: Hello Ghanshyam, > To develop or maintain any software, the very first thing we need is > to have enough developer resources. Without enough developers (either > open or closed source), none of the software can survive. > > OpenStack current situation on contributors is not the same as it was > few years back. Almost every project is facing the less contributor > issue as compare to requirements and incoming requests. Few projects > already dead or going to be if we do not solve the less contributors > issue now. > > I know, TC is not directly responsible to solve this issue but we > should do something or at least find the way who can solve this. > > What do you think about what role TC can play to solve this? > What platform or entity can be used by TC to raise this issue? > or any new crazy Idea? I believe this is well beyond my area of expertise and you might have already answered yourself in that this is well beyond what TC could realistically do. ;-) On the other hand, the points I discuss in the other threads may contribute to change in perception of OpenStack as a whole and make it easier to find eager contributors.
Another fact is this should be really considered per-project. The core projects are churning well, then non-core not-so-much. Yet another thing to notice here is that human success of OpenStack depends A LOT on human success of OpenDev. Despite the split, TC should keep a close eye on OpenDev progression. -yoctozepto From radoslaw.piliszek at gmail.com Sun Apr 5 20:08:33 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 5 Apr 2020 22:08:33 +0200 Subject: [all][PTL][release] Call for Ussuri Cycle Highlights In-Reply-To: References: Message-ID: Noted, thanks! -yoctozepto niedz., 5 kwi 2020 o 06:42 Kendall Nelson napisał(a): > > Yes you can definitely add/edit them later! > > That said, whatever the state is at the deadline is what will be used for marketing for the release. > > -Kendall (diablo_rojo) > > On Sat, 4 Apr 2020, 10:53 am Radosław Piliszek, wrote: >> >> Hi Kendall, >> >> we (Kolla) added our cycle highlights already not to miss the deadline >> but we are cycle-trailing and could be looking for a way to amend if >> we manage to squeeze more features in (note we are still well before >> feature freeze). >> >> Could you advise whether cycle-trailing projects could amend their >> cycle highlights later? >> >> -yoctozepto >> >> pt., 3 kwi 2020 o 22:21 Kendall Nelson napisał(a): >> > >> > Hello Everyone! >> > >> > Wanted to bring this to the top of people's inboxes as we are getting close to the deadline! >> > >> > Next week, April 9th (same as Feature Freeze and m3) is the deadline for cycle highlights. If you add them later than that, they might not make it into the release marketing messaging. >> > >> > Can't wait to see what all the projects have accomplished! >> > >> > -Kendall (diablo_rojo) >> > >> > On Wed, Feb 26, 2020 at 12:51 PM Kendall Nelson wrote: >> >> >> >> Hello Everyone! >> >> >> >> Its time to start thinking about calling out 'cycle-highlights' in your deliverables! 
>> >> >> >> As PTLs, you probably get many pings towards the end of every release cycle by various parties (marketing, management, journalists, etc) asking for highlights of what is new and what significant changes are coming in the new release. By putting them all in the same place it makes them easy to reference because they get compiled into a pretty website like this from the last few releases: Rocky[1], Stein[2], Train[3]. >> >> >> >> We don't need a fully fledged marketing message, just a few highlights (3-4 ideally), from each project team. Looking through your release notes might be a good place to start. >> >> >> >> The deadline for cycle highlights is the end of the R-5 week [4] on April 10th. >> >> >> >> How To Reminder: >> >> ------------------------- >> >> >> >> Simply add them to the deliverables/train/$PROJECT.yaml in the openstack/releases repo like this: >> >> >> >> cycle-highlights: >> >> - Introduced new service to use unused host to mine bitcoin. >> >> >> >> The formatting options for this tag are the same as what you are probably used to with Reno release notes. >> >> >> >> Also, you can check on the formatting of the output by either running locally: >> >> >> >> tox -e docs >> >> >> >> And then checking the resulting doc/build/html/train/highlights.html file or the output of the build-openstack-sphinx-docs job under html/train/highlights.html. >> >> >> >> Can't wait to see you all have accomplished this release! 
>> >> >> >> Thanks :) >> >> -Kendall Nelson (diablo_rojo) >> >> [1] https://releases.openstack.org/rocky/highlights.html >> >> [2] https://releases.openstack.org/stein/highlights.html >> >> [3] https://releases.openstack.org/train/highlights.html >> >> [4] https://releases.openstack.org/ussuri/schedule.html >> >> From knikolla at bu.edu Sun Apr 5 20:27:53 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Sun, 5 Apr 2020 20:27:53 +0000 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: <8FB6369D-925B-4617-B009-E765948100E4@bu.edu> Hi JP, There are two areas I can think of off the top of my head. The first is making sure that OpenStack is approachable to new contributors. This includes more documentation-related community goals, which I consider especially important when a lot of people that have been here forever start to move on to new ventures, taking with them a lot of tribal and undocumented knowledge. We have to start being able to do more with less. The second is investigating avenues for better integration with other open source communities and projects. It's very likely that OpenStack is only one of the many tools in an operator's toolbox (e.g., we're also running OpenShift and our own bare metal provisioner.) Once we have acknowledged those integration points, we can start documenting, testing and developing with those in mind as well. The end result will be that deployers can pick OpenStack, or a specific set of OpenStack components, and know that what they are trying to do is possible, is tested and is reproducible. Thank you, Kristi Nikolla > On Apr 2, 2020, at 6:26 AM, Jean-Philippe Evrard wrote: > > Hello, > > I read your nominations, and as usual I will ask what do you > _technically_ will do during your mandate, what do you _actively_ want > to change in OpenStack?
> > This can be a change in governance, in the projects, in the current > structure... it can be really anything. I am just hoping to see > practical OpenStack-wide changes here. It doesn't need to be a fully > refined idea, but something that can be worked upon. > > Thanks for your time. > > Regards, > Jean-Philippe Evrard > > > > From Arkady.Kanevsky at dell.com Mon Apr 6 03:14:51 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 6 Apr 2020 03:14:51 +0000 Subject: choosing object storage In-Reply-To: <215ab035-fc25-c7ff-dc74-657c0343b3ae@plum.ovh> References: <215ab035-fc25-c7ff-dc74-657c0343b3ae@plum.ovh> Message-ID: Dell Customer Communication - Confidential Either one will work. First define what criteria you have. -----Original Message----- From: Tessa Plum Sent: Friday, April 3, 2020 8:47 PM To: openstack-discuss at lists.openstack.org Subject: choosing object storage [EXTERNAL EMAIL] greetings, What object storage system to choose for integration with openstack environment? ceph or swift? Thank you. From mark.kirkwood at catalyst.net.nz Mon Apr 6 06:47:36 2020 From: mark.kirkwood at catalyst.net.nz (Mark Kirkwood) Date: Mon, 6 Apr 2020 18:47:36 +1200 Subject: choosing object storage In-Reply-To: <215ab035-fc25-c7ff-dc74-657c0343b3ae@plum.ovh> References: <215ab035-fc25-c7ff-dc74-657c0343b3ae@plum.ovh> Message-ID: <51cb6461-dfa6-3b2b-c053-0318bc280c19@catalyst.net.nz> There are a number of considerations (disclaimer: we run Ceph block and Swift object storage): Purely on a level of simplicity, Swift is easier to set up. However, if you are already using Ceph for block storage then it makes sense to keep using it for object too (since you are likely to be an expert at Ceph at this point).
On the other hand, if you have multiple Ceph clusters and want a geo-replicated object storage solution, then doing this with Swift is much easier than with Ceph (geo-replicated RGW still looks to be really complex to set up - a long page of arcane commands). Finally (this is my 'big deal point'): I'd like my block and object storage to be completely independent - suppose a situation nukes my block storage (Ceph) - if my object storage is Swift then people's backups etc are still viable and when the Ceph cluster is rebuilt we can restore and continue. On the other hand, if your object storage is Ceph too then.... regards Mark On 4/04/20 2:47 pm, Tessa Plum wrote: > greetings, > > What object storage system to choose for integration with openstack > environment? ceph or swift? > > Thank you. > From jean-philippe at evrard.me Mon Apr 6 07:13:35 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 06 Apr 2020 09:13:35 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: <1c7f1f490722a992283539553d2e78c62fc866e7.camel@evrard.me> On Fri, 2020-04-03 at 12:11 -0400, Jeremy Freudberg wrote: > it would be really cool if we had > some manner of documenting (repository tag?) which OpenStack services > are able to run standalone or be reused outside of OpenStack. This is > important information to highlight as standalone/reusable seems to be > an important part of OpenStack's future. I could see it leading to a > boost in new contributors too. I like this. Should this be self-asserted by the teams, or should we provide some kind of validation? For teams that are very close but have other OpenStack service dependencies, should the TC work on helping to remove those dependencies?
Regards, JP From jean-philippe at evrard.me Mon Apr 6 07:18:17 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 06 Apr 2020 09:18:17 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: On Sat, 2020-04-04 at 09:42 -0400, Donny Davis wrote: > > > > I don't have all of the answers to the technical questions, and I > > am not even > > sure how the community would feel about this. But I think a focus > > on the gate > > benefits everyone. Yes, gates (and coordinated gate testing) matters a lot, IMO! I like this. It seems like a couple of TC members should/could help on this. I think people outside the TC can also help, for example, clarkb :) Awesome! Regards, JP From jean-philippe at evrard.me Mon Apr 6 07:41:25 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 06 Apr 2020 09:41:25 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: On Sun, 2020-04-05 at 22:06 +0200, Radosław Piliszek wrote: > > My opinion is we need to invest more in delivery of OpenStack > deliverables (pun was not avoidable). I want to design and coordinate > efforts around usage of deployment-ready prebuilt images so that they > can be used both for CI to avoid lengthy, repeatable processes and > for friendlier end-user consumption. That sounds like what I hoped to work on when I started the containers SIG, but I didn't get the chance to work on it recently. So you know where I stand :) > I am thinking something Kolla-esque > as LOCI does not seem that alive nowadays (and does not seem to be > getting that much level of testing Kolla has from Kolla Ansible and > TripleO as both deployment projects consume Kolla images). Question: Why would we want another competing project? How do you intend to work with Kolla? 
Do you want to have this image building in the projects, and use another tooling to deploy those images? Did you start collaborating/discussing with non-TripleO projects on this? > > On that matter, since Ironic plans of independence (more on that in > a separate message of mine) have stirred such a lively discussion, > I think my idea aligns with standalone goals of such projects. Good news, that's what the TC is for :) > It would be easy to design deployment scenarios of subsets of > OpenStack > projects that work together to achieve some cloud-imagined goal other > than plain IaaS. Want some baremetal provisioning? Here you go. > Need a safe place for credentials? Be my guest! I am not sure what you mean there? Do you want to map OpenStack "sample configs" with gate jobs? Regards, JP From jean-philippe at evrard.me Mon Apr 6 07:45:47 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 06 Apr 2020 09:45:47 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <8FB6369D-925B-4617-B009-E765948100E4@bu.edu> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> <8FB6369D-925B-4617-B009-E765948100E4@bu.edu> Message-ID: On Sun, 2020-04-05 at 20:27 +0000, Nikolla, Kristi wrote: > > The first, is making sure that OpenStack is approachable to new > contributors. This includes more documentation related community > goals, which I consider especially important when a lot of people > that have been here forever start to move on to new ventures, taking > with them a lot of tribal and undocumented knowledge. We have to > start being able to do more with less. I think this maps very well with Kendall's efforts :) What do you particularily have in mind? > > The second, is investigating avenues for better integration with > other open source communities and projects. 
It's very likely that > OpenStack is only one of the many tools in an operators toolbox (eg., > we're also running OpenShift, and our own bare metal provisioner.) > Once we have acknowledged those integration points, we can start > documenting, testing and developing with those in mind as well. The > end result will be that deployers can pick OpenStack, or a specific > set of OpenStack components and know that what they are trying to do > is possible, is tested and is reproducible. Does that mean you want to introduce a sample configuration to deploy OpenShift on top of OpenStack and do conformance testing, in our jobs? Or did I get that wrong? Please note the CI topic maps with other candidates' efforts, so I already see future collaboration happening. I am glad to have asked my questions :) Regards, JP From jean-philippe at evrard.me Mon Apr 6 07:49:11 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 06 Apr 2020 09:49:11 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: On Mon, 2020-04-06 at 09:41 +0200, Jean-Philippe Evrard wrote: > Question: Why would we want another competing project? How do you > intend to work with Kolla? Do you want to have this image building in > the projects, and use another tooling to deploy those images? Did you > start collaborating/discussing with non-TripleO projects on this? Maybe I should rephrase. How do you want to make this work with Kolla, Triple-O, and other deployment projects outside those two? Do we distribute and collaborate (each project got a way to build its images), or do we centralize (LOCI/kolla - way)? From dtantsur at redhat.com Mon Apr 6 07:51:21 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 6 Apr 2020 09:51:21 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? 
In-Reply-To: References: <20200402211834.mxr7tsdoofpriase@firewall> Message-ID: On Fri, Apr 3, 2020 at 2:48 AM Lingxian Kong wrote: > I see we are talking about another "Gnocchi", when Gnocchi moved out of > OpenStack, people said they could run Gnocchi in standalone mode without > installing the other OpenStack services, then they changed default > dependency of some other projects (Ceilometer, Panko, etc) to Gnocchi. > As a result, they are all dead (or almost dead). > I'd be very careful comparing Ironic to Gnocchi/Telemetry. I think the fate that Telemetry met was largely due to staffing problems, more specifically, all large contributors pulling away from it. It would end up the same inside or outside of OpenStack. > > Another example is a long time ago in one OpenStack project, there was a > demand for secret management, people said, Barbican is not mature and > not production ready yet, we shouldn't dependent on Barbican but could > make it optional, as a result, Barbican never adopted in the project in > real deployment. > I don't know much about the Barbican situation, but there may be other explanations. Some operators are against deploying any new service unless absolutely necessary, because any new service is a maintenance burden. At the Denver PTG we were talking about non-Keystone authentication in Ironic. Keystone is arguably very trivial to install, and still it meets some resistance. > > I have been involved in OpenStack community since 2013, I see people > came and left, I see projects created and died, until now, there are > only a few of projects alive and actively maintained. IMHO, as a > community, we should try our best to integrate projects with each other, > no project can live well without some others help, projects rarely > stand or fall alone. > To be clear, my proposal does not affect this. Specifically: 1) I don't suggest reducing the number of integration points. 
2) Integration points with OpenStack services are already optional in Ironic. What exactly is your concern? Ironic dropping integration points altogether? We don't plan on that. Dmitry > > Well, I'm not part of TC, I'm not the person or team can decide how > Ironic project goes in this situation. But as a developer who is trying > very hard to maintain several OpenStack projects, that what I'm > thinking. > > My 0.02. > > - > Best regards, > Lingxian Kong > Catalyst Cloud > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Mon Apr 6 07:55:37 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 06 Apr 2020 09:55:37 +0200 Subject: [all][summary] Curating the openstack org on GitHub In-Reply-To: References: <1b08c425-8cd9-f7fc-9865-5efe9a44fcef@openstack.org> Message-ID: On Thu, 2020-04-02 at 13:39 +0100, Graham Hayes wrote: > > If only to avoid someone squatting on the org, I think we should keep > them. Not sure if we need them for the redirects, but I think it is > safer to keep an empty org. > I don't think it hurts to keep them, and point to archive in their description/displayed name. Regards, JP From dtantsur at redhat.com Mon Apr 6 08:10:20 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 6 Apr 2020 10:10:20 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: Hi, On Fri, Apr 3, 2020 at 9:35 AM Jean-Philippe Evrard wrote: > Hello, > > On Thu, 2020-04-02 at 12:38 +0200, Dmitry Tantsur wrote: > > Hi, > > > > (snipped) > > > > > > People do get surprised when they hear that Ironic can be used > > standalone, yes. "A part of OpenStack" maps to "installed inside > > OpenStack" rather than "is developed on the OpenStack platform". 
> > That's indeed what we need to change. > > > > > > If you consider OpenStack taints this "standalone" part of Ironic, > > > do you think that putting it as a top project of the **OpenStack > > > Foundation ** will help? I don't think so. People will still see > > > it's an OpenStack _related_ technology, with a history of being an > > > OpenStack project, which is now a standalone project inside the > > > OpenStack foundation. At best, it confuses people which are not > > > really aware of all these details. > > > > > > > Time to rename the Foundation? :) How is the same problem solved for > > Zuul or Kata Containers? > > The first made me smile :) > > I would say that Kata and Zuul have a different history. To my eyes, > Kata started as completely separate. How Zuul will eventually manage to > detach itself from the OpenStack name could be interesting. Please note > that I have received the same questions (Do I need openstack? Is it a part > of openstack?) when questioned about Zuul, in some events. > > > (snipped) > > > As an aside, I don't think gnocchi fell victim of their split, but > > rather shared the overall fate of the Telemetry project. > > I don't disagree. > > > I also think your suggestion goes against the idea of OpenDev, which > > to me is to embrace a vast collection of Open Infrastructure > > projects, related to OpenStack or not. If you say that anything going > > to OpenDev will be seen as an OpenStack project, it defeats the > > purpose of OpenDev. > > I wrongly worded this then. This is not my intent. OpenDev is a good > name/good branding IMO. It feels detached from OpenStack. I can totally > see many projects to be successful there without appearing to be > attached to the OpenStack name. For people searching a little bit, it > wouldn't take long to see that OpenStack was behind OpenDev, and > therefore people can still attach the name if they want. I think that > what matters is to be explicit in the project message.
> > (snipped) > > > > Can't we work on the branding, naming, and message without the > > > move? Why the hassle of moving things? Does that really bring value > > > to your team? Before forging my final (personal) opinion, I would > > > like more information than just gut feelings. > > > > > > > It's not "just gut feelings", it's the summary of numerous > > conversations that Julia and I have to hold when advocating for > > Ironic outside of the Nova context. We do this "Ironic does not imply > > OpenStack" explanation over and over, often enough unsuccessfully. > > Let me rephrase this: Do you have feedback from people not active in > the project which would be happy to step up/in if Ironic was not an > OpenStack project anymore? What could be changed from the OpenStack > side to change that mindset? > In my case it's more about potential adoption rather than contributions. Then there is this problem: I probably don't hear from a lot of people who silently walk away assuming that "an OpenStack project" requires OpenStack. We can try to educate people on a case-by-case basis, but to solve the problem completely we may need to stop being "an OpenStack project" even if we don't actually quit the OpenStack umbrella. > > > And then some people just don't like OpenStack... > > I don't disagree, sadly. I know it's a hard task, but I prefer tackling > this. Make OpenStack (more) likeable. To me, that seems a better goal in > itself. But that's maybe me :) > > > Now, I do agree that there are steps that can be taken before we go > > all nuclear. We can definitely work on our own website, we can reduce > > reliance on oslo, start releasing independently, and so on. I'm > > wondering what will be left of our participation in OpenStack in the > > end. Thierry has suggested the role of the TC in ensuring > > integration. 
I'm of the opinion that if all stakeholders in Ironic > > lose interest in Ironic as part of OpenStack, no power will prevent > > the integration from slowly falling apart. > > I don't see it that way. I see this as an opportunity to make OpenStack > more clear, more reachable, more interesting. For me, Ironic, Cinder, > Manila (to only name a few), are very good example of datacenter/IaaS > software that could be completely independent in their consumption, and > additionally released together. For me, the strength of OpenStack was > always in the fact it had multiple small projects that work well > together, compared to a single big blob of software which does > everything. We just didn't bank enough on the standalone IMHO. But I am > sure we are aligned there... Wouldn't the next steps be instead to make > it easier to consume standalone? > For us one of the problems, as I've mentioned already, is producing releases more often. Now, the point of potential misunderstanding is this: we can (and do) release more often than once in 6 months. These releases, however, do not enjoy the same degree of support as the "blessed" releases, especially when it comes to upgrades and longer-term support. Partly related to that, most of the tooling to install and operate OpenStack services seemingly: 1) Don't support non-coordinated releases. 2) Don't support standalone installation at all or make it a 2nd class citizen. Ironic provides Bifrost to partly cover this gap. > > Also, how is the reliance on oslo a problem? Do you want to use another > library in the python ecosystem instead? If true, what about phasing > out that part of oslo, so we don't have to maintain it? Just curious. > The problem is that oslo libraries are OpenStack-specific. Imagine metal3, for example. When building our images, we can pull (most of) regular Python packages from the base OS, but everything with "oslo" in its name is on us. It's a maintenance burden. 
With absolutely no disrespect meant to the awesome Oslo team, I think the existence of Oslo libraries is a bad sign. I think as a strong FOSS community we shouldn't invest into libraries that are either useful only to us or at least are marketed this way. For example: 1) oslo.config is a fantastic piece of software that the whole python world could benefit from. Same for oslo.service, probably. 2) oslo.utils as a catch-all repository of utilities should IMO be either moved to existing python projects or decomposed into small generally reusable libraries (essentially, each sub-module could be a library of its own). Same for oslo.concurrency. 3) I'm genuinely not sure why oslo.log and oslo.i18n exist and which parts of their functionality cannot be moved upstream. > > > > (snipped) I'm referring to a very narrow sense of Nova+company. I.e. > > a solution for providing virtual machines booting from virtual > > volumes on virtual networks. Ironic does not clearly fit there, nor > > does, say, Zuul. > > Got it. That's not my understanding of what OpenStack is, but I concede > that I might have a different view than most. > It's not mine either. I wonder if it's a good thing. I wonder if OpenStack would be (using your own words) more likeable if it was clearer what OpenStack is. Dmitry > > > > > > (snipped) Please note that I am still writing an idea in our ideas > > > framework, proposing a change in the release cycles (that > > > conversation again! but with a little twist), which I guess you > > > might be interested in. > > > > > > > Please let me know when it's ready, I really am interested. > > Will do! > > Regards, > JP > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Mon Apr 6 08:15:59 2020 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 6 Apr 2020 10:15:59 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project?
In-Reply-To: References: <20200402211834.mxr7tsdoofpriase@firewall> Message-ID: On 06.04.2020 09:51, Dmitry Tantsur wrote: > > > On Fri, Apr 3, 2020 at 2:48 AM Lingxian Kong > wrote: > > I see we are talking about another "Gnocchi", when Gnocchi moved out of > OpenStack, people said they could run Gnocchi in standalone mode without > installing the other OpenStack services, then they changed default > dependency of some other projects (Ceilometer, Panko, etc) to Gnocchi. > As a result, they are all dead (or almost dead). > > > I'd be very careful comparing Ironic to Gnocchi/Telemetry. I think the > fate that Telemetry met was largely due to staffing problems, more > specifically, all large contributors pulling away from it. It would end > up the same inside or outside of OpenStack. > > > Another example is a long time ago in one OpenStack project, there was a > demand for secret management, people said, Barbican is not mature and > not production ready yet, we shouldn't dependent on Barbican but could > make it optional, as a result, Barbican never adopted in the project in > real deployment. > > > I don't know much about the Barbican situation, but there may be other > explanations. Some operators are against deploying any new service > unless absolutely necessary, because any new service is a maintenance > burden. > > At the Denver PTG we were talking about non-Keystone authentication in > Ironic. Keystone is arguably very trivial to install, and still it meets > some resistance. > > > I have been involved in OpenStack community since 2013, I see people > came and left, I see projects created and died, until now, there are > only a few of projects alive and actively maintained. IMHO, as a > community, we should try our best to integrate projects with each other, > no project can live well without some others help, projects rarely > stand or fall alone. > > > To be clear, my proposal does not affect this. 
Specifically: > 1) I don't suggest reducing the number of integration points. But having *more* integration points and functional duplication, like internal per-project authorization, coordination (placement/messaging), and shared libraries, indirectly reduces the integration points in OpenStack and pulls contributors away by spreading their focus on things that otherwise would have been shipped and maintained "out of the box" (or out of the big tent). Not ranting; I understand that it is pointless to complain against inevitability. > 2) Integration points with OpenStack services are already optional in > Ironic. > > What exactly is your concern? Ironic dropping integration points > altogether? We don't plan on that. > > Dmitry > > > Well, I'm not part of TC, I'm not the person or team can decide how > Ironic project goes in this situation. But as a developer who is trying > very hard to maintain several OpenStack projects, that what I'm > thinking. > > My 0.02. > > - > Best regards, > Lingxian Kong > Catalyst Cloud > -- Best regards, Bogdan Dobrelya, Irc #bogdando From dtantsur at redhat.com Mon Apr 6 08:40:30 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 6 Apr 2020 10:40:30 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <20200402211834.mxr7tsdoofpriase@firewall> Message-ID: On Mon, Apr 6, 2020 at 10:18 AM Bogdan Dobrelya wrote: > On 06.04.2020 09:51, Dmitry Tantsur wrote: > > > > > > On Fri, Apr 3, 2020 at 2:48 AM Lingxian Kong > > wrote: > > > > I see we are talking about another "Gnocchi", when Gnocchi moved out > of > > OpenStack, people said they could run Gnocchi in standalone mode > without > > installing the other OpenStack services, then they changed default > > dependency of some other projects (Ceilometer, Panko, etc) to > Gnocchi. > > As a result, they are all dead (or almost dead). > > > I'd be very careful comparing Ironic to Gnocchi/Telemetry.
I think the > > fate that Telemetry met was largely due to staffing problems, more > > specifically, all large contributors pulling away from it. It would end > > up the same inside or outside of OpenStack. > > > > > > Another example is a long time ago in one OpenStack project, there > was a > > demand for secret management, people said, Barbican is not mature and > > not production ready yet, we shouldn't dependent on Barbican but > could > > make it optional, as a result, Barbican never adopted in the project > in > > real deployment. > > > > > > I don't know much about the Barbican situation, but there may be other > > explanations. Some operators are against deploying any new service > > unless absolutely necessary, because any new service is a maintenance > > burden. > > > > At the Denver PTG we were talking about non-Keystone authentication in > > Ironic. Keystone is arguably very trivial to install, and still it meets > > some resistance. > > > > > > I have been involved in OpenStack community since 2013, I see people > > came and left, I see projects created and died, until now, there are > > only a few of projects alive and actively maintained. IMHO, as a > > community, we should try our best to integrate projects with each > other, > > no project can live well without some others help, projects rarely > > stand or fall alone. > > > > > > To be clear, my proposal does not affect this. Specifically: > > 1) I don't suggest reducing the number of integration points. > > But having *more* integration points and functional duplication, like > internal project's authorization, coordination (placement/messaging), > shared libraries, indirectly reduce the integration points in OpenStack > and pulls off contributors by spreading its focus on that otherwise > would have been shipped and maintained "out of box" (or out of big > tent). Not ranting, I understand that it is pointless to complain > against inevitability. 
> On the other hand, not having some of these prevents adoption (for example, the requirement of RabbitMQ has been a huge deal for standalone adoption and was considered a blocker for metal3). Dmitry > > > 2) Integration points with OpenStack services are already optional in > > Ironic. > > > > What exactly is your concern? Ironic dropping integration points > > altogether? We don't plan on that. > > > > Dmitry > > > > > > Well, I'm not part of TC, I'm not the person or team can decide how > > Ironic project goes in this situation. But as a developer who is > trying > > very hard to maintain several OpenStack projects, that what I'm > > thinking. > > > > My 0.02. > > > > - > > Best regards, > > Lingxian Kong > > Catalyst Cloud > > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Apr 6 09:57:14 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 6 Apr 2020 11:57:14 +0200 Subject: [largescale-sig] Next meeting: Apr 8, 8utc Message-ID: <3e8142ac-713d-c1b8-0f48-c8b02d374732@openstack.org> Hi everyone, The Large Scale SIG will have a meeting this week on Wednesday, April 8 at 8 UTC[1] in #openstack-meeting on IRC. 
The meeting time was changed at the last meeting to be more friendly to our Asian participants, so please double-check how that translates for you at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200408T08 Feel free to add topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting A reminder of the TODOs we had from last meeting, in case you have time to make progress on them: - ttx to push for oslo.metric spec approval - oneswig to contribute a scaling story on bare metal cluster scaling - amorin to create a wiki page for large scale documentation - amorin to propose patch against Nova doc Talk to you all on Wednesday, -- Thierry Carrez From elod.illes at est.tech Mon Apr 6 10:22:45 2020 From: elod.illes at est.tech (Előd Illés) Date: Mon, 6 Apr 2020 12:22:45 +0200 Subject: [octavia] dropping support for stable/queens In-Reply-To: References: Message-ID: <1399f8c5-899b-d81b-50a1-3cd0bb72f417@est.tech> Hi, First of all, sorry for the late response. I've checked the current state of Octavia in the *openstack/releases* repository and it seems even Ocata is not yet EOL'd. As far as I see from the discussion, the Team wants to move Queens and Pike to EOL. I suggest doing it branch by branch, so as a first step please EOL Ocata with the steps described in the Stable Branches page [1]. If that is ready, then do the same for Pike, and then Queens. If you have any questions feel free to ping me on IRC (elod @ #openstack-stable or #openstack-releases) and I'll try to help you. Note: Extended Maintenance helps operators / interested parties by having a common place where they can cooperate and push (backport) bugfixes to ease everyone's work. As long as there is someone (in the best case: more people) who comes and fixes bugs and maintains the CI, the "EM" branches can be kept open. (To keep the CI functional, it is usually easier if it is done regularly with small fixes, when some failure comes in).
Of course, if there are no volunteers who fix the gate failures and a branch becomes "Unmaintained" for ~6 months, then it should be EOL'd. Sorry if this was obvious, just wanted to have it here :) Thanks, Előd [1] https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life On 2020. 04. 05. 13:19, Carlos Goncalves wrote: > Hey folks, > > Apologies for my late response in this thread. > > This topic was also discussed on IRC #openstack-lbaas on March 19 [1]. > I had some initial reservations that were clarified as we discussed > the topic further. I am supportive now of declaring the Queens release > EOL. > > Carlos > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23openstack-lbaas.2020-03-19.log.html > > On Fri, Mar 20, 2020 at 2:44 AM Michael Johnson > wrote: > > Hi Erik, > > The Queens branch entered Extended Maintenance (EM) on August 25th, > 2019 (later extended to October 25th, 2019 to accommodate release team > resources[1]) and was tagged EM on November 6th, 2019. > > This proposal is to exit Extended Maintenance and mark the Queens > release as End-of-Life (EOL). This is following the guidelines for > stable branch releases[2]. > > Currently the Octavia team is maintaining five branches and supporting > queens is becoming more difficult as the test jobs are still based on > the Ubuntu Xenial release (released April 21, 2016). The python 2.7 > end-of-life combined with the older OS version has made it hard to > keep reliable test jobs[3]. Backporting patches is also becoming more > difficult as the Octavia code base has rapidly evolved in the two > years since it was released. > > If you or others are willing to help us with fixing the queens jobs > and backporting patches, that would be great and we may be able to > leave the queens branch in Extended Maintenance. If not, Adam is > proposing we move it into EOL. > > Personally, I agree with Adam that it is time to mark the Queens > release as EOL.
Since you can run newer releases of Octavia on older > clouds (as many do), the branch cannot release under EM, and patches > on that branch are few. > > Michael > > [1] > https://github.com/openstack/releases/commit/be797b61677b1eafc0162bd664f09db54287ac81#diff-8fa0a5ef8165646609d2f3d50646c597 > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > [3] > https://zuul.openstack.org/builds?project=openstack%2Foctavia&branch=stable%2Fqueens&pipeline=check&job_name=octavia-v2-dsvm-scenario > > On Thu, Mar 19, 2020 at 11:40 AM Erik McCormick > > > wrote: > > > > > > On Thu, Mar 19, 2020, 11:57 AM Adam Harwell > wrote: > >> > >> I'm officially proposing dropping support for the queens stable > branch of Octavia. > >> Given a quorum of cores, we will move forward with marking it > EOL and all that entails. > > > > > > Adam, > > > > Why can't it go into EM Instead of EOL? > > > > Thanks > > Erik > > > >> > >> Thanks, > >>    --Adam > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Apr 6 11:03:43 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 12:03:43 +0100 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: On Mon, 2020-04-06 at 10:10 +0200, Dmitry Tantsur wrote: > The problem is that oslo libraries are OpenStack-specific. Imagine metal3, > for example. When building our images, we can pull (most of) regular Python > packages from the base OS, but everything with "oslo" in its name is on us. > It's a maintenance burden. what distros dont ship oslo libs? 
RHEL ships them via the OSP repos, CentOS ships them via RDO, Ubuntu has them in the cloud archive, and SUSE also shipped them via their OpenStack product (although since they are no longer maintaining that going forward and are moving to k8s-based cloud offerings, it might be a valid concern there). They are also directly installable via pip. Building RPMs in the first place is a maintenance burden you do not need if you are doing containerised delivery; they only add value if you are supporting non-containerised delivery via standard package managers. So for metal3 I don't see that as a valid argument, as in your case Red Hat is already going to be doing the packaging for the OSP product line irrespective of the support/sale of metal3 in a product, so using oslo won't have any additional downstream cost. From a distro/downstream perspective we still need to maintain the python libs, so using oslo is no larger burden than using a different pip lib that is not already packaged in RHEL. From smooney at redhat.com Mon Apr 6 11:12:58 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 12:12:58 +0100 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <20200402211834.mxr7tsdoofpriase@firewall> Message-ID: <6ee90323cda0be866b442f710e29a2eb1a798086.camel@redhat.com> On Mon, 2020-04-06 at 10:40 +0200, Dmitry Tantsur wrote: > On Mon, Apr 6, 2020 at 10:18 AM Bogdan Dobrelya wrote: > > > On 06.04.2020 09:51, Dmitry Tantsur wrote: > > > > > > > > > On Fri, Apr 3, 2020 at 2:48 AM Lingxian Kong > > > wrote: > > > > > > I see we are talking about another "Gnocchi", when Gnocchi moved out > > > > of > > > OpenStack, people said they could run Gnocchi in standalone mode > > > > without > > > installing the other OpenStack services, then they changed default > > > dependency of some other projects (Ceilometer, Panko, etc) to > > > > Gnocchi. > > > As a result, they are all dead (or almost dead).
> > > > > > > > > I'd be very careful comparing Ironic to Gnocchi/Telemetry. I think the > > > fate that Telemetry met was largely due to staffing problems, more > > > specifically, all large contributors pulling away from it. It would end > > > up the same inside or outside of OpenStack. > > > > > > > > > Another example is a long time ago in one OpenStack project, there > > > > was a > > > demand for secret management, people said, Barbican is not mature and > > > not production ready yet, we shouldn't dependent on Barbican but > > > > could > > > make it optional, as a result, Barbican never adopted in the project > > > > in > > > real deployment. > > > > > > > > > I don't know much about the Barbican situation, but there may be other > > > explanations. Some operators are against deploying any new service > > > unless absolutely necessary, because any new service is a maintenance > > > burden. > > > > > > At the Denver PTG we were talking about non-Keystone authentication in > > > Ironic. Keystone is arguably very trivial to install, and still it meets > > > some resistance. > > > > > > > > > I have been involved in OpenStack community since 2013, I see people > > > came and left, I see projects created and died, until now, there are > > > only a few of projects alive and actively maintained. IMHO, as a > > > community, we should try our best to integrate projects with each > > > > other, > > > no project can live well without some others help, projects rarely > > > stand or fall alone. > > > > > > > > > To be clear, my proposal does not affect this. Specifically: > > > 1) I don't suggest reducing the number of integration points. 
> > > > But having *more* integration points and functional duplication, like > > internal project's authorization, coordination (placement/messaging), > > shared libraries, indirectly reduce the integration points in OpenStack > > and pulls off contributors by spreading its focus on that otherwise > > would have been shipped and maintained "out of box" (or out of big > > tent). Not ranting, I understand that it is pointless to complain > > against inevitability. > > > > On the other hand, not having some of these prevents adoption (for example, > the requirement of RabbitMQ has been a huge deal for standalone adoption > and was considered a blocker for metal3). > > Dmitry you could use other messaging busses for a long time with oslo. e.g. zeromq,activemq qpid not just rabbit if you or someone else was to add supprot for NATS https://nats.io/ or ectd i would love to try using that in other projects like nova as honestly i think that is the only true alternitve that has emerged in the last 3-4 years that could propely replace rabbitmq https://blueprints.launchpad.net/oslo.messaging/+spec/nats-transport that said there is nothing that requires ironic to use rabbit today none of the other services should even know if you are using rabbit. they interact via the restapi only. so ironic is free to use something else if it chooses too. i belive there are several projects already using etcd. so if a distibupted state model works better for ironic there is nothing stopping it evloving in that direction. i would argue that the only reason most service use rabbitmq is that that means there are less supproting services that need to be maintained by the sys admin. > > > > > > > 2) Integration points with OpenStack services are already optional in > > > Ironic. > > > > > > What exactly is your concern? Ironic dropping integration points > > > altogether? We don't plan on that. 
> > > > > > Dmitry > > > > > > > > > Well, I'm not part of TC, I'm not the person or team can decide how > > > Ironic project goes in this situation. But as a developer who is > > > > trying > > > very hard to maintain several OpenStack projects, that what I'm > > > thinking. > > > > > > My 0.02. > > > > > > - > > > Best regards, > > > Lingxian Kong > > > Catalyst Cloud > > > > > > > > > -- > > Best regards, > > Bogdan Dobrelya, > > Irc #bogdando > > > > > > From dtantsur at redhat.com Mon Apr 6 11:14:53 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 6 Apr 2020 13:14:53 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: On Mon, Apr 6, 2020 at 1:03 PM Sean Mooney wrote: > On Mon, 2020-04-06 at 10:10 +0200, Dmitry Tantsur wrote: > > The problem is that oslo libraries are OpenStack-specific. Imagine > metal3, > > for example. When building our images, we can pull (most of) regular > Python > > packages from the base OS, but everything with "oslo" in its name is on > us. > > It's a maintenance burden. > > what distros dont ship oslo libs? > > RHEL ships them via the OSP repos > As part of OpenStack, right. > CentOS ship it via RDO > Ubunutu has them in the cloud archive > SUSE also shiped them via there openstack product although sicne they are > nolonger > maintaining that goign forward and moveing the k8s based cloud offerings > it might be > a valid concern there. > All the same here: oslo libs are parts of OpenStack distributions/offerings. Meaning that to install Ironic you need to at least enable OpenStack repositories, even if you package Ironic yourself. > > they are also directly installable via pip. 
> building rpms in the first place is a mangaince burden you do not need if > you are > doing containerised delivery they only add value if you are supporting non > containerised > delivery via standard package manages. > Packages do not lose any of their value when used inside containers, and the same arguments apply to this case. And no, let's not seriously talk about pip install. > > so for metal3 i dont see that as a vaild argument as in your case redhat > is already going > to be doing the packageing for the OSP product line irrispective of the > supprot/sale > of metal3 in a product so using olso wont have any additional downstream > cost. > Metal3 is OpenShift, not OpenStack. You're suggesting exactly the thing that turns people against Ironic: require OpenStack. Dmitry > form a distro/downstream perpective we still need to maintain the python > libs so using oslo > is no larger burden the using a different pip lib that is not already > packaged in rhel. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Apr 6 11:26:29 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 12:26:29 +0100 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: <6f5b832cf6f7d10356176db9c59e3864f9117c06.camel@redhat.com> On Mon, 2020-04-06 at 13:14 +0200, Dmitry Tantsur wrote: > On Mon, Apr 6, 2020 at 1:03 PM Sean Mooney wrote: > > > On Mon, 2020-04-06 at 10:10 +0200, Dmitry Tantsur wrote: > > > The problem is that oslo libraries are OpenStack-specific. Imagine > > > > metal3, > > > for example. When building our images, we can pull (most of) regular > > > > Python > > > packages from the base OS, but everything with "oslo" in its name is on > > > > us. > > > It's a maintenance burden. > > > > what distros dont ship oslo libs? 
> > > > RHEL ships them via the OSP repos > > > > As part of OpenStack, right. > > > > CentOS ship it via RDO > > Ubunutu has them in the cloud archive > > SUSE also shiped them via there openstack product although sicne they are > > nolonger > > maintaining that goign forward and moveing the k8s based cloud offerings > > it might be > > a valid concern there. > > > > All the same here: oslo libs are parts of OpenStack > distributions/offerings. Meaning that to install Ironic you need to at > least enable OpenStack repositories, even if you package Ironic yourself. Yeah, that is true, although I think oslo is also a good candidate for standalone reuse outside of OpenStack, like placement, keystone and ironic are. So in my preferred world I would love to see oslo in the base OS repos. > > > > > > they are also directly installable via pip. > > building rpms in the first place is a mangaince burden you do not need if > > you are > > doing containerised delivery they only add value if you are supporting non > > containerised > > delivery via standard package manages. > > > > Packages do not lose any of their value when used inside containers, and > the same arguments apply to this case. And no, let's not seriously talk > about pip install. We can agree to disagree on that point; as someone who prefers to install python packages from source/pip, our viewpoints will differ on their value. > > > > > > so for metal3 i dont see that as a vaild argument as in your case redhat > > is already going > > to be doing the packageing for the OSP product line irrispective of the > > supprot/sale > > of metal3 in a product so using olso wont have any additional downstream > > cost. > > > > Metal3 is OpenShift, not OpenStack. You're suggesting exactly the thing > that turns people against Ironic: require OpenStack. Well, I'm suggesting we also would not want Ironic to require OpenShift.
And not really; I was more thinking we should work to get oslo packaged in base RHEL, so that it's no longer only shipped in the OpenStack distro repo and is instead available for anyone to use. I kind of feel sad that oslo is not used that much outside of OpenStack, as the libs are good quality. > > Dmitry > > > > form a distro/downstream perpective we still need to maintain the python > > libs so using oslo > > is no larger burden the using a different pip lib that is not already > > packaged in rhel. > > > > From noonedeadpunk at ya.ru Mon Apr 6 11:37:42 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Mon, 06 Apr 2020 14:37:42 +0300 Subject: [openstack-ansible] Retirement of repo_build and pip_install roles Message-ID: <7983991586168665@iva1-b50b8ed859e3.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Apr 6 12:05:22 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 13:05:22 +0100 Subject: [openstack-ansible] Retirement of repo_build and pip_install roles In-Reply-To: <7983991586168665@iva1-b50b8ed859e3.qloud-c.yandex.net> References: <7983991586168665@iva1-b50b8ed859e3.qloud-c.yandex.net> Message-ID: On Mon, 2020-04-06 at 14:37 +0300, Dmitriy Rabotyagov wrote: > Hi, > > OpenStack-Ansible has fully migrated from usage of repo_build [1] and pip_install [2] roles to python_install_venv [3] > in Stein. Since then these 2 roles are not used anymore by OSA, and it's probably time we deprecated them not to > release them for Ussuri. I'm not sure if this applies to roles, but for most "features" in OpenStack we deprecate at least one cycle before removal. So if you deprecate them now you would still release them in Ussuri but remove them in Victoria, to allow time for consumers of the roles that you don't know about to move to python_install_venv. If you are confident no one outside of OSA uses those, then you could remove them early.
> > Corresponding patches to retire projects are: [4] for repo_build and [5] for pip_install. If someone needs these > roles, they can be either used with `git checkout HEAD^1` or with reverting mentioned patches. > > > [1] https://opendev.org/openstack/openstack-ansible-repo_build > [2] https://opendev.org/openstack/openstack-ansible-pip_install > [3] https://opendev.org/openstack/ansible-role-python_venv_build > [4] https://review.opendev.org/716389 > [5] https://review.opendev.org/717717 > -- > Kind regards, > Dmitriy Rabotyagov > From hberaud at redhat.com Mon Apr 6 12:48:48 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 6 Apr 2020 14:48:48 +0200 Subject: [barbican][release] List cycle-with-intermediary deliverables that have not been released yet Message-ID: Hello barbican team, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Ussuri cycle: - barbican-ui Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24) , so that we have a release to include in the final Ussuri release. 
Cheers, -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Apr 6 12:49:47 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 6 Apr 2020 14:49:47 +0200 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet Message-ID: Hello ironic team, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Ussuri cycle: - bifrost - ironic-prometheus-exporter - ironic-ui Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24) , so that we have a release to include in the final Ussuri release. 
Cheers, -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud From hberaud at redhat.com Mon Apr 6 12:50:42 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 6 Apr 2020 14:50:42 +0200 Subject: [cloudkitty][release] List cycle-with-intermediary deliverables that have not been released yet Message-ID: Hello cloudkitty team, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Ussuri cycle: - cloudkitty Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24), so that we have a release to include in the final Ussuri release.
Cheers, -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud From hberaud at redhat.com Mon Apr 6 12:52:26 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 6 Apr 2020 14:52:26 +0200 Subject: [heat][release] List cycle-with-intermediary deliverables that have not been released yet Message-ID: Hello heat team, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Ussuri cycle: - heat-agents Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24) , so that we have a release to include in the final Ussuri release.
Cheers, -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud From hberaud at redhat.com Mon Apr 6 12:53:25 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 6 Apr 2020 14:53:25 +0200 Subject: [magnum][release] List cycle-with-intermediary deliverables that have not been released yet Message-ID: Hello magnum team, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Ussuri cycle: - magnum-ui Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24) , so that we have a release to include in the final Ussuri release.
Cheers, -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud From hberaud at redhat.com Mon Apr 6 12:54:16 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 6 Apr 2020 14:54:16 +0200 Subject: [manila][release] List cycle-with-intermediary deliverables that have not been released yet Message-ID: Hello manila team, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Ussuri cycle: - manila-ui Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24) , so that we have a release to include in the final Ussuri release.
Cheers, -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud From hberaud at redhat.com Mon Apr 6 12:55:53 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 6 Apr 2020 14:55:53 +0200 Subject: [monasca][release] List cycle-with-intermediary deliverables that have not been released yet Message-ID: Hello monasca team, Quick reminder that we'll need a release very soon for a number of deliverables following a cycle-with-intermediary release model but which have not done *any* release yet in the Ussuri cycle: - monasca-thresh - monasca-ui Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24) , so that we have a release to include in the final Ussuri release.
Cheers, -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud From thierry at openstack.org Mon Apr 6 13:10:13 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 6 Apr 2020 15:10:13 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: <6f5b832cf6f7d10356176db9c59e3864f9117c06.camel@redhat.com> References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> <6f5b832cf6f7d10356176db9c59e3864f9117c06.camel@redhat.com> Message-ID: <2d085eb5-2100-478c-7ee9-adcd13f860db@openstack.org> Sean Mooney wrote: > On Mon, 2020-04-06 at 13:14 +0200, Dmitry Tantsur wrote: >> On Mon, Apr 6, 2020 at 1:03 PM Sean Mooney wrote: >> >>> On Mon, 2020-04-06 at 10:10 +0200, Dmitry Tantsur wrote: >>>> The problem is that oslo libraries are OpenStack-specific. Imagine >>> >>> metal3, >>>> for example. When building our images, we can pull (most of) regular >>> >>> Python >>>> packages from the base OS, but everything with "oslo" in its name is on >>> >>> us. >>>> It's a maintenance burden.
>>> >>> what distros dont ship oslo libs? >>> >>> RHEL ships them via the OSP repos >>> >> >> As part of OpenStack, right. >> >> >>> CentOS ship it via RDO >>> Ubunutu has them in the cloud archive >>> SUSE also shiped them via there openstack product although sicne they are >>> nolonger >>> maintaining that goign forward and moveing the k8s based cloud offerings >>> it might be >>> a valid concern there. >>> >> >> All the same here: oslo libs are parts of OpenStack >> distributions/offerings. Meaning that to install Ironic you need to at >> least enable OpenStack repositories, even if you package Ironic yourself. > ya that is true although i think oslo is also a good candiate for standablone reuse > outside of openstack. like placment keystone and ironic are. > so in my perfered world i would love to see oslo in the base os repos. What's preventing that from happening ? What is distro policy around general-purpose but openstack-community-maintained Python libraries like stevedore or tooz ? FWIW in Ubuntu all oslo libraries are packaged as part of the "base OS repos", and therefore indistinguishable from other Python libraries in terms of reuse. The 'cloud archive' is just an additive repository that allows older LTS users to use the most recent OpenStack releases. -- Thierry Carrez (ttx) From dtantsur at redhat.com Mon Apr 6 13:16:23 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 6 Apr 2020 15:16:23 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? 
In-Reply-To: <2d085eb5-2100-478c-7ee9-adcd13f860db@openstack.org> References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> <6f5b832cf6f7d10356176db9c59e3864f9117c06.camel@redhat.com> <2d085eb5-2100-478c-7ee9-adcd13f860db@openstack.org> Message-ID: On Mon, Apr 6, 2020 at 3:13 PM Thierry Carrez wrote: > Sean Mooney wrote: > > On Mon, 2020-04-06 at 13:14 +0200, Dmitry Tantsur wrote: > >> On Mon, Apr 6, 2020 at 1:03 PM Sean Mooney wrote: > >> > >>> On Mon, 2020-04-06 at 10:10 +0200, Dmitry Tantsur wrote: > >>>> The problem is that oslo libraries are OpenStack-specific. Imagine > >>> > >>> metal3, > >>>> for example. When building our images, we can pull (most of) regular > >>> > >>> Python > >>>> packages from the base OS, but everything with "oslo" in its name is > on > >>> > >>> us. > >>>> It's a maintenance burden. > >>> > >>> what distros dont ship oslo libs? > >>> > >>> RHEL ships them via the OSP repos > >>> > >> > >> As part of OpenStack, right. > >> > >> > >>> CentOS ship it via RDO > >>> Ubunutu has them in the cloud archive > >>> SUSE also shiped them via there openstack product although sicne they > are > >>> nolonger > >>> maintaining that goign forward and moveing the k8s based cloud > offerings > >>> it might be > >>> a valid concern there. > >>> > >> > >> All the same here: oslo libs are parts of OpenStack > >> distributions/offerings. Meaning that to install Ironic you need to at > >> least enable OpenStack repositories, even if you package Ironic > yourself. > > ya that is true although i think oslo is also a good candiate for > standablone reuse > > outside of openstack. like placment keystone and ironic are. > > so in my perfered world i would love to see oslo in the base os repos. > > What's preventing that from happening ? What is distro policy around > general-purpose but openstack-community-maintained Python libraries like > stevedore or tooz ? 
> I don't think such a policy exists. I think it's based on the actual usage by non-OpenStack consumers. I can only speculate what prevents them from using e.g. oslo.config or stevedore. Maybe they see that the source and documentation are hosted on openstack.org and assume they're only for OpenStack or somehow require OpenStack (i.e. same problem)? Maybe it's something that docs.opendev.org/stevedore and opendev.org/stevedore (no openstack) could help fix? Dmitry > > FWIW in Ubuntu all oslo libraries are packaged as part of the "base OS > repos", and therefore indistinguishable from other Python libraries in > terms of reuse. The 'cloud archive' is just an additive repository that > allows older LTS users to use the most recent OpenStack releases. > > -- > Thierry Carrez (ttx) > > From mark at stackhpc.com Mon Apr 6 13:20:02 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 6 Apr 2020 14:20:02 +0100 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet In-Reply-To: References: Message-ID: On Mon, 6 Apr 2020 at 13:50, Herve Beraud wrote: > > Hello ironic team, > > Quick reminder that we'll need a release very soon for a number of > deliverables following a cycle-with-intermediary release model but which > have not done *any* release yet in the Ussuri cycle: > > - bifrost > - ironic-prometheus-exporter > - ironic-ui > > Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24) , so that we have a release to include in the final Ussuri release. Can we please move to the cycle-with-optional-intermediary release model? I realise it doesn't exist. We (ironic) would like the option to release features outside the normal cadence of the OpenStack release cycle, without being forced to do it. It would save a lot of time, and avoid emails and autogenerated patches each cycle.
I don't have a strong opinion on the Ironic top-level OpenDev project discussion, but I think this is one example of where the team sees unnecessary inflexibility in OpenStack's processes. Perhaps I'm missing something. Mark > > Cheers, > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud From smooney at redhat.com Mon Apr 6 13:33:46 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 14:33:46 +0100 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project?
In-Reply-To: <2d085eb5-2100-478c-7ee9-adcd13f860db@openstack.org> References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> <6f5b832cf6f7d10356176db9c59e3864f9117c06.camel@redhat.com> <2d085eb5-2100-478c-7ee9-adcd13f860db@openstack.org> Message-ID: <290f01b6babb902dbd96d4bdb38c0877c59cf4ec.camel@redhat.com> On Mon, 2020-04-06 at 15:10 +0200, Thierry Carrez wrote: > Sean Mooney wrote: > > On Mon, 2020-04-06 at 13:14 +0200, Dmitry Tantsur wrote: > > > On Mon, Apr 6, 2020 at 1:03 PM Sean Mooney wrote: > > > > > > > On Mon, 2020-04-06 at 10:10 +0200, Dmitry Tantsur wrote: > > > > > The problem is that oslo libraries are OpenStack-specific. Imagine > > > > > > > > metal3, > > > > > for example. When building our images, we can pull (most of) regular > > > > > > > > Python > > > > > packages from the base OS, but everything with "oslo" in its name is on > > > > > > > > us. > > > > > It's a maintenance burden. > > > > > > > > what distros dont ship oslo libs? > > > > > > > > RHEL ships them via the OSP repos > > > > > > > > > > As part of OpenStack, right. > > > > > > > > > > CentOS ship it via RDO > > > > Ubunutu has them in the cloud archive > > > > SUSE also shiped them via there openstack product although sicne they are > > > > nolonger > > > > maintaining that goign forward and moveing the k8s based cloud offerings > > > > it might be > > > > a valid concern there. > > > > > > > > > > All the same here: oslo libs are parts of OpenStack > > > distributions/offerings. Meaning that to install Ironic you need to at > > > least enable OpenStack repositories, even if you package Ironic yourself. > > > > ya that is true although i think oslo is also a good candiate for standablone reuse > > outside of openstack. like placment keystone and ironic are. > > so in my perfered world i would love to see oslo in the base os repos. > > What's preventing that from happening ? 
What is distro policy around > general-purpose but openstack-community-maintained Python libraries like > stevedore or tooz ? > > FWIW in Ubuntu all oslo libraries are packaged as part of the "base OS > repos", and therefore indistinguishable from other Python libraries in > terms of reuse. The 'cloud archive' is just an additive repository that > allows older LTS users to use the most recent OpenStack releases. I think Fedora ships them in the base OS too https://src.fedoraproject.org/rpms/python-oslo-config and if I'm reading this right I think openSUSE does too https://build.opensuse.org/repositories/openSUSE:Factory/python-oslo.config I suspect from a Red Hat perspective it was just a reflection of our org structure, or there may have been a business decision at play. Through the use of modules in the new build system on RHEL 8 it's perfectly possible for RHEL to provide one version of a package and the OpenStack product to provide a different version on a different delivery cadence. We do this for libvirt via the advanced virt module, but from a technical perspective I don't think there is fundamentally an issue with that other than manpower, and this is how it's currently done. RDO is certainly a factor: if we move to packaging them in the CentOS main repos instead of RDO there would have to be some collaboration there to enable it to work smoothly. Currently tooz and the rest of oslo are delivered as part of the CentOS Cloud SIG https://cbs.centos.org/koji/buildinfo?buildID=25866 consuming the output of the RDO project. https://wiki.centos.org/SpecialInterestGroup/Cloud I would expect Kubernetes/OpenShift Origin to be part of that too, although apparently it is not.
From fungi at yuggoth.org Mon Apr 6 13:34:23 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 6 Apr 2020 13:34:23 +0000 Subject: [openstack-ansible] Retirement of repo_build and pip_install roles In-Reply-To: References: <7983991586168665@iva1-b50b8ed859e3.qloud-c.yandex.net> Message-ID: <20200406133423.nlyfr5kxzgab42va@yuggoth.org> On 2020-04-06 13:05:22 +0100 (+0100), Sean Mooney wrote: [...] > im not sure if this applies to roles but for most "features" in > openstack we deprecate at least one cycle before removal [...] This is a governance tag projects can choose to assert: https://governance.openstack.org/tc/reference/tags/assert_follows-standard-deprecation.html It's service specific and talks about API features, though I suppose the concepts there could be adapted to things like orchestration and deployment tooling. Also note that OpenStackAnsible doesn't assert this tag on any deliverables, so they are free to disregard the guidance therein. That said, it's for sure a bit of an error in terminology to say you're "deprecating" something and removing it in the span of the same release. That's not deprecating, it's just plain removing. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
From sean.mcginnis at gmx.com Mon Apr 6 13:45:52 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 6 Apr 2020 08:45:52 -0500 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet In-Reply-To: References: Message-ID: On 4/6/20 8:20 AM, Mark Goddard wrote: > On Mon, 6 Apr 2020 at 13:50, Herve Beraud wrote: >> Hello ironic team, >> >> Quick reminder that we'll need a release very soon for a number of >> deliverables following a cycle-with-intermediary release model but which >> have not done *any* release yet in the Ussuri cycle: >> >> - bifrost >> - ironic-prometheus-exporter >> - ironic-ui >> >> Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24) , so that we have a release to include in the final Ussuri release. > Can we please move to the cycle-with-optional-intermediary release > model? I realise it doesn't exist. We (ironic) would like the option > to release features outside the normal cadence of the OpenStack > release cycle, without being forced to do it. It would save a lot of > time, and avoid emails and autogenerated patches each cycle. > > I don't have a strong opinion on the Ironic top-level OpenDev project > discussion, but I think this is one example of where the team sees > unnecessary inflexibility in OpenStack's processes. Perhaps I'm > missing something. > > Mark The "independent" release model is basically a cycle-with-optional-intermediary release model, or at least it can be used that way. If there is nothing to be released for an entire cycle, then that deliverable really isn't cycle based. If the last release from the last cycle is still relevant to the next release, you don't do another release and you just keep using the old, existing release. So that would be my first suggestion.
The other option that Thierry pointed out: "As swift has proven in the past, you can totally have a product strategy with the cycle-with-intermediary model. You can even maintain your own extra branches (think stable/2.0) if you want. The only extra limitations AFAIK in the integrated release are that (1) you also maintain stable branches at openstack common release points (like stable/train), and (2) that the openstack release management team has an opportunity to check the release numbering for semver sanity." The benefit of sticking with the cycle based model and making sure at least one release is done per cycle is that we can catch hidden bit rot that gets introduced and causes errors during the release process. This happens, sadly, more often than you might expect. Sticking with a cycle based model, it can either be cycle-with-intermediary, where we just want to make sure we get some updates along the way, or cycle-with-rc, where we at least always make sure there is a final release at the end of the cycle that picks up any changes. Additional beta releases can be done during the cycle if needed to test and validate changes before everyone is in the freeze at the end. The important thing is being able to communicate to downstream packagers and other consumers what to expect with these deliverables. If something is "cycle-*" based, but then we don't release anything, that makes it look like that deliverable got dropped. At least if we communicate it is "independent" then this would look normal and packagers know to just keep using that last version. If it is just an occasional thing that a package doesn't end up with any meaningful updates (I would be kind of surprised, even with low activity deliverables. There are always at least the usual cycle goal and other updates), then it is possible to tag a new version number on the same commit. This isn't great, but it allows us to test out the release process and make sure that is all working.
And it allows a new point on which to branch, so should the need ever arise in the future, bug fixes can be backported to distinct branch points for each cycle that needs it. So we really don't want to introduce a release model that says "we follow the development cycle, maybe we'll release something this cycle, maybe we won't". That would be confusing and introduce uncertainty downstream. Sean From mnaser at vexxhost.com Mon Apr 6 14:45:38 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 6 Apr 2020 10:45:38 -0400 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: Hi Dmitry, Thank you for raising this. I think what you're looking for makes sense, but I don't think splitting outside OpenStack is the right solution for this. There are many logistical issues in doing this. First of all, it promotes even more bureaucracy within our community, which is something that we're trying to avoid. "Ironic" and "OpenStack" becoming two separate pieces means that we've failed as a community to be able to deliver what OpenStack is. If we do this, we further promote the separation of our communities and that is not sustainable. With a dwindling contributor base, we'll find power in standing together in big groups, not by isolating ourselves to small islands. Arguably, you would say that "well, Ironic is picking up outside OpenStack and we want to capitalize on that". I agree with you on that, I think we should absolutely do that. However, I think simply just becoming a top-level project is not the way to go about this. It will introduce a lot more work for our (already overwhelmed) OSF staff, it means maintaining a new identity, it means applying to be a pilot project and going through the whole process. It means that all existing developers may have to revise the way they do work because they have signed the CCLA for OpenStack and not "Ironic".
We're adding a whole lot of bureaucracy when the problem is messaging. I've gone over your points below about what you think this will do and suggested alternatives. Regards, Mohammed On Wed, Apr 1, 2020 at 1:07 PM Dmitry Tantsur wrote: > > Hi everyone! > > This topic should not come as a huge surprise for many, since it has been raised numerous times in the past years. I have a feeling that the end of Ussuri, now that we’ve re-acquired our PTL and are on the verge of selecting new TC members, may be a good time to propose it for a formal discussion. > > TL;DR I’m proposing to make Ironic a top-level project under opendev.org and the OpenStack Foundation, following the same model as Zuul. I don’t propose severing current relationships with other OpenStack projects, nor making substantial changes in how the project is operated. > > (And no, it’s not an April 1st joke) > > Background > ========= > > Ironic was born as a Nova plugin, but has grown way beyond this single case since then. The first commit in Bifrost dates to February 2015. During these 5 years (hey, we forgot to celebrate!) it has developed into a commonly used data center management tool - and still based on standalone Ironic! The Metal3 project uses standalone Ironic as its hardware management backend. We haven’t been “just” a component of OpenStack for a while now, I think it’s time to officially recognize it. > > And before you ask: in no case do I suggest scaling down our invaluable integration with Nova. We’re observing a solid growth of deployments using Ironic as an addition to their OpenStack clouds, and this proposal doesn’t try to devalue this use case. The intention is to accept publicly and officially that it’s not the only or the main use case, but one of the main use cases. I don’t think it comes as a surprise to the Nova team. > > Okay, so why? > =========== > > The first and the main reason is the ambiguity in our positioning.
We do see prospective operators and users confused by the perception that Ironic is a part of OpenStack, especially when it comes to the standalone use case. “But what if I don’t need OpenStack” is a question that I hear in most of these conversations. Changing from “a part of OpenStack” to “a FOSS tool that can integrate with OpenStack” is critical for our project to keep growing into new fields. To me personally it feels in line with how OpenDev itself is reaching into new areas beyond just the traditional IaaS. The next OpenDev event will apparently have a bare metal management track, so why not a top-level project for it? > > Another reason is release cadence. We have repeatedly expressed the desire to release Ironic and its sub-projects more often than we do now. Granted, *technically* we can release often even now. We can even abandon the current release model and switch to “independent”, but it doesn’t entirely solve the issue at hand. First, we don’t want to lose the notion of stable branches. One way or another, we need to support consumers with bug fix releases. Second, to become truly “independent” we’ll need to remove any tight coupling with any projects that do integrated releases. Which is, essentially, what I’m proposing here. > > Finally, I believe that our independence (can I call it “Irexit” please?) has already happened in reality, we just shy away from recognizing it. Look: > 1. All integration points with other OpenStack projects are optional. > 2. We can work fully standalone and even provide a project for that. > 3. Many new features (RAID, BIOS to name a few) are exposed to standalone users much earlier than to those going through Nova. > 4. We even have our own mini-scheduler (although its intention is not and has not been to replace the Placement service). > 5. We make releases more often than the “core” OpenStack projects (but see above).
> > What we will do > ============ > > This proposal involves in the short term: > * Creating a new git namespace: opendev.org/ironic We could totally do this for all existing projects honestly. I think the TC could probably be okay with this. > * Creating a new website (name TBD, bare metal puns are welcome). > * If we can have https://docs.opendev.org/ironic/, it may be just fine though. Who's going to work on this website? It's important to not only have a website but keep it maintained, add more content, update it. The website will have absolutely zero traction initially and we'll miss out on all the "traffic" that OpenStack.org gets. I think what we should actually do is redesign OpenStack.org so that it's focused on the OpenStack projects and move all the foundation stuff to osf.dev -- In there, we can nail down the messaging of "you don't need all of OpenStack". > * Keeping the same governance model, only adjusted to the necessary extent. This is not easy, you'll have to come up with a whole governance, run elections, manage people. We already have volunteers that help do this inside OpenStack, why add all that extra layer? > * Keeping the same policies (reviews, CI, stable). That seems reasonable to me. > * Defining a new release cadence and stable branch support schedule. If there is anything in the current model that doesn't suit you, please bring it up, and let's revise it. I've heard this repeated a lot as a complaint from the Ironic team and I've unfortunately not seen any proposal about an ideal alternative. We need to hear things to change things. > In the long term we will consider (not necessary do): > * Reshaping our CI to rely less on devstack and grenade (only use them for jobs involving OpenStack). It seems reasonable to have more Ironic "standalone" jobs. It is important that _the_ biggest consumers *are* the OpenStack ones, let's not alienate them so we end up in a world of nothing new.
> * Reducing or removing reliance on oslo libraries. Why? > * Stopping using rabbitmq for messaging (we’ve already made it optional). Please. Please. Whatever you replace it with, just update oslo.messaging and make all of us happy to stop using it. It's hell. > * Integrating with non-OpenStack services (kubernetes?) and providing lighter alternatives (think, built-in authentication). I support this, and I think there's nothing stopping you from doing that today. If there is, let's bring it up. > What we will NOT do > ================ > > At least this proposal does NOT involve: > * Stopping maintaining the Ironic virt driver in Nova. > * Stopping running voting CI jobs with OpenStack services. > * Dropping optional integration with OpenStack services. > * Leaving OpenDev completely. > > What do you think? > =============== > > Please let us know what you think about this proposal. Any hints on how to proceed with it, in case we reach a consensus, are also welcome. > > Cheers, > Dmitry -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com From doug at doughellmann.com Mon Apr 6 14:47:38 2020 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 6 Apr 2020 10:47:38 -0400 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: <46429DAC-BE6D-4D84-93E8-B5A9723173ED@doughellmann.com> > On Apr 6, 2020, at 4:10 AM, Dmitry Tantsur wrote: > > Hi, > > On Fri, Apr 3, 2020 at 9:35 AM Jean-Philippe Evrard > wrote: > > > Also, how is the reliance on oslo a problem? Do you want to use another > library in the python ecosystem instead? If true, what about phasing > out that part of oslo, so we don't have to maintain it? Just curious. 
> > The problem is that oslo libraries are OpenStack-specific. Imagine metal3, for example. When building our images, we can pull (most of) regular Python packages from the base OS, but everything with "oslo" in its name is on us. It's a maintenance burden. > > With absolutely no disrespect meant to the awesome Oslo team, I think the existence of Oslo libraries is a bad sign. I think as a strong FOSS community we shouldn't invest in libraries that are either useful only to us or at least are marketed this way. For example: > 1) oslo.config is a fantastic piece of software that the whole python world could benefit from. Same for oslo.service, probably. I would love to have oslo.config more widely adopted outside of OpenStack. At the same time, it has a more opinionated way of managing config than most applications need, so I can understand why it might not be. > 2) oslo.utils as a catch-all repository of utilities should IMO be either moved to existing python projects or decomposed into small generally reusable libraries (essentially, each sub-module could be a library of its own). Same for oslo.concurrency. When I did the original analysis for breaking oslo-incubator up into separate libraries, I had to balance the work to create and manage individual libraries and their cross-dependencies with the desire to have everything nice and neat. The utils library ended up as a catch-all for things that didn’t seem reusable outside of OpenStack so that, at a time before all of our release automation was in place, the team only had to manage ~30 libraries instead of ~50. Of course that can evolve, if someone feels strongly enough to do the work. It’s certainly easier today than it was back then. > 3) I'm genuinely not sure why oslo.log and oslo.i18n exist and which parts of their functionality cannot be moved upstream. The log library is there to make it simple for all OpenStack services to configure their logging in the same way. 
It’s glue between oslo.config and Python’s logging module. Similarly, the i18n library is the glue between oslo.config and Python’s gettext module. The hint is in their names. The "oslo" prefix was reserved for libraries that are more likely to be seen as only useful within OpenStack. In a project as large as OpenStack, there’s a fair amount of code that can be reused but that wouldn’t make sense for use in any other code bases. For libraries that we thought would be useful to other communities, we chose more generic names. Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Apr 6 14:48:02 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 6 Apr 2020 10:48:02 -0400 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: On Thu, Apr 2, 2020 at 6:30 AM Jean-Philippe Evrard wrote: > > Hello, > > I read your nominations, and as usual I will ask what you will > _technically_ do during your mandate, and what you _actively_ want > to change in OpenStack? I'm aiming to simplify the overall deployment of OpenStack, and this would happen by starting to rework our core architecture to use relevant technologies. We've always claimed to be a cloud operating system; the space has changed, and we need to change with it so we can continue to be one. 
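[Editor's note: to make "Kubernetes as a base service" concrete, one way projects could leverage it is by modeling their resources as Kubernetes objects. The sketch below is purely illustrative — the API group, kind, and fields are invented for this example and are not an existing OpenStack or metal3 API:]

```yaml
# Hypothetical custom resource definition for an IaaS component.
# Group, kind, and schema fields are invented for illustration only.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: baremetalhosts.iaas.example.org
spec:
  group: iaas.example.org
  scope: Namespaced
  names:
    kind: BareMetalHost
    plural: baremetalhosts
    singular: baremetalhost
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                image:
                  type: string   # OS image to provision
                online:
                  type: boolean  # desired power state
```

A controller reconciling objects like this would inherit Kubernetes' API machinery, RBAC, and scheduling "for free", which is roughly the argument being made in this thread.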
This will be the first thing to help enable projects to leverage things like custom resources and the scheduling features of Kubernetes to be able to deliver IaaS components easily. > Thanks for your time. > > Regards, > Jean-Philippe Evrard > > > > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com From sean.mcginnis at gmx.com Mon Apr 6 14:49:50 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 6 Apr 2020 09:49:50 -0500 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: <725b0f10-6edc-e2c2-4b3c-bd00ad22a537@gmx.com> On 4/6/20 9:45 AM, Mohammed Naser wrote: > Hi Dmitry, > > Thank you for raising this. I think what you're looking for makes > sense, but I don't think splitting outside OpenStack is the right solution > for this. There are many logistical issues in doing this. > > First of all, it promotes even more bureaucracy within our community, > which is something that we're trying to avoid. "Ironic" and > "OpenStack" becoming two separate pieces means that we've failed as a > community to be able to deliver what OpenStack is. If we do this, we > further promote the separation of our communities and that is not > sustainable. With a dwindling contributor base, we'll find power in > standing together in big groups, not by isolating ourselves on small > islands. > > Arguably, you would say that "well, Ironic is picking up outside > OpenStack and we want to capitalize on that". I agree with you on > that, and I think we should absolutely do that. However, I think simply > becoming a top-level project is not the way to go about this. It > will introduce a lot more work for our (already overwhelmed) OSF staff, > it means maintaining a new identity, it means applying to be a pilot > project and going through the whole process. 
It means that all > existing developers may need to revise the way they work > because they have signed the CCLA for OpenStack and not "Ironic". > We're adding a whole lot of bureaucracy when the problem is messaging. > > I've gone over your points below about what you think this will do and > strongly suggest those alternatives. > > Regards, > Mohammed Cinder has been useful standalone for several years now, but I have also seen the reaction of "why would I use that, I don't need all of that OpenStack stuff". I wonder if we need to do something better to highlight and message that there are certain components of OpenStack that are useful as independent components that can exist on their own. Sean From mnaser at vexxhost.com Mon Apr 6 14:49:47 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 6 Apr 2020 10:49:47 -0400 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann wrote: > > This topic is a very important and critical area to solve in the OpenStack community. > I personally feel and keep raising this issue wherever I get the opportunity. > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > Without enough developers (either open or closed source), none of the software can survive. > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > project is facing the less contributor issue as compare to requirements and incoming requests. Few > projects already dead or going to be if we do not solve the less contributors issue now. > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > the way who can solve this. 
> > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > raise this issue? or any new crazy Idea? > I think this has unfortunately become a self-inflicted wound by us sticking to our ways and refusing to adopt change. The space and landscape have changed so much over the past few years, and we've refused to adopt these technologies. I think by adding more things that natively run on top of Kubernetes, we add a whole set of potential contributors who want to run Kubernetes only and can use those OpenStack components. > > -gmann > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com From mnaser at vexxhost.com Mon Apr 6 14:50:45 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 6 Apr 2020 10:50:45 -0400 Subject: [tc][election] campaign discussion: what we can improve in TC & how? In-Reply-To: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> References: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> Message-ID: On Sat, Apr 4, 2020 at 8:56 PM Ghanshyam Mann wrote: > > As we are in the campaigning phase of the TC election, where we > start the debate on few topics. This is one of the topics where I would like > to start the debate. > > First off, I'd like to thank all the candidates for showing interest to > be part of or continuing as TC. > > What you think we should and must improve in TC ? This can be > the involvement of TC in the process from the governance point of view or technical > help for each project. Few of the question is below but feel free to add your improvement > points. Meetings. Seriously, I have no idea how I still can't convince people to meet more than once a month. 
We're a key part of the OpenStack success and we should be meeting just as often as we can to be able to continue to drive things. A month is too long of a cycle to drive things out. > - Do we have too much restriction on project sides and not giving them a free hand? If yes, what > we can improve and how? > > - Is there less interaction from TC with projects? I am sure few projects/members even > do not know even what TC is for? What's your idea to solve this. > > -gmann > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com From mnaser at vexxhost.com Mon Apr 6 14:52:41 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 6 Apr 2020 10:52:41 -0400 Subject: [openstack-ansible] Retirement of repo_build and pip_install roles In-Reply-To: <20200406133423.nlyfr5kxzgab42va@yuggoth.org> References: <7983991586168665@iva1-b50b8ed859e3.qloud-c.yandex.net> <20200406133423.nlyfr5kxzgab42va@yuggoth.org> Message-ID: Thanks for reaching out. For what it's worth, you can think of these roles as internal APIs that we removed and replaced with other ones. They were never meant to be directly consumed by our users; they only existed to let us "refactor" our code. On Mon, Apr 6, 2020 at 9:37 AM Jeremy Stanley wrote: > > On 2020-04-06 13:05:22 +0100 (+0100), Sean Mooney wrote: > [...] > > im not sure if this applies to roles but for most "features" in > > openstack we deprecate at least one cycle before removal > [...] > > This is a governance tag projects can choose to assert: > > https://governance.openstack.org/tc/reference/tags/assert_follows-standard-deprecation.html > > It's service specific and talks about API features, though I suppose > the concepts there could be adapted to things like orchestration and > deployment tooling. 
Also note that OpenStackAnsible doesn't assert > this tag on any deliverables so are free to disregard the guidance > therein. That said, it's for sure a bit of an error in terminology > to say you're "deprecating" something and removing it in the span of > the same release. That's not deprecating, it's just plain removing. > -- > Jeremy Stanley -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com From sean.mcginnis at gmx.com Mon Apr 6 14:56:02 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 6 Apr 2020 09:56:02 -0500 Subject: [tc][election] campaign discussion: what we can improve in TC & how? In-Reply-To: References: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> Message-ID: >> What you think we should and must improve in TC ? This can be >> the involvement of TC in the process from the governance point of view or technical >> help for each project. Few of the question is below but feel free to add your improvement >> points. > Meetings. Seriously, I have no idea how I still can't convince people > to meet more than > one a month. We're a key part of the OpenStack success and we should be meeting > just as often as we can to be able to continue to drive things > > A month is too long of a cycle to drive things out. OK, sub-question for the TC then. Why do you feel the only place you can drive things is inside of a time restricted, geo-restrictive meeting time? From doug at doughellmann.com Mon Apr 6 14:57:05 2020 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 6 Apr 2020 10:57:05 -0400 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? 
In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: <1A527AA3-71BE-4C37-A949-D9BD751FFA64@doughellmann.com> > On Apr 6, 2020, at 4:10 AM, Dmitry Tantsur wrote: > > Hi, > > On Fri, Apr 3, 2020 at 9:35 AM Jean-Philippe Evrard > wrote: > Hello, > > On Thu, 2020-04-02 at 12:38 +0200, Dmitry Tantsur wrote: > [snip] > > Now, I do agree that there are steps that can be taken before we go > > all nuclear. We can definitely work on our own website, we can reduce > > reliance on oslo, start releasing independently, and so on. I'm > > wondering what will be left of our participation in OpenStack in the > > end. Thierry has suggested the role of the TC in ensuring > > integration. I'm of the opinion that if all stakeholders in Ironic > > lose interest in Ironic as part of OpenStack, no power will prevent > > the integration from slowly falling apart. > > I don't see it that way. I see this as an opportunity to make OpenStack > more clear, more reachable, more interesting. For me, Ironic, Cinder, > Manila (to only name a few), are very good example of datacenter/IaaS > software that could be completely independent in their consumption, and > additionally released together. For me, the strength of OpenStack was > always in the fact it had multiple small projects that work well > together, compared to a single big blob of software which does > everything. We just didn't bank enough on the standalone IMHO. But I am > sure we are aligned there... Wouldn't the next steps be instead to make > it easier to consume standalone? > > For us one of the problems, as I've mentioned already, is producing releases more often. Now, the point of potential misunderstanding is this: we can (and do) release more often than once in 6 months. These releases, however, do not enjoy the same degree of support as the "blessed" releases, especially when it comes to upgrades and longer-term support. 
But that’s not because Ironic is part of OpenStack, right? Nothing stops teams from creating and managing additional branches. The rest of the OpenStack community doesn’t do that, so the Ironic team would have to manage the additional work (creating the branches, supporting them in CI, backports, filing more releases, etc.) but that wouldn’t change if Ironic was not part of OpenStack, would it? If anything, there would be less support because some of the automation we have for OpenStack projects wouldn’t be there to lean on. Doug From doug at doughellmann.com Mon Apr 6 14:59:38 2020 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 6 Apr 2020 10:59:38 -0400 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: > On Apr 6, 2020, at 7:14 AM, Dmitry Tantsur wrote: > > > > On Mon, Apr 6, 2020 at 1:03 PM Sean Mooney > wrote: > On Mon, 2020-04-06 at 10:10 +0200, Dmitry Tantsur wrote: > > The problem is that oslo libraries are OpenStack-specific. Imagine metal3, > > for example. When building our images, we can pull (most of) regular Python > > packages from the base OS, but everything with "oslo" in its name is on us. > > It's a maintenance burden. > > which distros don't ship oslo libs? > > RHEL ships them via the OSP repos > > As part of OpenStack, right. > > CentOS ships it via RDO > Ubuntu has them in the cloud archive > SUSE also shipped them via their OpenStack product, although since they are no longer > maintaining that going forward and are moving to k8s-based cloud offerings, it might be > a valid concern there. > > All the same here: oslo libs are part of OpenStack distributions/offerings. Meaning that to install Ironic you need to at least enable OpenStack repositories, even if you package Ironic yourself. 
This issue is internal to Red Hat and our choices in how we manage our resources to package things for distribution. I don’t think moving Ironic out of OpenStack at the community level is going to make the problem go away. Doug From mnaser at vexxhost.com Mon Apr 6 15:07:33 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 6 Apr 2020 11:07:33 -0400 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: <725b0f10-6edc-e2c2-4b3c-bd00ad22a537@gmx.com> References: <725b0f10-6edc-e2c2-4b3c-bd00ad22a537@gmx.com> Message-ID: On Mon, Apr 6, 2020 at 10:53 AM Sean McGinnis wrote: > > On 4/6/20 9:45 AM, Mohammed Naser wrote: > > Hi Dmitry, > > > > Thank you for raising this. I think what you're looking for makes > > sense, but I don't think splitting outside OpenStack is the right solution > > for this. There are many logistical issues in doing this. > > > > First of all, it promotes even more bureaucracy within our community, > > which is something that we're trying to avoid. "Ironic" and > > "OpenStack" becoming two separate pieces means that we've failed as a > > community to be able to deliver what OpenStack is. If we do this, we > > further promote the separation of our communities and that is not > > sustainable. With a dwindling contributor base, we'll find power in > > standing together in big groups, not by isolating ourselves on small > > islands. > > > > Arguably, you would say that "well, Ironic is picking up outside > > OpenStack and we want to capitalize on that". I agree with you on > > that, and I think we should absolutely do that. However, I think simply > > becoming a top-level project is not the way to go about this. It > > will introduce a lot more work for our (already overwhelmed) OSF staff, > > it means maintaining a new identity, it means applying to be a pilot > > project and going through the whole process. 
It means that all > > existing developers may need to revise the way they work > > because they have signed the CCLA for OpenStack and not "Ironic". > > We're adding a whole lot of bureaucracy when the problem is messaging. > > > > I've gone over your points below about what you think this will do and > > strongly suggest those alternatives. > > > > Regards, > > Mohammed > > Cinder has been useful standalone for several years now, but I have > also seen the reaction of "why would I use that, I don't need all of > that OpenStack stuff". > > I wonder if we need to do something better to highlight and message that > there are certain components of OpenStack that are useful as independent > components that can exist on their own. I think that Sean here hit on the most critical point we need to drive. There's no amount of splitting that would resolve this. > Sean > > > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com From mnaser at vexxhost.com Mon Apr 6 15:11:32 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 6 Apr 2020 11:11:32 -0400 Subject: [tc][election] campaign discussion: what we can improve in TC & how? In-Reply-To: References: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> Message-ID: On Mon, Apr 6, 2020 at 10:59 AM Sean McGinnis wrote: > > > >> What you think we should and must improve in TC ? This can be > >> the involvement of TC in the process from the governance point of view or technical > >> help for each project. Few of the question is below but feel free to add your improvement > >> points. > > Meetings. Seriously, I have no idea how I still can't convince people > > to meet more than > > once a month. 
We're a key part of the OpenStack success > > and we should be meeting just as often as we can to be able to continue to drive things. > > > > A month is too long of a cycle to drive things out. > > OK, sub-question for the TC then. > > Why do you feel the only place you can drive things is inside of a time > restricted, geo-restrictive meeting time? > I think it's mostly a follow-up thing and a place to drive discussion. The office hours have largely just become a quiet area where not much happens these days. The "meetings" we have are simply to check the box that is given to us by the foundation to quickly glance over the things we're dealing with. We haven't been very successful in driving mailing-list-only efforts; for example, discussion spirals out into long threads and it becomes quite exhausting to keep up with it all. It would be so much easier if we could meet together, drive some of the efforts that are happening, and reconvene often to keep track of our progress, IMHO. -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com From zigo at debian.org Mon Apr 6 15:12:30 2020 From: zigo at debian.org (Thomas Goirand) Date: Mon, 6 Apr 2020 17:12:30 +0200 Subject: [all] Please remove all external resources from docs Message-ID: <86292630-29e6-3584-8649-970b0c71aa3b@debian.org> Hi, I wrote about this earlier, I guess, but I believe I need to do it once more. Often I see in the docs things like this: .. image:: https://governance.openstack.org/tc/badges/.svg :target: https://governance.openstack.org/tc/reference/tags/index.html [note: this is only one simple example; there are many other types linking to external images, and I've even seen stuff linking to CDNs...] I'd like to see these go away. Indeed, distributions like mine (ie: Debian) are packaging the doc. 
And it's *very* annoying when one browses the doc and hits such an external link, with the browser going out to the internet to fetch an external resource. So, in Debian, I'm patching these away. Often in the ugliest way, where I don't care about the resulting document (as long as the doc is there). This is both annoying and a waste of my time. The solution is simple: have the resource be *LOCAL* (ie: stored in the project's doc), not stored on an external site. Thanks to anyone who will follow this advice. Cheers, Thomas Goirand (zigo) From alifshit at redhat.com Mon Apr 6 15:17:59 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Mon, 6 Apr 2020 11:17:59 -0400 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann wrote: > > This topic is a very important and critical area to solve in the OpenStack community. > I personally feel and keep raising this issue wherever I get the opportunity. > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > Without enough developers (either open or closed source), none of the software can survive. > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > project is facing the less contributor issue as compare to requirements and incoming requests. Few > projects already dead or going to be if we do not solve the less contributors issue now. > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > the way who can solve this. I'm not running for TC, but I figured I could chime in with some thoughts, and maybe get TC candidates to react. > What do you think about what role TC can play to solve this? 
What platform or entity can be used by TC to > raise this issue? or any new crazy Idea? To my knowledge, the vast majority of contributors to OpenStack are corporate contributors - meaning, they contribute to the community because it's their job. As companies have dropped out, the contributor count has diminished. Therefore, the obvious solution to the contributor dearth would be to recruit new companies that use or sell OpenStack. However, as far as I know, Red Hat is the only company remaining that still makes money from selling OpenStack as a product. So if we're looking for new contributor companies, we would have to look to those that use OpenStack, and try to make the case that it makes sense for them to get involved in the community. I'm not sure what this kind of advocacy would look like, or towards which companies, or what kind of companies, it would be directed. Perhaps the TC candidates could have suggestions here. And if I've made any wrong assumptions, by all means correct me. > > -gmann > From mark at stackhpc.com Mon Apr 6 15:30:06 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 6 Apr 2020 16:30:06 +0100 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet In-Reply-To: References: Message-ID: On Mon, 6 Apr 2020 at 14:46, Sean McGinnis wrote: > > On 4/6/20 8:20 AM, Mark Goddard wrote: > > On Mon, 6 Apr 2020 at 13:50, Herve Beraud wrote: > >> Hello ironic team, > >> > >> Quick reminder that we'll need a release very soon for a number of > >> deliverables following a cycle-with-intermediary release model but which > >> have not done *any* release yet in the Ussuri cycle: > >> > >> - bifrost > >> - ironic-prometheus-exporter > >> - ironic-ui > >> > >> Those should be released ASAP, and in all cases before R-3 week (RC1 deadline Apr 20 - Apr 24) , so that we have a release to include in the final Ussuri release. > > Can we please move to the cycle-with-optional-intermediary release > > model? 
I realise it doesn't exist. We (ironic) would like the option > > to release features outside the normal cadence of the OpenStack > > release cycle, without being forced to do it. It would save a lot of > > time, and avoid emails and autogenerated patches each cycle. > > > > I don't have a strong opinion on the Ironic top-level OpenDev project > > discussion, but I think this is one example of where the team sees > > unnecessary inflexibility in OpenStack's processes. Perhaps I'm > > missing something. > > > > Mark > > The "independent" release model is basically a > cycle-with-optional-intermediary release model, or at least it can be > used that way. > > If there is nothing to be released for an entire cycle, then that > deliverable really isn't cycle based. If the last release from the last > cycle is still relevant to the next release, you don't do another > release and you just keep using the old, existing release. > > So that would be my first suggestion. The other option that Thierry > pointed out: > > "As swift has proven in the past, you can totally have a product > strategy with the cycle-with-intermediary model. You can even maintain > your own extra branches (think stable/2.0) if you want. The only extra > limitations AFAIK in the integrated release are that (1) you also > maintain stable branches at openstack common release points (like > stable/train), and (2) that the openstack release management team has an > opportunity to check the release numbering for semver sanity." > > The benefit of sticking with the cycle based model and making sure at > least one release is done per cycle is that we can catch hidden bit rot > that gets introduced and causes errors during the release process. This > happens, sadly, more often than you might expect. 
> > Sticking with a cycle based model, it can either be > cycle-with-intermediary, where we just want to make sure we get some > updates along the way, or cycle-with-rc, where we at least always make > sure there is a final release at the end of the cycle that picks up any > changes. Additional beta releases can be done during the cycle if needed > to test and validate changes before everyone is in the freeze at the end. > > The important thing is being able to communicate to downstream packagers > and other consumers what to expect with these deliverables. If something > is "cycle-*" based, but then we don't release anything, that makes it > look like that deliverable got dropped. At least if we communicate it is > "independent" then this would look normal and packagers know to just > keep using that last version. > > If it is just an occasional thing that a package doesn't end up with any > meaningful updates (I would be kind of surprised, even with low-activity > deliverables. There are always at least the usual cycle goal and other > updates), then it is possible to tag a new version number on the same > commit. This isn't great, but it allows us to test out the release > process and make sure that is all working. And it allows a new point on > which to branch, so should the need ever arise in the future, bug fixes > can be backported to distinct branch points for each cycle that needs it. > > So we really don't want to introduce a release model that says "we > follow the development cycle, maybe we'll release something this cycle, > maybe we won't". That would be confusing and introduce uncertainty > downstream. Thanks for the detailed response Sean. I don't have an issue with the cycle model - Ironic is still tied to the cyclical release model. The part that I disagree with is the requirement to create an intermediary release. It shouldn't be a problem if bifrost doesn't make a feature release between Train and Ussuri; we'll just do a final Ussuri release. 
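[Editor's note: for context on the mechanics being discussed, each deliverable's release model is declared in a small YAML file in the openstack/releases repository, and switching models is a one-line change there. A rough sketch of such a file — the version number and commit hash below are placeholders, not real releases:]

```yaml
# deliverables/ussuri/bifrost.yaml — illustrative sketch only; the
# version and hash are placeholders, not actual bifrost releases.
launchpad: bifrost
release-model: cycle-with-intermediary
team: ironic
releases:
  - version: 8.0.0                                       # placeholder
    projects:
      - repo: openstack/bifrost
        hash: 0000000000000000000000000000000000000000   # placeholder SHA
```

Under this layout, moving a deliverable to the independent or cycle-with-rc model, as discussed in the thread, amounts to changing the release-model line in a review against openstack/releases.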
It's the intermediary I'd like to be optional, rather than the final cycle release. > > Sean > > From donny at fortnebula.com Mon Apr 6 15:36:49 2020 From: donny at fortnebula.com (Donny Davis) Date: Mon, 6 Apr 2020 11:36:49 -0400 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: On Mon, Apr 6, 2020 at 11:22 AM Artom Lifshitz wrote: > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann > wrote: > > > > This topic is a very important and critical area to solve in the > OpenStack community. > > I personally feel and keep raising this issue wherever I get the > opportunity. > > > > To develop or maintain any software, the very first thing we need is to > have enough developer resources. > > Without enough developers (either open or closed source), none of the > software can survive. > > > > OpenStack current situation on contributors is not the same as it was > few years back. Almost every > > project is facing the less contributor issue as compare to requirements > and incoming requests. Few > > projects already dead or going to be if we do not solve the less > contributors issue now. > > > > I know, TC is not directly responsible to solve this issue but we should > do something or at least find > > the way who can solve this. > > I'm not running for TC, but I figured I could chime in with some > thoughts, and maybe get TC candidates to react. > > > What do you think about what role TC can play to solve this? What > platform or entity can be used by TC to > > raise this issue? or any new crazy Idea? > > To my knowledge, the vast majority of contributors to OpenStack are > corporate contributors - meaning, they contribute to the community > because it's their job. As companies have dropped out, the contributor > count has diminished. 
Therefore, the obvious solution to the > contributor dearth would be to recruit new companies that use or sell > OpenStack. However, as far as I know, Red Hat is the only company > remaining that still makes money from selling OpenStack as a product. > So if we're looking for new contributor companies, we would have to > look to those that use OpenStack, and try to make the case that it > makes sense for them to get involved in the community. I'm not sure > what this kind of advocacy would look like, or towards which > companies, or what kind of companies, it would be directed. Perhaps > the TC candidates could have suggestions here. And if I've made any > wrong assumptions, by all means correct me. > > > > > -gmann > > > > > I don't think you are too far off. I used to work in a place where my job was to help sell OpenStack (among other products) and enable the use of it with customers. Customers drive everything vendors do. Things that sell are easy to use. Customers don't buy the best products; they buy what they can understand fastest. If customers are asking for a product, it's because they understand its value. Vendors in turn contribute to projects because they make money from their investment. Now think about the perception and reality of OpenStack as a whole. We have spent the last decade or so writing bleeding-edge features. We have spent very little time on documenting what we do have in layman's terms. The intended audience of our docs would seem to me to be other developers. I hope people don't take that as a jab; it's just the truth. If someone cannot understand how to use this amazing technology, it won't sell. If it doesn't sell, vendors leave; if vendors leave, the number of contributors goes down. If we don't start working at making OpenStack easier to consume, then no amount of technical change will make an impactful difference. -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. 
Duty First" From sean.mcginnis at gmx.com Mon Apr 6 15:43:33 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 6 Apr 2020 10:43:33 -0500 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet In-Reply-To: References: Message-ID: <4733205b-ae20-90af-490b-ce56434f22e4@gmx.com> > Thanks for the detailed response Sean. I don't have an issue with the > cycle model - Ironic is still tied to the cyclical release model. The > part that I disagree with is the requirement to create an intermediary > release. It shouldn't be a problem if bifrost doesn't make a feature > release between Train and Ussuri, we'll just do a final Ussuri > release. It's the intermediary I'd like to be optional, rather than > the final cycle release. > I would suggest switching these to cycle-with-rc then. There is one release candidate that has to happen just before the final release for the cycle, but that's mainly to make sure everything is in good shape before we declare it done. That sounds like it might fit better with what the team wants to do here. From iurygregory at gmail.com Mon Apr 6 16:10:36 2020 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 6 Apr 2020 18:10:36 +0200 Subject: [ironic] Next weekly meeting canceled (April 13th) Message-ID: Hello Ironicers, During our weekly meeting today, we decided to cancel the next weekly meeting (April 13th), since it's a holiday for most of the contributors. Best regards, -- Att[]'s Iury Gregory Melo Ferreira, Software Engineer at Red Hat Czech, MSc in Computer Science at UFCG. Social: https://www.linkedin.com/in/iurygregory E-mail: iurygregory at gmail.com 
From knikolla at bu.edu Mon Apr 6 16:23:10 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Mon, 6 Apr 2020 16:23:10 +0000 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: <753836DB-A994-485A-8640-43FD52158CA8@bu.edu> Hi Ghanshyam, Unfortunately, OpenStack is still for the most part corporate in terms of developer resources. It sort of makes sense: it's a cloud platform, and you need a certain scale to justify the costs for learning, adopting and operating. I probably wouldn't be contributing now if my first introduction to OpenStack wasn't as part of my job operating and developing a cloud. I don't see a clear path to solve that, but I see a few potential ways to help. 1. Advertising and marketing the viability of specific OpenStack projects as standalone tools. I can see value for someone needing a volume service, and if Cinder: a) fits the requirements b) is easy to deploy and learn (e.g., a well-documented and tested standalone use case) c) brings a minimum set of cruft with it. This might encourage more people to use it and encourage wider adoption of the other OpenStack projects if their experience is a good one, with OpenStack becoming a trusted toolbox. 2. Making sure we invest more time and effort on documentation. Especially with regards to information on getting started, best practices in terms of architecture, configuration and deployment, and of course contributor guides. We're already a very friendly and welcoming community. 3. Investigating and working on integrating OpenStack much more closely with other cloud tools. We're great for IaaS, but clouds today are not only IaaS and we need to evolve and play nice with everything else that someone might encounter in a datacenter. 
Mohammed brings a great point about integrating with Kubernetes. All these integrations need to be well documented, including best practices, and part of our testing infrastructure. To summarize, I would like to see OpenStack scale better. From homelabbers or small businesses who only need a few services, to large datacenters who may be running multiple OpenStacks, multiple Kuberneteses/OpenShifts, monitoring tools, billing systems. This may result in an increase in adoption, which in turn, should result in an increase in contributions. I can see the above becoming community goals and the TC doing outreach to document the process and help out. > On Apr 4, 2020, at 9:09 PM, Ghanshyam Mann wrote: > > This topic is a very important and critical area to solve in the OpenStack community. > I personally feel and keep raising this issue wherever I get the opportunity. > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > Without enough developers (either open or closed source), none of the software can survive. > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > project is facing the less contributor issue as compare to requirements and incoming requests. Few > projects already dead or going to be if we do not solve the less contributors issue now. > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > the way who can solve this. > > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > raise this issue? or any new crazy Idea? > > > -gmann > From haleyb.dev at gmail.com Mon Apr 6 16:26:18 2020 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 6 Apr 2020 12:26:18 -0400 Subject: [neutron] Bug deputy report for week of March 30th Message-ID: Hi, I was Neutron bug deputy last week. Below is a short summary about reported bugs. 
-Brian Critical bugs ------------- * https://bugs.launchpad.net/neutron/+bug/1869862 - neutron-tempest-plugin-designate-scenario failes frequently with imageservice doesn't have supported version - https://review.opendev.org/#/c/715835/ merged (devstack issue) * https://bugs.launchpad.net/neutron/+bug/1870110 - neutron-rally-task fails in rally_openstack.task.scenarios.neutron.trunk.CreateAndListTrunks - https://review.opendev.org/#/c/716562/ proposed - revert of https://review.opendev.org/#/c/477286/ * https://bugs.launchpad.net/neutron/+bug/1870302 - [VPNaaS]: test_migrations_sync failed with alembic 1.4.2 - gate failure for vpnaas - Dongcan Ye took ownership High bugs --------- * https://bugs.launchpad.net/neutron/+bug/1869887 - L3 DVR ARP population gets incorrect MAC address in some cases - https://review.opendev.org/#/c/716302/ proposed * https://bugs.launchpad.net/neutron/+bug/1870313 - "send_ip_addr_adv_notif" can't use eventlet when called from "keepalived_state_change" - https://review.opendev.org/#/c/716944/ proposed * https://bugs.launchpad.net/neutron/+bug/1870352 - "ctypes.CDLL" C functions could release the GIL during the execution call - https://review.opendev.org/#/c/717017/ proposed * https://bugs.launchpad.net/neutron/+bug/1870569 - Unable to create network without default network_segment_range - Possible regression - https://review.opendev.org/#/c/717754/ proposed Medium bugs ----------- * https://bugs.launchpad.net/neutron/+bug/1870114 - Trunk subports aren't treated as dvr serviced ports - https://review.opendev.org/#/c/716642/ proposed * https://bugs.launchpad.net/neutron/+bug/1870228 - cloud-init metadata fallback broken - https://review.opendev.org/#/c/600421/ changed the bind address - asked if using the metadata "force" config option will help * https://bugs.launchpad.net/bugs/1870296 - [neutron-tempest-plugin]:fwaas Operation cannot be performed since associated firewall group is in PENDING UPDATE - https://review.opendev.org/716891 
proposed * https://bugs.launchpad.net/neutron/+bug/1870400 - "ml2_vlan_allocations.vlan_id" value should be checked in the DB backend - https://review.opendev.org/#/c/717083/ proposed Low bugs -------- * https://bugs.launchpad.net/neutron/+bug/1869808 - reboot neutron-ovs-agent introduces a short interrupt of vlan traffic - asked if it could be reproduced on a later release Wishlist bugs ------------- * https://bugs.launchpad.net/neutron/+bug/1869877 - Segment doesn't exist network info - https://review.opendev.org/#/c/716275/ proposed (nova) - https://review.opendev.org/#/c/715156/ proposed (neutron) * https://bugs.launchpad.net/neutron/+bug/1870319 - [RFE] Network cascade deletion API call - Slawek already added rfe-triaged tag Invalid bugs ------------ * https://bugs.launchpad.net/neutron/+bug/1869768 - devstack failed when Q_AGENT=ovn - Could not reproduce, suspect invalid local.conf * https://bugs.launchpad.net/neutron/+bug/1870002 - The operating_status value of loadbalancer is abnormal - LBaaS bug, moved to storyboard Further triage required ----------------------- * None From jeremyfreudberg at gmail.com Mon Apr 6 16:34:36 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 6 Apr 2020 12:34:36 -0400 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: People generally contribute to something that they want to use. It is really as simple as that. - Make sure that people understand what OpenStack is/does - Make OpenStack easier to use - Make OpenStack do more things - Make OpenStack components reusable For the first three points: few people want to use something that they don't understand or that they find difficult to use or that doesn't do the things they want. 
For the last point: ok fine, maybe some people don't need OpenStack but one of our components might be just what they need to build their own cool thing, so why not mutually benefit? On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann wrote: > > This topic is a very important and critical area to solve in the OpenStack community. > I personally feel and keep raising this issue wherever I get the opportunity. > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > Without enough developers (either open or closed source), none of the software can survive. > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > project is facing the less contributor issue as compare to requirements and incoming requests. Few > projects already dead or going to be if we do not solve the less contributors issue now. > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > the way who can solve this. > > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > raise this issue? or any new crazy Idea? > > > -gmann > From jeremyfreudberg at gmail.com Mon Apr 6 16:42:23 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 6 Apr 2020 12:42:23 -0400 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <1c7f1f490722a992283539553d2e78c62fc866e7.camel@evrard.me> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> <1c7f1f490722a992283539553d2e78c62fc866e7.camel@evrard.me> Message-ID: On Mon, Apr 6, 2020 at 3:19 AM Jean-Philippe Evrard wrote: > [...] > I like this. Should this be self-asserted by the teams, or should we > provide some kind of validation? For teams that are very close but have > other openstack services projects dependencies, should the TC work on > helping removing those dependencies? > [...] 
Yes the TC should support some kind of initiative to encourage standalone/reusability. I think self-assertion is fine at the start (I think this is really important info to publicize) ... but eventually there should be some kind of reference doc with clear criteria for what "standalone/reusable" actually means. Of course I'm really just thinking of services... libraries are another matter... From smooney at redhat.com Mon Apr 6 16:47:34 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 17:47:34 +0100 Subject: [tc][election] campaign discussion: what we can improve in TC & how? In-Reply-To: References: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> Message-ID: <3a3242bc2f78557a84208af478da9791f440a776.camel@redhat.com> On Mon, 2020-04-06 at 11:11 -0400, Mohammed Naser wrote: > On Mon, Apr 6, 2020 at 10:59 AM Sean McGinnis wrote: > > > > > > > > What you think we should and must improve in TC ? This can be > > > > the involvement of TC in the process from the governance point of view or technical > > > > help for each project. Few of the question is below but feel free to add your improvement > > > > points. > > > > > > Meetings. Seriously, I have no idea how I still can't convince people > > > to meet more than > > > one a month. We're a key part of the OpenStack success and we should be meeting > > > just as often as we can to be able to continue to drive things > > > > > > A month is too long of a cycle to drive things out. > > > > OK, sub-question for the TC then. > > > > Why do you feel the only place you can drive things is inside of a time > > restricted, geo-restrictive meeting time? > > > > I think it's mostly a follow-up thing and a place to drive discussion. The > office hours have largely just become a quiet area where not much happens > these days. The "meetings" we have are simply to check the box that is > given to us from the foundation to quickly glance over the things we're dealing > with. 
> > We haven't been very successful in driving mailing list only things, an example > is that discussion spirals out for long threads and it becomes quite exhausting > to keep up with it all. Yep, I find it quite hard to fully keep track of long mailing list discussions for this reason; it's very easy for threads to split and comments to be missed (this one probably will be). That is why I always prefer to do discussion via Gerrit, IRC meetings, or in person at meetups or PTG-style events, with the mailing list used to summarise options and gather input, but not for the main discussion while ideas are being brainstormed or initially discussed. I know that's partly down to the workflow I am used to and partly down to my own ability to follow mailing lists, but I think mailing-list-only activities are very easy to miss and hard to engage with in many cases, although not all. > > It would be so much easier if we can meet together, drive some of the efforts > that are happening and reconvene often to keep track of our progress, IMHO. > > From smooney at redhat.com Mon Apr 6 16:53:41 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 17:53:41 +0100 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: <28645d29c84950330cdd8d549c3897cd03c9730a.camel@redhat.com> On Mon, 2020-04-06 at 11:17 -0400, Artom Lifshitz wrote: > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann wrote: > > > > This topic is a very important and critical area to solve in the OpenStack community. > > I personally feel and keep raising this issue wherever I get the opportunity. > > > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > > Without enough developers (either open or closed source), none of the software can survive. > > > > OpenStack current situation on contributors is not the same as it was few years back. 
Almost every > > project is facing the less contributor issue as compare to requirements and incoming requests. Few > > projects already dead or going to be if we do not solve the less contributors issue now. > > > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > > the way who can solve this. > > I'm not running for TC, but I figured I could chime in with some > thoughts, and maybe get TC candidates to react. > > > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > > raise this issue? or any new crazy Idea? > > To my knowledge, the vast majority of contributors to OpenStack are > corporate contributors - meaning, they contribute to the community > because it's their job. As companies have dropped out, the contributor > count has diminished. Therefore, the obvious solution to the > contributor dearth would be to recruit new companies that use or sell > OpenStack. However, as far as I know, Red Hat is the only company > remaining that still makes money from selling OpenStack as a product. Um... Canonical is a thing :) and I'm pretty sure they make money from selling product support, even if it's available free in their repos by default. I was sad to see SUSE move away from OpenStack, but there are other smaller distributions of OpenStack with commercial support. I know StackHPC supports OpenStack via Kayobe for some HPC customers, for example, but I'm sure there are others outside of the cloud providers. > So if we're looking for new contributor companies, we would have to > look to those that use OpenStack, and try to make the case that it > makes sense for them to get involved in the community. I'm not sure > what this kind of advocacy would look like, or towards which > companies, or what kind of companies, it would be directed. Perhaps > the TC candidates could have suggestions here. And if I've made any > wrong assumptions, by all means correct me. 
The biggest users of OpenStack outside of cloud operators are telcos, banks, and government organisations; upstream contribution is not in the DNA of many of those groups, so it can be hard to get the ones that are not already engaging to do so. > > > > > -gmann > > > > From smooney at redhat.com Mon Apr 6 17:11:20 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 18:11:20 +0100 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> <1c7f1f490722a992283539553d2e78c62fc866e7.camel@evrard.me> Message-ID: On Mon, 2020-04-06 at 12:42 -0400, Jeremy Freudberg wrote: > On Mon, Apr 6, 2020 at 3:19 AM Jean-Philippe Evrard > wrote: > > [...] > > I like this. Should this be self-asserted by the teams, or should we > > provide some kind of validation? For teams that are very close but have > > other openstack services projects dependencies, should the TC work on > > helping removing those dependencies? > > [...] > > Yes the TC should support some kind of initiative to encourage > standalone/reusability. I think self-assertion is fine at the start (I > think this is really important info to publicize) ... but eventually > there should be some kind of reference doc with clear criteria for > what "standalone/reusable" actually means. Of course I'm really just > thinking of services... libraries are another matter... Isn't that partly what constellations were meant to do? Most of them unfortunately build on the compute kit https://www.openstack.org/software/sample-configs/#compute-starter-kit but it would be great to have versions that did not need Nova or Neutron: for example, a storage kit that was just Swift, Keystone, Cinder, Glance and Manila or something similar, or a baremetal constellation that could be Bifrost + Ironic + optionally Keystone and Metal3? 
I can't remember the name of the project that wanted to have a group of OpenStack components that were useful in Kubernetes, but a constellation that showcased which components were useful in that context would also be great. > From mark at stackhpc.com Mon Apr 6 17:13:16 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 6 Apr 2020 18:13:16 +0100 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet In-Reply-To: <4733205b-ae20-90af-490b-ce56434f22e4@gmx.com> References: <4733205b-ae20-90af-490b-ce56434f22e4@gmx.com> Message-ID: On Mon, 6 Apr 2020 at 16:43, Sean McGinnis wrote: > > > > Thanks for the detailed response Sean. I don't have an issue with the > > cycle model - Ironic is still tied to the cyclical release model. The > > part that I disagree with is the requirement to create an intermediary > > release. It shouldn't be a problem if bifrost doesn't make a feature > > release between Train and Ussuri, we'll just do a final Ussuri > > release. It's the intermediary I'd like to be optional, rather than > > the final cycle release. > > > I would suggest switching these to cycle-with-rc then. There is one > release candidate that has to happen just before the final release for > the cycle, but that's mainly to make sure everything is in good shape > before we declare it done. That sounds like it might fit better with > what the team wants to do here. But what if we want to create a feature release mid-cycle? Some cycles we do, some we don't. 
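As background to the fallback Sean McGinnis mentions earlier in the thread (tagging a new version number on the same commit when a deliverable had no changes in a cycle), this is plain git mechanics: several release tags can point at one commit, and each tag then provides its own point to branch from. A throwaway sketch; the repository, version numbers, and identities are all hypothetical:

```python
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(("git",) + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("-c", "user.email=demo@example.com", "-c", "user.name=demo",
    "commit", "--allow-empty", "-m", "last change of the cycle", cwd=repo)

# Final release of one cycle...
git("tag", "1.0.0", cwd=repo)
# ...and, with no new commits, the next cycle's release can tag the
# very same commit, so stable branches can later be cut per cycle.
git("tag", "2.0.0", cwd=repo)

# Both tags now point at HEAD.
print(git("tag", "--points-at", "HEAD", cwd=repo))
```

In the real workflow the tag is of course proposed through the openstack/releases repository rather than pushed by hand; the sketch only shows why the "same commit, new version" trick gives packagers a release and the team a branch point.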
From radoslaw.piliszek at gmail.com Mon Apr 6 17:44:50 2020 From: radoslaw.piliszek at gmail.com (Radosław Piliszek) Date: Mon, 6 Apr 2020 19:44:50 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: > > I am thinking something Kolla-esque > > as LOCI does not seem that alive nowadays (and does not seem to be > > getting that much level of testing Kolla has from Kolla Ansible and > > TripleO as both deployment projects consume Kolla images). > > Question: Why would we want another competing project? How do you > intend to work with Kolla? Do you want to have this image building in > the projects, and use another tooling to deploy those images? Did you > start collaborating/discussing with non-TripleO projects on this? > > (snip, continued) > > Maybe I should rephrase. How do you want to make this work with Kolla, > Triple-O, and other deployment projects outside those two? Do we > distribute and collaborate (each project got a way to build its > images), or do we centralize (LOCI/kolla - way)? Let me answer both versions. I feel like I put it badly; I did not mean to start a completely separate project. What I wanted to say is that I see it best to build a possible common containerization solution off Kolla (both as in deliverable and project). The fact is Kolla has some shortcomings that would likely cripple its usage as a possible DevStack replacement/booster in the gate in its current state. 
My idea is to keep this centralized but with more visibility, and ask projects to officially support this method of deliverable distribution. The whole undertaking stems from the fact that I perceive modern software distribution as based on containers - you have the recipe, you have the image, you can use it - you have the insight into how it came to be and are also able to reduce repetition of deployment steps regarding building/installation, all by "official" means. Since the TC is about _technical_ _governance_, I'd say this project fits, as it deals with the technical part of organizing proper tooling for the job and promoting it in the OpenStack community far and wide. > > It would be easy to design deployment scenarios of subsets of > > OpenStack > > projects that work together to achieve some cloud-imagined goal other > > than plain IaaS. Want some baremetal provisioning? Here you go. > > Need a safe place for credentials? Be my guest! > I am not sure what you mean there? Do you want to map OpenStack "sample > configs" with gate jobs? More or less - yes. They seem to need some refresh in the first place (I guess this is where Ironic could shine as well). Currently "sample configs" are vague descriptions of possibilities with listings of projects and links. They should really be captured by deployment scenarios we can offer and have them tested in the gate. This is an interesting matter in its own right. We at Kolla Ansible started some discussion based also on user feedback (which we want to enhance further with the Kolla Klub) [1]. The goal is to give good coverage of scenarios users are currently interested in while keeping CI resource usage low. You might notice these hardly align with "sample configs". One reason for that is catering for real service coverage, rather than a very specific use case. Another is likely a different audience than the marketing site. 
[1] https://etherpad.openstack.org/p/KollaAnsibleScenarios -yoctozepto From cboylan at sapwetik.org Mon Apr 6 17:49:16 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 06 Apr 2020 10:49:16 -0700 Subject: New OpenDev Communication Channels Message-ID: <69a3980b-9e07-4b20-a45c-952c006c69c9@www.fastmail.com> Hello All, Recently, we've transitioned to using #opendev on Freenode for our synchronous IRC communications. Since then we've spun up a new service-discuss at lists.opendev.org mailing list [1] where we'll plan changes to services, notify of meetings, answer questions about services and usage, and otherwise communicate about OpenDev. service-announce at lists.opendev.org [2] will remain for important announcements. If you're interested in the developer infrastructure we run please join us on these mailing lists. We encourage every developer to at least subscribe to the service-announce[2] list. It will be very low traffic, and used only for important announcements with wide impact. We are also going to start having our weekly team meeting in #opendev-meeting. The time will continue to be 19:00 UTC Tuesdays. See you there, Clark [1] http://lists.opendev.org/cgi-bin/mailman/listinfo/service-discuss [2] http://lists.opendev.org/cgi-bin/mailman/listinfo/service-announce From openstack at nemebean.com Mon Apr 6 17:58:29 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 6 Apr 2020 12:58:29 -0500 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? 
In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> <6f5b832cf6f7d10356176db9c59e3864f9117c06.camel@redhat.com> <2d085eb5-2100-478c-7ee9-adcd13f860db@openstack.org> Message-ID: On 4/6/20 8:16 AM, Dmitry Tantsur wrote: > > > On Mon, Apr 6, 2020 at 3:13 PM Thierry Carrez > wrote: > > Sean Mooney wrote: > > On Mon, 2020-04-06 at 13:14 +0200, Dmitry Tantsur wrote: > >> On Mon, Apr 6, 2020 at 1:03 PM Sean Mooney > wrote: > >> > >>> On Mon, 2020-04-06 at 10:10 +0200, Dmitry Tantsur wrote: > >>>> The problem is that oslo libraries are OpenStack-specific. Imagine > >>>> metal3, for example. When building our images, we can pull (most of) regular > >>>> Python packages from the base OS, but everything with "oslo" in its name is on > >>>> us. It's a maintenance burden. > >>> > >>> What distros don't ship oslo libs? > >>> > >>> RHEL ships them via the OSP repos > >>> > >> > >> As part of OpenStack, right. > >> > >> > >>> CentOS ships them via RDO > >>> Ubuntu has them in the cloud archive > >>> SUSE also shipped them via their openstack product, although since they are > >>> no longer maintaining that going forward and are moving to the k8s based cloud > >>> offerings, it might be a valid concern there. > >>> > >> > >> All the same here: oslo libs are parts of OpenStack > >> distributions/offerings. Meaning that to install Ironic you need to at > >> least enable OpenStack repositories, even if you package Ironic yourself. > > Ya, that is true, although I think oslo is also a good candidate for standalone reuse > > outside of openstack, like placement, keystone and ironic are. > > So in my preferred world I would love to see oslo in the base os repos. > > What's preventing that from happening ? What is distro policy around > general-purpose but openstack-community-maintained Python libraries like > stevedore or tooz ? 
> > > I don't think such a policy exists. > > I think it's based on the actual usage by non-OpenStack consumers. I can > only speculate what prevents them from using e.g. oslo.config or > stevedore. Maybe they see that the source and documentation are hosted > on openstack.org and assume they're only for > OpenStack or somehow require OpenStack (i.e. same problem)? I want to interject here and mention that the Oslo team actually differentiates between things named oslo* and things not named oslo that are maintained by the Oslo team. The former are for OpenStack-specific logic and are not necessarily designed for broader use. The latter are intended as general purpose libraries to be used outside of OpenStack. This obviously does not help here at all since it just muddies the water about what is and is not "for OpenStack", but I think it's important to note that this is not an either-or question for Oslo. > > Maybe it's something that docs.opendev.org/stevedore > and opendev.org/stevedore > (no openstack) could help fixing? > > Dmitry > > > FWIW in Ubuntu all oslo libraries are packaged as part of the "base OS > repos", and therefore indistinguishable from other Python libraries in > terms of reuse. The 'cloud archive' is just an additive repository that > allows older LTS users to use the most recent OpenStack releases. 
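On the reuse question discussed above: stevedore is essentially a thin layer over standard Python entry points, so consuming it (or the mechanism underneath it) requires nothing OpenStack-specific. A stdlib-only sketch of that discovery mechanism, with a made-up group name ("acme.drivers" is purely hypothetical):

```python
from importlib.metadata import entry_points

def discover(group):
    """Map plugin names to loaded objects for an entry-point group.

    This is the same packaging-metadata mechanism that stevedore's
    managers wrap; no OpenStack code is involved.
    """
    eps = entry_points()
    try:
        selected = eps.select(group=group)   # Python 3.10+
    except AttributeError:
        selected = eps.get(group, [])        # Python 3.8/3.9 fallback
    return {ep.name: ep.load() for ep in selected}

# With no installed package registering "acme.drivers",
# discovery simply returns an empty mapping.
print(discover("acme.drivers"))
```

If memory serves, the stevedore equivalent is roughly `stevedore.extension.ExtensionManager(namespace="acme.drivers")`; the point is that the namespace and plugins are plain setuptools metadata, usable by any Python project regardless of whether OpenStack is installed.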
> > -- > Thierry Carrez (ttx) > From smooney at redhat.com Mon Apr 6 18:10:36 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 19:10:36 +0100 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: <7f36997aa8cbdd0ac77fa4fdaae6cdb02beb4a6a.camel@redhat.com> On Mon, 2020-04-06 at 19:44 +0200, Radosław Piliszek wrote: > > > I am thinking something Kolla-esque > > > as LOCI does not seem that alive nowadays (and does not seem to be > > > getting that much level of testing Kolla has from Kolla Ansible and > > > TripleO as both deployment projects consume Kolla images). > > > > Question: Why would we want another competing project? How do you > > intend to work with Kolla? Do you want to have this image building in > > the projects, and use another tooling to deploy those images? Did you > > start collaborating/discussing with non-TripleO projects on this? > > > > (snip, continued) > > > > Maybe I should rephrase. How do you want to make this work with Kolla, > > Triple-O, and other deployment projects outside those two? Do we > > distribute and collaborate (each project got a way to build its > > images), or do we centralize (LOCI/kolla - way)? > > Let me answer both versions. I feel like I put it badly; > I did not mean to start a completely separate project. What I wanted > to say is that I see it best to build a possible common containerization > solution off Kolla (both as in deliverable and project). The fact is > Kolla has some shortcomings that would likely cripple its usage as a > possible DevStack replacement/booster in the gate in its current state. For what it's worth, I remember talking with Clark about whether using the Kolla pre-built images for other projects would help speed up the gate in any way. 
The conclusion we came to at the time was that, due to the way we pre-stage git repos in the VM, we do not expect using Kolla images in the gate to give any real speed-up, and it actually has a disadvantage: we lose any validation that services are co-installable on the same host outside of containers. E.g. we don't have validation that the dependencies of different projects don't conflict, since they are all in different containers, so without addressing that I think it would actually be a regression. Not that I don't like Kolla images (I do), but it would not be good to use them in the gate and use Kolla Ansible instead of DevStack, in my view, if we consider co-installability to be important. The same goes for any containerised solution, for that matter. > My idea is to keep this centralized but with more visibility and ask > projects to officially support this method of deliverable distribution. > The whole undertaking stems from the fact that I perceive modern > software distribution as based on containers - you have the recipe, > you have the image, you can use it - you have the insight in how > it got to be and also are able to reduce repeatability of deployment > steps regarding building/installation, all by "official" means. > Since TC is about _technical_ _governance_, I'd say this project fits > as it deals with the technical part of organizing proper tooling for > the job and promoting it in the OpenStack community far and wide. > > > > It would be easy to design deployment scenarios of subsets of > > > OpenStack > > > projects that work together to achieve some cloud-imagined goal other > > > than plain IaaS. Want some baremetal provisioning? Here you go. > > > Need a safe place for credentials? Be my guest! > > I am not sure what you mean there? Do you want to map OpenStack "sample > > configs" with gate jobs? > > More or less - yes. They seem to need some refresh in the first place > (I guess this is where Ironic could shine as well).
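Sean's co-installability concern above (dependency conflicts between projects stay invisible when every service runs in its own container, but surface immediately on a shared host) can be sketched as a toy conflict check over exact version pins. This is a hypothetical illustration, not any OpenStack tool; the package names and pins are made up:

```python
def parse_pins(requirements):
    """Map package name -> set of exact '==' pins found in a requirements list."""
    pins = {}
    for line in requirements:
        name, _, version = line.partition("==")
        if version:
            pins.setdefault(name.strip(), set()).add(version.strip())
    return pins

def conflicts(*project_requirements):
    """Return packages pinned to different exact versions by different projects.

    Installing each project in its own container hides these conflicts;
    installing them all on one host (as DevStack does) surfaces them.
    """
    merged = {}
    for reqs in project_requirements:
        for name, versions in parse_pins(reqs).items():
            merged.setdefault(name, set()).update(versions)
    return {name: sorted(v) for name, v in merged.items() if len(v) > 1}
```

For example, `conflicts(["oslo.config==8.0.0"], ["oslo.config==7.4.1"])` flags `oslo.config` as non-co-installable, which a per-container gate run would never notice.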
> Currently "sample configs" are vague descriptions of possibilities > with listings of projects and links. They should really be captured by > deployment scenarios we can offer and have them tested in the gate. > This is an interesting matter in its own right. We at Kolla Ansible > started some discussion based also on user feedback (which we want to > enhance further with the Kolla Klub) [1]. The goal is to give good > coverage of scenarios users are currently interested in while keeping > CI resource usage low. You might notice these hardly align with "sample > configs". One reason for that is catering for real service coverage, > rather than very specific use case. Another is likely different audience > than marketing site. > > [1] https://etherpad.openstack.org/p/KollaAnsibleScenarios > > -yoctozepto > From smooney at redhat.com Mon Apr 6 18:12:59 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 19:12:59 +0100 Subject: New OpenDev Communication Channels In-Reply-To: <69a3980b-9e07-4b20-a45c-952c006c69c9@www.fastmail.com> References: <69a3980b-9e07-4b20-a45c-952c006c69c9@www.fastmail.com> Message-ID: <8b6fb08caad67c5a22f209703b2d751eda38798c.camel@redhat.com> On Mon, 2020-04-06 at 10:49 -0700, Clark Boylan wrote: > Hello All, > > Recently, we've transitioned to using #opendev on Freenode for our synchronous IRC communications. Since then we've > spun up a new service-discuss at lists.opendev.org mailing list [1] where we'll plan changes to services, notify of > meetings, answer questions about services and usage, and otherwise communicate about OpenDev. > service-announce at lists.opendev.org [2] will remain for important announcements. If you're interested in the developer > infrastructure we run please join us on these mailing lists. Just so I understand: these are lists for the common infra services provided by OpenDev, and not for individual project discussions.
So gate failures/planned upgrades, but not Nova or Kata Containers topics, right? > > We encourage every developer to at least subscribe to the service-announce[2] list. It will be very low traffic, and > used only for important announcements with wide impact. > > We are also going to start having our weekly team meeting in #opendev-meeting. The time will continue to be 19:00 UTC > Tuesdays. > > See you there, > Clark > > [1] http://lists.opendev.org/cgi-bin/mailman/listinfo/service-discuss > [2] http://lists.opendev.org/cgi-bin/mailman/listinfo/service-announce > From aj at suse.com Mon Apr 6 18:15:26 2020 From: aj at suse.com (Andreas Jaeger) Date: Mon, 6 Apr 2020 20:15:26 +0200 Subject: [stackalytics] x/stackalytics repo still used? Message-ID: <3d12cc71-b587-11a2-9d02-509e28e993ab@suse.com> Hi Sergey and stackalytics experts, the wiki at https://wiki.openstack.org/wiki/Stackalytics#Company_affiliation points to using the github site for stackalytics - but I see that there are still submissions to the opendev repo at https://review.opendev.org/#/q/project:x/stackalytics So, how are those repos used? Is it time to retire the x/stackalytics repository so that nobody can submit there? Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From openstack at nemebean.com Mon Apr 6 18:16:58 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 6 Apr 2020 13:16:58 -0500 Subject: [release][oslo] FFE request for Oslo Message-ID: Like, all of it. :-) Okay, _most_ of it. Specifically the projects in https://review.opendev.org/#/c/717816/ These all had changes in them which normally don't require a release, like doc and test changes. However, if we don't release them before the stable branches are cut, the stable branches won't include any of those doc and test changes.
This means the docs will be out of date, and tests may break completely (it would not be the first time). We could backport, but there are kind of a lot of patches involved so it would be much easier to just do some trivial releases now and not have to mess with it later. Apologies for dropping the ball on this since it should have been proposed last week, but I hope you will take pity on me and grant an FFE to undo my mistake. :-) Thanks. -Ben From radoslaw.piliszek at gmail.com Mon Apr 6 18:22:05 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 6 Apr 2020 20:22:05 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <7f36997aa8cbdd0ac77fa4fdaae6cdb02beb4a6a.camel@redhat.com> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> <7f36997aa8cbdd0ac77fa4fdaae6cdb02beb4a6a.camel@redhat.com> Message-ID: Just to clarify - I'm actually in favor of dropping the plain coinstallability rule. I wrote this before: we have different ways to ensure separation: virtual envs and OCI images. People can and do use them. I don't think this rule brings any benefits nowadays, very diverse projects can be used together even if they decide they need different versions of some internal library - for whatever reason. What counts are well-defined interfaces - this is the only way to ensure cofunctionality, other measures are simply workarounds. ;-) Dropping py2 deserves dropping old mindset. :-) -yoctozepto pon., 6 kwi 2020 o 20:10 Sean Mooney napisał(a): > > On Mon, 2020-04-06 at 19:44 +0200, Radosław Piliszek wrote: > > > > I am thinking something Kolla-esque > > > > as LOCI does not seem that alive nowadays (and does not seem to be > > > > getting that much level of testing Kolla has from Kolla Ansible and > > > > TripleO as both deployment projects consume Kolla images). > > > > > > Question: Why would we want another competing project? How do you > > > intend to work with Kolla? 
Do you want to have this image building in > > > the projects, and use another tooling to deploy those images? Did you > > > start collaborating/discussing with non-TripleO projects on this? > > > > > > (snip, continued) > > > > > > Maybe I should rephrase. How do you want to make this work with Kolla, > > > Triple-O, and other deployment projects outside those two? Do we > > > distribute and collaborate (each project got a way to build its > > > images), or do we centralize (LOCI/kolla - way)? > > > > Let me answer both versions. I feel I phrased it poorly; > > I did not mean to start a completely separate project. What I wanted > > to say is that I see it best to build a possible common containerization > > solution off Kolla (both as in deliverable and project). The fact is > > Kolla has some shortcomings that would likely cripple its usage as > > possible DevStack replacement/booster in gate in its current state. > For what it's worth, I remember talking with Clark about whether using the pre-built Kolla > images for other projects would help speed up the gate in any way. > The conclusion we came to at the time was that, due to the way we pre-stage > git repos in the VM, we do not expect using Kolla images in the gate > to give any real speed-up, and it actually has a disadvantage: > > we lose any validation that services are co-installable on the > same host outside of containers. > > E.g. we don't have validation that the dependencies of different projects don't conflict, > since they are all in different containers, so without addressing that I think > it would actually be a regression. > > Not that I don't like Kolla images (I do), but it would not be good to use > them in the gate and use Kolla Ansible instead of DevStack, in my view, > if we consider co-installability to be important. > The same goes for any containerised solution, for that matter.
> > My idea is to keep this centralized but with more visibility and ask > > projects to officially support this method of deliverable distribution. > > The whole undertaking stems from the fact that I perceive modern > > software distribution as based on containers - you have the recipe, > > you have the image, you can use it - you have the insight in how > > it got to be and also are able to reduce repeatability of deployment > > steps regarding building/installation, all by "official" means. > > Since TC is about _technical_ _governance_, I'd say this project fits > > as it deals with the technical part of organizing proper tooling for > > the job and promoting it in the OpenStack community far and wide. > > > > > > It would be easy to design deployment scenarios of subsets of > > > > OpenStack > > > > projects that work together to achieve some cloud-imagined goal other > > > > than plain IaaS. Want some baremetal provisioning? Here you go. > > > > Need a safe place for credentials? Be my guest! > > > I am not sure what you mean there? Do you want to map OpenStack "sample > > > configs" with gate jobs? > > > > More or less - yes. They seem to need some refresh in the first place > > (I guess this is where Ironic could shine as well). > > Currently "sample configs" are vague descriptions of possibilities > > with listings of projects and links. They should really be captured by > > deployment scenarios we can offer and have them tested in the gate. > > This is an interesting matter in its own right. We at Kolla Ansible > > started some discussion based also on user feedback (which we want to > > enhance further with the Kolla Klub) [1]. The goal is to give good > > coverage of scenarios users are currently interested in while keeping > > CI resource usage low. You might notice these hardly align with "sample > > configs". One reason for that is catering for real service coverage, > > rather than very specific use case.
Another is likely different audience > > than marketing site. > > > > [1] https://etherpad.openstack.org/p/KollaAnsibleScenarios > > > > -yoctozepto > > > From sean.mcginnis at gmx.com Mon Apr 6 18:23:34 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 6 Apr 2020 13:23:34 -0500 Subject: [release][oslo] FFE request for Oslo In-Reply-To: References: Message-ID: <59b7ffef-0961-6daa-6e5d-66c265da3e34@gmx.com> On 4/6/20 1:16 PM, Ben Nemec wrote: > Like, all of it. :-) > > Okay, _most_ of it. Specifically the projects in > https://review.opendev.org/#/c/717816/ > > These all had changes in them which normally don't require release, > like doc and test changes. However, if we don't release them before > the stable branches are cut, the stable branches won't include any of > those doc and test changes. This means the docs will be out of date, > and tests may break completely (it would not be the first time). > > We could backport, but there are kind of a lot of patches involved so > it would be much easier to just do some trivial releases now and not > have to mess with it later. > > Apologies for dropping the ball on this since it should have been > proposed last week, but I hope you will take pity on me and grant an > FFE to undo my mistake. :-) > > Thanks. > > -Ben > This looks like a low-risk release to do, and we still have some time to shake out any unintended side effects (if there's even a chance of that) with these. I think it's worthwhile to pick up these administrative changes to avoid the extra work of backporting all of them to the stable/ussuri branch. So the FFE makes sense to me. Sean From sean.mcginnis at gmx.com Mon Apr 6 18:28:38 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 6 Apr 2020 13:28:38 -0500 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet In-Reply-To: References: <4733205b-ae20-90af-490b-ce56434f22e4@gmx.com> Message-ID: >>> Thanks for the detailed response Sean.
I don't have an issue with the >>> cycle model - Ironic is still tied to the cyclical release model. The >>> part that I disagree with is the requirement to create an intermediary >>> release. It shouldn't be a problem if bifrost doesn't make a feature >>> release between Train and Ussuri, we'll just do a final Ussuri >>> release. It's the intermediary I'd like to be optional, rather than >>> the final cycle release. >>> >> I would suggest switching these to cycle-with-rc then. There is one >> release candidate that has to happen just before the final release for >> the cycle, but that's mainly to make sure everything is in good shape >> before we declare it done. That sounds like it might fit better with >> what the team wants to do here. > But what if we want to create a feature release mid-cycle? Some cycles > we do, some we don't. > With cycle-with-rc, that does allow *beta* releases to be done at any point during the cycle. But those would be marked as b1, b2, etc. releases. This allows those that want to try out upcoming features to grab them if they want them, but would prevent anyone else from accidentally picking up something before it is quite ready. I'm guessing this might not be what you are looking for though. We do have another release model called cycle-automatic. This was introduced for tempest plugins to just do a release at the end of the cycle to make sure there is a tag to denote the tempest version the plugin was originally developed for. Since some plugins are being picked up more often downstream, this model does allow for additional releases to be proposed at any point during the development cycle. We will need to discuss this as a team to see if this makes sense for non-tempest plugins. It was intended only for those types of deliverables. I just mention it here as something that we do have in place that might be adapted to fit what the team needs. 
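The reason b1/b2 tags keep casual consumers away is PEP 440 ordering: a pre-release sorts before the final release with the same number, and pip skips pre-releases unless explicitly asked for them. A toy sort key illustrating that ordering (a sketch that assumes equal-length release segments, not a full PEP 440 parser):

```python
import re

PRE_RANK = {"a": 0, "b": 1, "rc": 2}

def release_key(version):
    """Sort key for PEP 440-style versions such as '4.0.0' or '4.0.0.0b1'.

    Pre-releases (a/b/rc) rank before the final release with the same
    release segment, which is why a 'b1' tag is invisible to consumers
    who only install final releases.
    """
    m = re.fullmatch(r"(\d+(?:\.\d+)*)(?:(a|b|rc)(\d+))?", version)
    if not m:
        raise ValueError(f"unsupported version: {version}")
    nums = tuple(int(part) for part in m.group(1).split("."))
    if m.group(2):
        pre = (PRE_RANK[m.group(2)], int(m.group(3)))
    else:
        pre = (len(PRE_RANK), 0)  # final releases sort after any pre-release
    return (nums, pre)
```

So a cycle-with-rc deliverable can tag 4.0.0.0b1, 4.0.0.0b2, 4.0.0.0rc1 during the cycle, and only 4.0.0.0 is what ordinary installs pick up.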
But we also need to consider what we are communicating to downstream consumers of our releases, so I'm not entirely sure at this point if it makes sense, or would be a good thing, to allow other types of deliverables to use this model. Sean From allison at openstack.org Mon Apr 6 18:57:01 2020 From: allison at openstack.org (Allison Price) Date: Mon, 6 Apr 2020 13:57:01 -0500 Subject: [all] OSF Community Meeting Presentation & Recordings Message-ID: Hi everyone, Thank you to those who attended any of the community meetings last week! If you missed one of the calls last week, below are links to the presentation [1], recording in English [2], and recording in Mandarin [3]. A lot of links were discussed and you can find those in the meeting section of the notes as well as the chat from the recordings. We are planning to do this format with project and community updates and OSF event updates quarterly. Stay tuned for the next meeting, and let me know if you have any questions in the meantime. In addition to the quarterly meetings, we would like to hear from the project communities on topics you would like to cover in community meetings or have covered. If you have an idea you would like to share, please drop it in this etherpad [4]. If you see a topic already shared that you would like to collaborate on, please share your name, email and IRC nick next to the topic. Cheers, Allison [1] https://docs.google.com/presentation/d/1l05skj_BCfF8fgYWu4n0b1rQmbNhHp8sMeYcb-v-rdA/edit?usp=sharing [2] https://zoom.us/rec/share/7vVXdIvopzxIYbPztF7SVpAKXYnbX6a82iMaqfZfmEl1b0Fqb6j3Zh47qPSV_ar2 [3] https://zoom.us/rec/share/zudNaIrQ6mNLR9Lvr2Xxf5QQIL7geaa8hCcf_vsIxEwoHkn0b8k_fEoYuXLsNi8C?startTime=1585879294000 [4] https://etherpad.openstack.org/p/OSF_Community_Meetings Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zaitcev at redhat.com Mon Apr 6 19:04:10 2020 From: zaitcev at redhat.com (Pete Zaitcev) Date: Mon, 6 Apr 2020 14:04:10 -0500 Subject: choosing object storage In-Reply-To: <51cb6461-dfa6-3b2b-c053-0318bc280c19@catalyst.net.nz> References: <215ab035-fc25-c7ff-dc74-657c0343b3ae@plum.ovh> <51cb6461-dfa6-3b2b-c053-0318bc280c19@catalyst.net.nz> Message-ID: <20200406140410.1ac82bcc@suzdal.zaitcev.lan> On Mon, 6 Apr 2020 18:47:36 +1200 Mark Kirkwood wrote: > There are a number of considerations (disclaimer we run Ceph block and > Swift object storage): [...] Mark, could you provide some numbers for cluster size, number of objects and the rate of change for both Ceph and Swift? I imagine some of it may be proprietary, but perhaps the rate of ingestion is available? E.g. Swift is NN% today, Ceph is MM%, rate of growth is XX%? The last time we had a Summit presentation on the topic, it was by the San Diego Supercomputer Center. The last time I saw anyone publish their Swift data at all, it was Turkcell, who had a 36PB cluster and planned to grow it to 50PB by the end of 2019. They started that cluster in the Icehouse release with 250GB drives! Thanks, -- Pete From gmann at ghanshyammann.com Mon Apr 6 19:35:20 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 14:35:20 -0500 Subject: [tc][election] campaign discussion: what we can improve in TC & how? In-Reply-To: References: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> Message-ID: <17150fd759f.dce6261f212355.4338523042950219152@ghanshyammann.com> ---- On Sun, 05 Apr 2020 15:07:21 -0500 Radosław Piliszek wrote ---- > Hello Ghanshyam, > > > What you think we should and must improve in TC ? This can be > > the involvement of TC in the process from the governance point of view > > or technical help for each project. > > > > - Do we have too much restriction on project sides and not giving them > > a free hand? If yes, what we can improve and how?
> > From my current point of view, OpenStack TC is very liberal. > I base my opinion on some discussions of yours I read on ML and IRC and > also the non-observability of TC influences in Kolla. :-) > I think the current level of control is just right for many projects. > But maybe not all. I guess this is a good question to ask all > OpenStackers rather than just us. > > That said, I believe it is wise to consider this broad topic in the > context of the recent Ironic thread on this ML. > The not-so-well-defined goal of TC for the upcoming times would be to > redefine OpenStack as something more (or maybe even "else") than open > source platform for doing IaaS. OpenStack is, as the name gladly > suggests, a stack, a pile of open source software projects, mostly in > Python, sharing common quality standards (or at least trying to!) under > TC guidance. It should be considered laudable to be part of OpenStack > rather than seek a way to escape it. If it is not, then we might as > well disband this and go home (btw, #stayhome). > > As for simpler matters, TC might assume and advertise its role as > coordinator of cross-project efforts. And I don't mean current > community goals. I am thinking: if someone sees that by using project X > and project Y one could potentially achieve great thing Z, TC should be > offering its guidance on how to best approach this, in coordination > with cores from the relevant projects, and not in a way that enforces > TC to always intervene. Note this idea aligns with possible upcoming > TC-UC merger. True. This is a good point, and merging the TC and UC is one of the good steps toward it. Evaluating use cases with all possible solutions (as there might be more than one way to solve things in OpenStack) is something we really need in order to make OpenStack easy to understand and use. Do you think SIGs can play a big role here, working more closely with the TC? -gmann > > > - Is there less interaction from TC with projects?
I am sure a few > > projects/members do not even know what the TC is for. What's your > > idea to solve this? > > I think this is partly because OpenStack core projects are considered > very mature. Continuing on the thought of control, quality and prestige > associated with OpenStack, a good short-term goal would be to revisit > the OpenStack projects and possibly restructure/deprecate some that > need this - considering both integral usability as well as standalone. > > I don't think TC transparency needs 'fixing'. This is actually a good > thing (TM) - as long as projects deliver quality we expect, that is. > > -yoctozepto > From smooney at redhat.com Mon Apr 6 19:35:36 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 20:35:36 +0100 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> <7f36997aa8cbdd0ac77fa4fdaae6cdb02beb4a6a.camel@redhat.com> Message-ID: On Mon, 2020-04-06 at 20:22 +0200, Radosław Piliszek wrote: > Just to clarify - I'm actually in favor of dropping the plain > coinstallability rule. > > I wrote this before: we have different ways to ensure separation: > virtual envs and OCI images. > People can and do use them. > > I don't think this rule brings any benefits nowadays, very diverse > projects can be used together even if they decide they need different > versions of some internal library - for whatever reason. > What counts are well-defined interfaces - this is the only way to > ensure cofunctionality, other measures are simply workarounds. ;-) I'm not sure I agree.
From a downstream perspective I think that will make packaging more difficult, and from an upstream perspective my two favorite installers are DevStack and Kolla; while I have tried to use Kolla for dev in the past, I still prefer DevStack, and I'm not sure I would vote for a move that may make that impossible, unless Kolla's dev mode has advanced significantly from when I last used it. > > Dropping py2 deserves dropping old mindset. :-) It deserves re-evaluating them, sure, but dropping for the sake of it without considering the costs and benefits, no. > > -yoctozepto > > pon., 6 kwi 2020 o 20:10 Sean Mooney napisał(a): > > > > On Mon, 2020-04-06 at 19:44 +0200, Radosław Piliszek wrote: > > > > > I am thinking something Kolla-esque > > > > > as LOCI does not seem that alive nowadays (and does not seem to be > > > > > getting that much level of testing Kolla has from Kolla Ansible and > > > > > TripleO as both deployment projects consume Kolla images). > > > > > > > > Question: Why would we want another competing project? How do you > > > > intend to work with Kolla? Do you want to have this image building in > > > > the projects, and use another tooling to deploy those images? Did you > > > > start collaborating/discussing with non-TripleO projects on this? > > > > > > > > (snip, continued) > > > > > > > > Maybe I should rephrase. How do you want to make this work with Kolla, > > > > Triple-O, and other deployment projects outside those two? Do we > > > > distribute and collaborate (each project got a way to build its > > > > images), or do we centralize (LOCI/kolla - way)? > > > > > > Let me answer both versions. I feel I phrased it poorly; > > > I did not mean to start a completely separate project. What I wanted > > > to say is that I see it best to build a possible common containerization > > > solution off Kolla (both as in deliverable and project).
The fact is > > > Kolla has some shortcomings that would likely cripple its usage as > > > possible DevStack replacement/booster in gate in its current state. > > > > For what it's worth, I remember talking with Clark about whether using the pre-built Kolla > > images for other projects would help speed up the gate in any way. > > The conclusion we came to at the time was that, due to the way we pre-stage > > git repos in the VM, we do not expect using Kolla images in the gate > > to give any real speed-up, and it actually has a disadvantage: > > > > we lose any validation that services are co-installable on the > > same host outside of containers. > > > > E.g. we don't have validation that the dependencies of different projects don't conflict, > > since they are all in different containers, so without addressing that I think > > it would actually be a regression. > > > > Not that I don't like Kolla images (I do), but it would not be good to use > > them in the gate and use Kolla Ansible instead of DevStack, in my view, > > if we consider co-installability to be important. > > The same goes for any containerised solution, for that matter.
> > > > > > > > It would be easy to design deployment scenarios of subsets of > > > > > OpenStack > > > > > projects that work together to achieve some cloud-imagined goal other > > > > > than plain IaaS. Want some baremetal provisioning? Here you go. > > > > > Need a safe place for credentials? Be my guest! > > > > > > > > I am not sure what you mean there? Do you want to map OpenStack "sample > > > > configs" with gate jobs? > > > > > > More or less - yes. They seem to need some refresh in the first place > > > (I guess this is where Ironic could shine as well). > > > Currently "sample configs" are vague descriptions of possibilities > > > with listings of projects and links. They should really be captured by > > > deployment scenarios we can offer and have them tested in the gate. > > > This is an interesting matter in its own right. We at Kolla Ansible > > > started some discussion based also on user feedback (which we want to > > > enhance with further with the Kolla Klub) [1]. The goal is to give good > > > coverage of scenarios users are currently interested in while keeping > > > CI resource usage low. You might notice these hardly align with "sample > > > configs". One reason for that is catering for real service coverage, > > > rather than very specific use case. Another is likely different audience > > > than marketing site. 
> > > > > > > > It would be easy to design deployment scenarios of subsets of > > > > > OpenStack > > > > > projects that work together to achieve some cloud-imagined goal other > > > > > than plain IaaS. Want some baremetal provisioning? Here you go. > > > > > Need a safe place for credentials? Be my guest! > > > > > > > > I am not sure what you mean there? Do you want to map OpenStack "sample > > > > configs" with gate jobs? > > > > > > More or less - yes. They seem to need some refresh in the first place > > > (I guess this is where Ironic could shine as well). > > > Currently "sample configs" are vague descriptions of possibilities > > > with listings of projects and links. They should really be captured by > > > deployment scenarios we can offer and have them tested in the gate. > > > This is an interesting matter in its own right. We at Kolla Ansible > > > started some discussion based also on user feedback (which we want to > > > enhance further with the Kolla Klub) [1]. The goal is to give good > > > coverage of scenarios users are currently interested in while keeping > > > CI resource usage low. You might notice these hardly align with "sample > > > configs". One reason for that is catering for real service coverage, > > > rather than very specific use case. Another is likely different audience > > > than marketing site.
(disclosure: I was one of the people who was there for the initial formation of the idea, so I do have a bias). 1 - Yes, I know people have issues with prometheus, and how it works, but for better or worse, in the new era, it is a standard of sorts. 2 - Everyone doesn't have to do a Zane on it and upload 1400 lines :) 3 - https://governance.openstack.org/ideas/ideas/teapot/index.html From gmann at ghanshyammann.com Mon Apr 6 19:42:57 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 14:42:57 -0500 Subject: [tc][election] campaign discussion: what we can improve in TC & how? In-Reply-To: References: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> Message-ID: <171510470b5.df6ebf4d212482.8312390132510784732@ghanshyammann.com> ---- On Mon, 06 Apr 2020 10:11:32 -0500 Mohammed Naser wrote ---- > On Mon, Apr 6, 2020 at 10:59 AM Sean McGinnis wrote: > > > > > > >> What you think we should and must improve in TC ? This can be > > >> the involvement of TC in the process from the governance point of view or technical > > >> help for each project. Few of the question is below but feel free to add your improvement > > >> points. > > > Meetings. Seriously, I have no idea how I still can't convince people > > > to meet more than > > > one a month. We're a key part of the OpenStack success and we should be meeting > > > just as often as we can to be able to continue to drive things > > > > > > A month is too long of a cycle to drive things out. > > > > OK, sub-question for the TC then. > > > > Why do you feel the only place you can drive things is inside of a time > > restricted, geo-restrictive meeting time? > > > > I think it's mostly a follow-up thing and a place to drive discussion. The > office hours have largely just become a quiet area where not much happens > these days. The "meetings" we have are simply to check the box that is > given to us from the foundation to quickly glance over the things we're dealing > with. 
> > We haven't been very successful in driving mailing list only things, an example > is that discussion spirals out for long threads and it becomes quite exhausting > to keep up with it all. > > It would be so much easier if we can meet together, drive some of the efforts > that are happening and reconvene often to keep track of our progress, IMHO. Initially, I was not in favour of more meetings compared to the office hours idea, but considering your very valid point that office hours are almost quiet, I think it's time to try more frequent meetings and speed up or increase TC activities. As human nature goes, the more time we spend together, the more progress and improvements we make :). -gmann > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. https://vexxhost.com > > From gr at ham.ie Mon Apr 6 19:45:30 2020 From: gr at ham.ie (Graham Hayes) Date: Mon, 6 Apr 2020 20:45:30 +0100 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: On 05/04/2020 02:09, Ghanshyam Mann wrote: > This topic is a very important and critical area to solve in the OpenStack community. > I personally feel and keep raising this issue wherever I get the opportunity. > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > Without enough developers (either open or closed source), none of the software can survive. > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > project is facing the less contributor issue as compare to requirements and incoming requests. Few > projects already dead or going to be if we do not solve the less contributors issue now.
> > I know, TC is not directly responsible to solve this issue but we should do something or at least find > the way who can solve this. > > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > raise this issue? or any new crazy Idea? > > > -gmann > This has been my hobby horse for a while :) Honestly, we need to highlight how people *using* OpenStack can contribute. As Artom correctly noted in a message above, we are pretty reliant on paid contributors, and with some of the traditional vendors pulling back we should be diversifying our reach to people who have traditionally felt it was too difficult to contribute. Also - (and this is Graham with his personal hat on, not his TC hat) I think that any company that is a high level sponsor of the foundation, and derives value from OpenStack should be pushing resources back upstream, and not just in sponsorship money. Some of the smaller foundation members donate not just CI resources to opendev, but also developers, and upstream contributions. This should be the done thing across all members. As part of this, we (the TC) should be able to say to these companies where the most value would be for contribution, which is something I think we have gotten a lot better at. From gmann at ghanshyammann.com Mon Apr 6 19:48:11 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 14:48:11 -0500 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: <17151093b41.10b0c61ce212568.1472073217395542033@ghanshyammann.com> ---- On Mon, 06 Apr 2020 09:49:47 -0500 Mohammed Naser wrote ---- > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann wrote: > > > > This topic is a very important and critical area to solve in the OpenStack community. > > I personally feel and keep raising this issue wherever I get the opportunity. 
> > > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > > Without enough developers (either open or closed source), none of the software can survive. > > > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > > project is facing the less contributor issue as compare to requirements and incoming requests. Few > > projects already dead or going to be if we do not solve the less contributors issue now. > > > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > > the way who can solve this. > > > > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > > raise this issue? or any new crazy Idea? > > > > I think this has unfortunately become a self-inflicted wound by us > sticking to our ways and refusing to > adopt change. The space and landscape has changed so much over the > past few years and we've stuck > to our ways and refused to adopt these technologies. > > I think by adding more things that natively run on top of Kubernetes, > we add a whole set of potential > contributors who can use those OpenStack components that want to run > Kubernetes only. This is an interesting idea, but I am thinking about it the other way around. If contributors and requirements from those adjacent technologies come together, then we can adopt or integrate those in OpenStack. For example, the k8s SIG starts gathering requirements and *resources* to fill the 'Z gap' between OpenStack's X component and k8s. Otherwise, it ends up with "yes, this is good to do, but who will do it?" -gmann > > > > > -gmann > > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W.
https://vexxhost.com > > From melwittt at gmail.com Mon Apr 6 19:49:51 2020 From: melwittt at gmail.com (melanie witt) Date: Mon, 6 Apr 2020 12:49:51 -0700 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: <760bc17d-f6a3-285c-5040-d4b25020f7c2@gmail.com> On 4/6/20 08:36, Donny Davis wrote: > On Mon, Apr 6, 2020 at 11:22 AM Artom Lifshitz > wrote: > > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann > > wrote: > > > > This topic is a very important and critical area to solve in the > OpenStack community. > > I personally feel and keep raising this issue wherever I get the > opportunity. > > > > To develop or maintain any software, the very first thing we need > is to have enough developer resources. > > Without enough developers (either open or closed source), none of > the software can survive. > > > > OpenStack current situation on contributors is not the same as it > was few years back.  Almost every > > project is facing the less contributor issue as compare to > requirements and incoming requests. Few > > projects already dead or going to be if we do not solve the less > contributors issue now. > > > > I know, TC is not directly responsible to solve this issue but we > should do something or at least find > > the way who can solve this. > > I'm not running for TC, but I figured I could chime in with some > thoughts, and maybe get TC candidates to react. > > > What do you think about what role TC can play to solve this? What > platform or entity can be used by TC to > > raise this issue? or any new crazy Idea? > > To my knowledge, the vast majority of contributors to OpenStack are > corporate contributors - meaning, they contribute to the community > because it's their job. As companies have dropped out, the contributor > count has diminished. 
Therefore, the obvious solution to the > contributor dearth would be to recruit new companies that use or sell > OpenStack. However, as far as I know, Red Hat is the only company > remaining that still makes money from selling OpenStack as a product. > So if we're looking for new contributor companies, we would have to > look to those that use OpenStack, and try to make the case that it > makes sense for them to get involved in the community. I'm not sure > what this kind of advocacy would look like, or towards which > companies, or what kind of companies, it would be directed. Perhaps > the TC candidates could have suggestions here. And if I've made any > wrong assumptions, by all means correct me. > > I don't think you are too far off.  I used to work in a place where my > job was to help sell Openstack (among other products) and > enable the use of it with customers. > > Customers drive everything vendors do. Things that sell are easy to use. > Customers don't buy the best products, they buy what they > can understand fastest. If customers are asking for a product, it's > because they understand its value. Vendors in turn contribute > to projects because they make money from their investment. > > Now think about the perception and reality of Openstack as a whole. We > have spent the last decade or so writing bleeding edge features. > We have spent very little time on documenting what we do have in > layman's terms. The intended audience of our docs would seem > to me to be other developers. I hope people don't take that as a jab, > it's just the truth. If someone cannot understand how to use > this amazing technology, it won't sell. If it doesn't sell, vendors > leave, if vendors leave the number of contributors goes down. > > If we don't start working at making Openstack easier to consume, then no > amount of technical change will make an impactful difference. I'm not running for the TC either but wanted say Donny's reply here resonates with me. 
When I first started working on OpenStack, I was at Yahoo (now Verizon Media), a company that consumes OpenStack and depends on it for a (now) large portion of their infrastructure. At the time I joined the OpenStack community in 2012, the docs about contributing and the docs about each component were dead simple. I was up and running in under a day and started my first contributions upstream shortly after. Fast forward to now, I find the docs are hard to read and navigate. There are not many layman's terms. And most of all, at least in Nova, the docs are in dire need of being organized. They used to be simple, but when docs moved in-tree, things were hastily cobbled together because, as you mentioned, we're always already stretched trying to deliver bleeding edge features. And, there are also differences in opinion about how docs should be organized and how verbose they are. I have seen docs evolve from simple to complicated because, for example, someone thought they were making an improvement, whereas I might think they were making the docs less usable. I'm not aware that there is any guideline or reference documentation that is to be used as a design goal. Such as, "this is what your landing page should look like", "here's how docs should be organized", "you should have these sections", etc. Sometimes I have thought about proposing a bunch of changes to how our docs are organized. But, full disclosure, I worry that if I do that and if it gets accepted/merged, someone else will completely change all of it later and then all the organization and work I did goes out the window. And I think this worry highlights the fact that there is no "right way" of doing the docs. It's just opinion and everyone has a different opinion. I'm not sure whether that's solvable. I mentioned a guideline or design goal to aspire to, but at the same time, we don't want to be so rigid that projects can't do docs the way they want. So then what?
Per project design goals and guidelines I guess? Or is that too much process? I have wondered how other communities have managed success in the docs department. So, back to the contributors point. I was lucky because by the time docs got hard to consume, I already knew the ropes. I don't know how hard it has been for newer contributors to join since then and how much of the difficulty is related to docs. I'm not sure I've said anything useful, so apologies for derailing the discussion if I've done that. -melanie From smooney at redhat.com Mon Apr 6 19:50:07 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 06 Apr 2020 20:50:07 +0100 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet In-Reply-To: References: <4733205b-ae20-90af-490b-ce56434f22e4@gmx.com> Message-ID: <5060ef3758919eec6727c9c38ca9bff749b73530.camel@redhat.com> On Mon, 2020-04-06 at 13:28 -0500, Sean McGinnis wrote: > > > > Thanks for the detailed response Sean. I don't have an issue with the > > > > cycle model - Ironic is still tied to the cyclical release model. The > > > > part that I disagree with is the requirement to create an intermediary > > > > release. It shouldn't be a problem if bifrost doesn't make a feature > > > > release between Train and Ussuri, we'll just do a final Ussuri > > > > release. It's the intermediary I'd like to be optional, rather than > > > > the final cycle release. For what it's worth, I was surprised by the requirement to make more than one release under the cycle-with-intermediary model too. I originally thought that when os-vif moved to it we could make intermediate releases if we wanted to, not that we were required to make them at each milestone.
Looking at the text at https://github.com/openstack/releases/blob/35595343ddba5db598a80ce9238df28f27c43a14/doc/source/reference/release_models.rst#cycle-with-intermediary The "cycle-with-intermediary" model describes projects that produce multiple full releases during the development cycle, with a final release to match the end of the cycle. "cycle-with-intermediary" projects commit to produce a release near the end of the 6-month development cycle to be used with projects using the other cycle-based release models that are required to produce a release at that time. Release tags for deliverables using this tag are reviewed and applied by the Release Management team. I'm not actually sure where that requirement of one release per milestone came from, but we were told to do it by the release team, so we do. Releases are relatively cheap, so it's not a burden to do this, but as we saw between m2 and the non-client library freeze, we often have periods where we don't need to do a release yet, or where we would prefer to wait for a patch that we expect to merge a few days after a milestone, and it has been annoying to have to cut a milestone release under the cycle-with-intermediary release model in those cases. If there was a cross-project dependency, I have just asked for a release a day or two later when the patch I was waiting for merged, or waited for the next milestone if it was an internal fix. It's also possible that the requirement for a release at each milestone was miscommunicated for cycle-with-intermediary, but since it has happened 2 or 3 times, I now just assume it's required even if it's not documented as such.
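As an aside, for anyone following along who hasn't requested a release before: each deliverable is described by a YAML file in the openstack/releases repo, and a release is proposed by appending an entry to that file's releases list. The sketch below is illustrative only — the field names are from memory and the version numbers and commit hashes are made up, so check the repo's README for the real schema.

```yaml
# deliverables/ussuri/os-vif.yaml -- illustrative sketch, not a real file
launchpad: os-vif
release-model: cycle-with-intermediary
team: nova
type: library
repository-settings:
  openstack/os-vif: {}
releases:
  # an intermediary release cut mid-cycle
  - version: 2.0.0
    projects:
      - repo: openstack/os-vif
        hash: 0000000000000000000000000000000000000000  # placeholder SHA
  # proposing a new release is just adding another entry like this
  - version: 2.1.0
    projects:
      - repo: openstack/os-vif
        hash: 1111111111111111111111111111111111111111  # placeholder SHA
```

The release team then reviews the patch and applies the tag once it merges, which is why waiting a day or two for a late patch is usually just a matter of asking.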
> > > > But what if we want to create a feature release mid-cycle? Some cycles > > we do, some we don't. > > > > With cycle-with-rc, that does allow *beta* releases to be done at any > point during the cycle. But those would be marked as b1, b2, etc. > releases. This allows those that want to try out upcoming features to > grab them if they want them, but would prevent anyone else from > accidentally picking up something before it is quite ready. > > I'm guessing this might not be what you are looking for though. > > We do have another release model called cycle-automatic. This was > introduced for tempest plugins to just do a release at the end of the > cycle to make sure there is a tag to denote the tempest version the > plugin was originally developed for. Since some plugins are being picked > up more often downstream, this model does allow for additional releases > to be proposed at any point during the development cycle. > > We will need to discuss this as a team to see if this makes sense for > non-tempest plugins. It was intended only for those types of > deliverables. I just mention it here as something that we do have in > place that might be adapted to fit what the team needs. But we also need > to consider what we are communicating to downstream consumers of our > releases, so I'm not entirely sure at this point if it makes sense, or > would be a good thing, to allow other types of deliverables to use this > model. > > Sean > > From gr at ham.ie Mon Apr 6 19:55:54 2020 From: gr at ham.ie (Graham Hayes) Date: Mon, 6 Apr 2020 20:55:54 +0100 Subject: [tc][election] campaign discussion: what we can improve in TC & how? In-Reply-To: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> References: <17147d34c5d.f446588e159165.6785329609446396330@ghanshyammann.com> Message-ID: On 05/04/2020 01:52, Ghanshyam Mann wrote: > As we are in the campaigning phase of the TC election, where we > start the debate on few topics. 
This is one of the topics where I would like > to start the debate. > > First off, I'd like to thank all the candidates for showing interest to > be part of or continuing as TC. > > What you think we should and must improve in TC ? This can be > the involvement of TC in the process from the governance point of view or technical > help for each project. Few of the question is below but feel free to add your improvement > points. > > - Do we have too much restriction on project sides and not giving them a free hand? If yes, what > we can improve and how? Controversial, but I think we did give too much freedom, but the cat is out of the bag now, and it is never going back in. If we look at the recent discussions about what projects don't like, they are things that unfortunately impact users, and give the impression that OpenStack is more disjointed than it really is. Would I love to have a unified CLI that didn't need a decoder ring?[1] 100% Yes. Do I understand why projects push back on using it? Yes (and I once pushed back against the project I was working on using it for other reasons.) Where we are now, we don't have the developers to massively restructure all of OpenStack inside a 6-month release, so we need to find balances, and I think this is where the TC can add value and help push the community and projects forward. > - Is there less interaction from TC with projects? I am sure few projects/members even > do not know even what TC is for? What's your idea to solve this. Yes, but I think that is a sign of the times. We have less rapid evolution in the projects that causes the TC to help out, which drops interaction. Worryingly, I think we are also seeing projects slowing down interaction with the community as a whole - we are in a weird situation globally, so that could have caused it, but if we look at the number of projects that missed the deadline for PTL nominations, it shows a worrying trend.
> -gmann > From gmann at ghanshyammann.com Mon Apr 6 20:03:06 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 15:03:06 -0500 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: <1715116e36a.12332ed93212825.320970824006575198@ghanshyammann.com> ---- On Mon, 06 Apr 2020 10:36:49 -0500 Donny Davis wrote ---- > On Mon, Apr 6, 2020 at 11:22 AM Artom Lifshitz wrote: > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann wrote: > > > > This topic is a very important and critical area to solve in the OpenStack community. > > I personally feel and keep raising this issue wherever I get the opportunity. > > > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > > Without enough developers (either open or closed source), none of the software can survive. > > > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > > project is facing the less contributor issue as compare to requirements and incoming requests. Few > > projects already dead or going to be if we do not solve the less contributors issue now. > > > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > > the way who can solve this. > > I'm not running for TC, but I figured I could chime in with some > thoughts, and maybe get TC candidates to react. > > > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > > raise this issue? or any new crazy Idea? > > To my knowledge, the vast majority of contributors to OpenStack are > corporate contributors - meaning, they contribute to the community > because it's their job. As companies have dropped out, the contributor > count has diminished. 
Therefore, the obvious solution to the > contributor dearth would be to recruit new companies that use or sell > OpenStack. However, as far as I know, Red Hat is the only company > remaining that still makes money from selling OpenStack as a product. > So if we're looking for new contributor companies, we would have to > look to those that use OpenStack, and try to make the case that it > makes sense for them to get involved in the community. I'm not sure > what this kind of advocacy would look like, or towards which > companies, or what kind of companies, it would be directed. Perhaps > the TC candidates could have suggestions here. And if I've made any > wrong assumptions, by all means correct me. > > > > > -gmann > > > > > > I don't think you are too far off. I used to work in a place where my job was to help sell Openstack (among other products) and enable the use of it with customers. > Customers drive everything vendors do. Things that sell are easy to use. Customers don't buy the best products, they buy what they can understand fastest. If customers are asking for a product, it's because they understand its value. Vendors in turn contribute to projects because they make money from their investment. > Now think about the perception and reality of Openstack as a whole. We have spent the last decade or so writing bleeding edge features. We have spent very little time on documenting what we do have in layman's terms. The intended audience of our docs would seem to me to be other developers. I hope people don't take that as a jab, it's just the truth. If someone cannot understand how to use this amazing technology, it won't sell. If it doesn't sell, vendors leave, if vendors leave the number of contributors goes down. > If we don't start working at making Openstack easier to consume, then no amount of technical change will make an impactful difference.
OK, this is one of the key things, and I 100% agree with your point - "our docs would seem to me to be other developers". Does this include 'feature docs & how to use them', or the overall usage of OpenStack, like "projects X and Y can solve use case Z" (what Radosław mentioned in his reply)? As you might know, our documentation team is lacking active contributors and moving towards a SIG, and almost all the documents have moved to and are maintained on the project side. Do you think that is the issue, and could it make the current docs even worse (given the typical developer nature)? Again, the same question arises: how do we get documentation contributors? Can operators help here with their use cases, best practices, etc.? If so, how do we convince them to participate and contribute in the community? -gmann > > -- > ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" From donny at fortnebula.com Mon Apr 6 20:03:35 2020 From: donny at fortnebula.com (Donny Davis) Date: Mon, 6 Apr 2020 16:03:35 -0400 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <760bc17d-f6a3-285c-5040-d4b25020f7c2@gmail.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> <760bc17d-f6a3-285c-5040-d4b25020f7c2@gmail.com> Message-ID: On Mon, Apr 6, 2020 at 3:49 PM melanie witt wrote: > On 4/6/20 08:36, Donny Davis wrote: > > On Mon, Apr 6, 2020 at 11:22 AM Artom Lifshitz > > wrote: > > > > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann > > > wrote: > > > > > > This topic is a very important and critical area to solve in the > > OpenStack community. > > > I personally feel and keep raising this issue wherever I get the > > opportunity. > > > > > > To develop or maintain any software, the very first thing we need > > is to have enough developer resources. > > > Without enough developers (either open or closed source), none of > > the software can survive.
> > > > > > OpenStack current situation on contributors is not the same as it > > was few years back. Almost every > > > project is facing the less contributor issue as compare to > > requirements and incoming requests. Few > > > projects already dead or going to be if we do not solve the less > > contributors issue now. > > > > > > I know, TC is not directly responsible to solve this issue but we > > should do something or at least find > > > the way who can solve this. > > > > I'm not running for TC, but I figured I could chime in with some > > thoughts, and maybe get TC candidates to react. > > > > > What do you think about what role TC can play to solve this? What > > platform or entity can be used by TC to > > > raise this issue? or any new crazy Idea? > > > > To my knowledge, the vast majority of contributors to OpenStack are > > corporate contributors - meaning, they contribute to the community > > because it's their job. As companies have dropped out, the > contributor > > count has diminished. Therefore, the obvious solution to the > > contributor dearth would be to recruit new companies that use or sell > > OpenStack. However, as far as I know, Red Hat is the only company > > remaining that still makes money from selling OpenStack as a product. > > So if we're looking for new contributor companies, we would have to > > look to those that use OpenStack, and try to make the case that it > > makes sense for them to get involved in the community. I'm not sure > > what this kind of advocacy would look like, or towards which > > companies, or what kind of companies, it would be directed. Perhaps > > the TC candidates could have suggestions here. And if I've made any > > wrong assumptions, by all means correct me. > > > > I don't think you are too far off. I used to work in a place where my > > job was to help sell Openstack (among other products) and > > enable the use of it with customers. > > > > Customers drive everything vendors do. 
Things that sell are easy to use. > > Customers don't buy the best products, they buy what they > > can understand fastest. If customers are asking for a product, it's > > because they understand its value. Vendors in turn contribute > > to projects because they make money from their investment. > > > > Now think about the perception and reality of Openstack as a whole. We > > have spent the last decade or so writing bleeding edge features. > > We have spent very little time on documenting what we do have in > > layman's terms. The intended audience of our docs would seem > > to me to be other developers. I hope people don't take that as a jab, > > it's just the truth. If someone cannot understand how to use > > this amazing technology, it won't sell. If it doesn't sell, vendors > > leave, if vendors leave the number of contributors goes down. > > > > If we don't start working at making Openstack easier to consume, then no > > amount of technical change will make an impactful difference. > > I'm not running for the TC either but wanted say Donny's reply here > resonates with me. When I first started working on OpenStack, I was at > Yahoo (now Verizon Media), a company who consumes OpenStack and depends > on it for a (now) large portion of their infrastructure. > > At the time I joined the OpenStack community in 2012, the docs about > contributing and the docs about each component were dead simple. I was > up and running in under a day and started my first contributions > upstream shortly after. > > Fast forward to now, I find the docs are hard to read and navigate. > There's not much layman's terms. And most of all, at least in Nova, is > that the docs are in dire need of being organized. They used to be > simple but when docs moved in-tree things were hastily cobbled together > because as you mentioned, we're always already stretched trying to > deliver bleeding edge features. 
> > And, there are also differences in opinion about how docs should be > organized and how verbose they are. I have seen docs evolve from simple > to complicated because for example: someone thought they were making an > improvement, whereas I might think they were making the docs less > usable. I'm not aware that there is any guideline or reference > documentation that is to be used as a design goal. Such as, "this is > what your landing page should look like", "here's how docs should be > organized", "you should have these sections", etc. > > Sometimes I have thought about proposing a bunch of changes to how our > docs are organized. But, full disclosure, I worry that if I do that and > if it gets accepted/merged, someone else will completely change all of > it later and then all the organization and work I did goes out the > window. And I think this worry highlights the fact that there is no > "right way" of doing the docs. It's just opinion and everyone has a > different opinion. > > I'm not sure whether that's solvable. I mentioned a guideline or design > goal to aspire to, but at the same time, we don't want to be so rigid > that projects can't do docs the way they want. So then what? Per project > design goals and guidelines I guess? Or is that too much process? I have > wondered how other communities have managed success in the docs department. > > So, back to the contributors point. I was lucky because by the time docs > got hard to consume, I already knew the ropes. I don't know how hard it > has been for newer contributors to join since then and how much of the > difficulty is related to docs. > > I'm not sure I've said anything useful, so apologies for derailing the > discussion if I've done that. > > -melanie > > > I have had a couple conversations about trying to put together "docs for mere mortals", but it comes down to time and the right place for it to go. 
I understand we as a community are not really supposed to have an "opinion" on how to best put together a cloud, but maybe it's time we take the collective wisdom from those who know what works and what doesn't... and put something together for all of us normal people out there. I am just a simple human, and I do not think I am alone. This is not meant to be a "our docs suck and we need to refactor them all"... it's more so a "we have the content and the wisdom, so let's build something that someone can put in prod and stand on it." From my perspective this has literally been our barrier to adoption. -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" From melwittt at gmail.com Mon Apr 6 20:13:26 2020 From: melwittt at gmail.com (melanie witt) Date: Mon, 6 Apr 2020 13:13:26 -0700 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> <760bc17d-f6a3-285c-5040-d4b25020f7c2@gmail.com> Message-ID: On 4/6/20 13:03, Donny Davis wrote: > > > On Mon, Apr 6, 2020 at 3:49 PM melanie witt wrote: > > On 4/6/20 08:36, Donny Davis wrote: > > On Mon, Apr 6, 2020 at 11:22 AM Artom Lifshitz > > wrote: > > > > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann > > wrote: > > > > > This topic is a very important and critical area to solve > in the > > OpenStack community. > > > I personally feel and keep raising this issue wherever I > get the > > opportunity. > > > > > > To develop or maintain any software, the very first thing > we need > > is to have enough developer resources. > > > Without enough developers (either open or closed source), > none of > > the software can survive.
> >      > > >      > OpenStack current situation on contributors is not the > same as it > >     was few years back.  Almost every > >      > project is facing the less contributor issue as compare to > >     requirements and incoming requests. Few > >      > projects already dead or going to be if we do not solve > the less > >     contributors issue now. > >      > > >      > I know, TC is not directly responsible to solve this issue > but we > >     should do something or at least find > >      > the way who can solve this. > > > >     I'm not running for TC, but I figured I could chime in with some > >     thoughts, and maybe get TC candidates to react. > > > >      > What do you think about what role TC can play to solve > this? What > >     platform or entity can be used by TC to > >      > raise this issue? or any new crazy Idea? > > > >     To my knowledge, the vast majority of contributors to > OpenStack are > >     corporate contributors - meaning, they contribute to the > community > >     because it's their job. As companies have dropped out, the > contributor > >     count has diminished. Therefore, the obvious solution to the > >     contributor dearth would be to recruit new companies that use > or sell > >     OpenStack. However, as far as I know, Red Hat is the only company > >     remaining that still makes money from selling OpenStack as a > product. > >     So if we're looking for new contributor companies, we would > have to > >     look to those that use OpenStack, and try to make the case > that it > >     makes sense for them to get involved in the community. I'm > not sure > >     what this kind of advocacy would look like, or towards which > >     companies, or what kind of companies, it would be directed. > Perhaps > >     the TC candidates could have suggestions here. And if I've > made any > >     wrong assumptions, by all means correct me. > > > > I don't think you are too far off.  
I used to work in a place > where my > > job was to help sell Openstack (among other products) and > > enable the use of it with customers. > > > > Customers drive everything vendors do. Things that sell are easy > to use. > > Customers don't buy the best products, they buy what they > > can understand fastest. If customers are asking for a product, it's > > because they understand its value. Vendors in turn contribute > > to projects because they make money from their investment. > > > > Now think about the perception and reality of Openstack as a > whole. We > > have spent the last decade or so writing bleeding edge features. > > We have spent very little time on documenting what we do have in > > layman's terms. The intended audience of our docs would seem > > to me to be other developers. I hope people don't take that as a > jab, > > it's just the truth. If someone cannot understand how to use > > this amazing technology, it won't sell. If it doesn't sell, vendors > > leave, if vendors leave the number of contributors goes down. > > > > If we don't start working at making Openstack easier to consume, > then no > > amount of technical change will make an impactful difference. > > I'm not running for the TC either but wanted say Donny's reply here > resonates with me. When I first started working on OpenStack, I was at > Yahoo (now Verizon Media), a company who consumes OpenStack and depends > on it for a (now) large portion of their infrastructure. > > At the time I joined the OpenStack community in 2012, the docs about > contributing and the docs about each component were dead simple. I was > up and running in under a day and started my first contributions > upstream shortly after. > > Fast forward to now, I find the docs are hard to read and navigate. > There's not much layman's terms. And most of all, at least in Nova, is > that the docs are in dire need of being organized. 
They used to be > simple but when docs moved in-tree things were hastily cobbled together > because as you mentioned, we're always already stretched trying to > deliver bleeding edge features. > > And, there are also differences in opinion about how docs should be > organized and how verbose they are. I have seen docs evolve from simple > to complicated because for example: someone thought they were making an > improvement, whereas I might think they were making the docs less > usable. I'm not aware that there is any guideline or reference > documentation that is to be used as a design goal. Such as, "this is > what your landing page should look like", "here's how docs should be > organized", "you should have these sections", etc. > > Sometimes I have thought about proposing a bunch of changes to how our > docs are organized. But, full disclosure, I worry that if I do that and > if it gets accepted/merged, someone else will completely change all of > it later and then all the organization and work I did goes out the > window. And I think this worry highlights the fact that there is no > "right way" of doing the docs. It's just opinion and everyone has a > different opinion. > > I'm not sure whether that's solvable. I mentioned a guideline or design > goal to aspire to, but at the same time, we don't want to be so rigid > that projects can't do docs the way they want. So then what? Per > project > design goals and guidelines I guess? Or is that too much process? I > have > wondered how other communities have managed success in the docs > department. > > So, back to the contributors point. I was lucky because by the time > docs > got hard to consume, I already knew the ropes. I don't know how hard it > has been for newer contributors to join since then and how much of the > difficulty is related to docs. > > I'm not sure I've said anything useful, so apologies for derailing the > discussion if I've done that. 
> > I have had a couple conversations about trying to put together "docs for > mere mortals", but it comes down to time and the right place for it to go. > I understand we as a community are not really supposed to have an > "opinion" on how to best put together a cloud, but maybe it's time we take > the collective wisdom from those who know what works and what doesn't... > and put something together for all of us normal people out there. To be clear, I don't think we have different opinions about how to best put together a cloud. When I say different opinions I mean different opinions about how to organize the doc pages, how much verbosity to have in the content, that sort of thing. I have a personal opinion that being too verbose and detailed in the "main" content page for a concept can make it needlessly hard to understand and not end up helping the reader. In a case like that I'd prefer "main" content to be very concise and then have a link to all the gory details if someone wants to read that as well. Sorry if I made it sound like we differ in opinion about how to put together the cloud. -melanie > I am just a simple human, and I do not think I am alone. This is not > meant to be a "our docs suck and we need to refactor them all"... it's > more so > a "we have the content and the wisdom, so let's build something that > someone can put in prod and stand on it." > > From my perspective this has literally been our barrier to adoption. > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" From gmann at ghanshyammann.com Mon Apr 6 20:16:21 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 15:16:21 -0500 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? 
In-Reply-To: <753836DB-A994-485A-8640-43FD52158CA8@bu.edu> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> <753836DB-A994-485A-8640-43FD52158CA8@bu.edu> Message-ID: <17151230331.117e031c3213071.2369283601600121385@ghanshyammann.com> ---- On Mon, 06 Apr 2020 11:23:10 -0500 Nikolla, Kristi wrote ---- > Hi Ghanshyam, > > Unfortunately, OpenStack is still for the most part corporate in terms of developer resources. It sort of makes sense, it's a cloud platform, and you need a certain scale to justify the costs for learning, adopting and operating. I probably wouldn't be contributing now if my first introduction to OpenStack wasn't as part of my job operating and developing a cloud. > > I don't see a clear path to solve that, but I see a few potential ways to help. > > 1. Advertising and marketing the viability of specific OpenStack projects as standalone tools. I can see value for someone needing a volume service, and if Cinder: a) fits the requirements b) is easy to deploy and learn (eg., well documented standalone use case and tested) c) brings a minimum set of cruft with it. This might encourage more people to use it and encourage wider adoption of the other OpenStack projects if their experience is a good one, with OpenStack becoming a trusted toolbox. +1 on this. If someone sees OpenStack as all 52 projects together, they will just walk away from it. ONAP is a good example to learn from in terms of modularity on the use-case side. > > 2. Making sure we invest more time and effort on documentation. Especially with regards to information on getting started, best practices in terms of architecture, configuration and deployment, and of course contributors guides. We're already a very friendly and welcoming community. The contributors guide has been improved a lot, and Upstream Training, the mentorship program, and the FC SIG have been making a continuous effort for many years to get new people on board, but that alone is not solving this issue.
Do you think we still lack on helping new contributors onboard or something more interesting idea to attract them? > > 3. Investigating and working on integrating OpenStack much more closely with other cloud tools. We're great for IaaS, but clouds today are not only IaaS and we need to evolve and play nice with everything else that someone might encounter in a datacenter. Mohammed brings a great point about integrating with Kubernetes. All these integrations need to be well documented, including best practices, and part of our testing infrastructure. > > To summarize, I would like to see OpenStack scale better. From homelabbers or small businesses who only need a few services, to large datacenters who may be running multiple OpenStacks, multiple Kuberneteses/OpenShifts, monitoring tools, billing systems. This may result in an increase in adoption, which in turn, should result in an increase in contributions. > > I can see the above becoming community goals and the TC doing outreach to document the process and help out. > > > > On Apr 4, 2020, at 9:09 PM, Ghanshyam Mann wrote: > > > > This topic is a very important and critical area to solve in the OpenStack community. > > I personally feel and keep raising this issue wherever I get the opportunity. > > > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > > Without enough developers (either open or closed source), none of the software can survive. > > > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > > project is facing the less contributor issue as compare to requirements and incoming requests. Few > > projects already dead or going to be if we do not solve the less contributors issue now. > > > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > > the way who can solve this. > > > > What do you think about what role TC can play to solve this? 
What platform or entity can be used by TC to > > raise this issue? or any new crazy Idea? > > > > > > -gmann > > > > > From pierre at stackhpc.com Mon Apr 6 20:19:01 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 6 Apr 2020 22:19:01 +0200 Subject: [doc][release][requirements] Issues with wsmeext.sphinxext Message-ID: Hello, A heads up for other projects using wsmeext.sphinxext: it seems to be broken following the release of Sphinx 3.0.0 yesterday. Our openstack-tox-docs job in blazar started to fail with: Extension error: Could not import extension wsmeext.sphinxext (exception: cannot import name 'l_' from 'sphinx.locale' Looks like it would affect aodh and cloudkitty too. Pierre Riteau (priteau) From gmann at ghanshyammann.com Mon Apr 6 20:21:05 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 15:21:05 -0500 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: <17151275ae9.10c76aba7213164.7712086881512900173@ghanshyammann.com> ---- On Mon, 06 Apr 2020 11:34:36 -0500 Jeremy Freudberg wrote ---- > People generally contribute to something that they want to use. It is > really as simple as that. > > - Make sure that people understand what OpenStack is/does > - Make OpenStack easier to use > - Make OpenStack do more things > - Make OpenStack components reusable > > For the first three points: few people want to use something that they > don't understand or that they find difficult to use or that doesn't do > the things they want. For the last point: ok fine, maybe some people > don't need OpenStack but one of our components might be just what they > need to build their own cool thing, so why not mutually benefit? I think all those points are valid, but what about the existing users who are already using them? How can we convince them to contribute back?
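[Archive note on the wsmeext.sphinxext breakage Pierre reports above: the usual short-term workaround for this class of failure is to cap Sphinx in the docs requirements until the extension stops importing the removed sphinx.locale.l_ alias. A hedged sketch; the exact file name and any existing bounds vary per project:]

```text
# doc/requirements.txt (location and existing entries vary per project)
# Assumed short-term workaround: cap Sphinx below 3.0.0, which removed
# the long-deprecated sphinx.locale.l_ alias that wsmeext.sphinxext
# still imports at load time.
Sphinx<3.0.0
```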
Jay did a good job putting the clear pic on users vs contributors. - https://governance.openstack.org/tc/user_survey/analysis-12-2019.html#to-which-projects-does-your-organization-contribute-maintenance-resources-such-as-patches-for-bug-and-reviews-on-master-or-stable-branches -gmann > > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann wrote: > > > > This topic is a very important and critical area to solve in the OpenStack community. > > I personally feel and keep raising this issue wherever I get the opportunity. > > > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > > Without enough developers (either open or closed source), none of the software can survive. > > > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > > project is facing the less contributor issue as compare to requirements and incoming requests. Few > > projects already dead or going to be if we do not solve the less contributors issue now. > > > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > > the way who can solve this. > > > > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > > raise this issue? or any new crazy Idea? > > > > > > -gmann > > > > From donny at fortnebula.com Mon Apr 6 20:22:17 2020 From: donny at fortnebula.com (Donny Davis) Date: Mon, 6 Apr 2020 16:22:17 -0400 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? 
In-Reply-To: <1715116e36a.12332ed93212825.320970824006575198@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> <1715116e36a.12332ed93212825.320970824006575198@ghanshyammann.com> Message-ID: On Mon, Apr 6, 2020 at 4:03 PM Ghanshyam Mann wrote: > ---- On Mon, 06 Apr 2020 10:36:49 -0500 Donny Davis > wrote ---- > > On Mon, Apr 6, 2020 at 11:22 AM Artom Lifshitz > wrote: > > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann > wrote: > > > > > > This topic is a very important and critical area to solve in the > OpenStack community. > > > I personally feel and keep raising this issue wherever I get the > opportunity. > > > > > > To develop or maintain any software, the very first thing we need is > to have enough developer resources. > > > Without enough developers (either open or closed source), none of the > software can survive. > > > > > > OpenStack current situation on contributors is not the same as it was > few years back. Almost every > > > project is facing the less contributor issue as compare to > requirements and incoming requests. Few > > > projects already dead or going to be if we do not solve the less > contributors issue now. > > > > > > I know, TC is not directly responsible to solve this issue but we > should do something or at least find > > > the way who can solve this. > > > > I'm not running for TC, but I figured I could chime in with some > > thoughts, and maybe get TC candidates to react. > > > > > What do you think about what role TC can play to solve this? What > platform or entity can be used by TC to > > > raise this issue? or any new crazy Idea? > > > > To my knowledge, the vast majority of contributors to OpenStack are > > corporate contributors - meaning, they contribute to the community > > because it's their job. As companies have dropped out, the contributor > > count has diminished. 
Therefore, the obvious solution to the > > contributor dearth would be to recruit new companies that use or sell > > OpenStack. However, as far as I know, Red Hat is the only company > > remaining that still makes money from selling OpenStack as a product. > > So if we're looking for new contributor companies, we would have to > > look to those that use OpenStack, and try to make the case that it > > makes sense for them to get involved in the community. I'm not sure > > what this kind of advocacy would look like, or towards which > > companies, or what kind of companies, it would be directed. Perhaps > > the TC candidates could have suggestions here. And if I've made any > > wrong assumptions, by all means correct me. > > > > > > > > -gmann > > > > > > > > > > > I don't think you are too far off. I used to work in a place where my > job was to help sell Openstack (among other products) and enable the use of > it with customers. > > Customers drive everything vendors do. Things that sell are easy to > use. Customers don't buy the best products, they buy what they can > understand fastest. If customers are asking for a product, it's because > they understand its value. Vendors in turn contribute to projects because > they make money from their investment. > > Now think about the perception and reality of Openstack as a whole. We > have spent the last decade or so writing bleeding edge features. We have > spent very little time on documenting what we do have in layman's terms. > The intended audience of our docs would seem > to me to be other developers. I > hope people don't take that as a jab, it's just the truth. If someone > cannot understand how to use > this amazing technology, it won't sell. If it > doesn't sell, vendors leave, if vendors leave the number of contributors > goes down. > > If we don't start working at making Openstack easier to consume, then > no amount of technical change will make an impactful difference.
> > Ok, this is one of the key things and I 100% agree with your point - "our > docs would seem to me to be other developers". > Does this include 'feature docs & how to use them' or the overall usage of > OpenStack, like "Project X and Y can solve the use case > Z" (what Radosław mentioned in this reply)? > > As you might know, our documentation team is lacking active contributors > and moving towards a SIG, and almost all the documents > are moved to and maintained on the project side. Do you think that is the issue and can > make the current docs worse (with the typical developer nature)? > > Again, the same question here: 'How do we get documentation contributors?' > Can operators help here with their use cases, best practices, etc.? > If so, then how do we convince them to participate and contribute in the > community? > > -gmann > > > > > > -- > > ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too > great. Duty First" > Our docs team is completely overloaded. There is no way I see them having the capacity to come up with something. We need a team of operators who know how operators read, think and talk to put together a consumable document pointed directly at other would-be operators / potential new adopters. We need to target new users who don't know anything about Openstack or in many cases cloud in general. These Operators don't need to know all the switches and knobs they *can* turn in Openstack when they are in the learning phase. The ones that do, already know how it all works anyways and the current doc set works great for them. How do we get contributors to this doc - well I don't really have the answer for that. It's not going to be easy to say the least. Maybe put out a marketing campaign to find Operators who are willing to contribute. It's also not just docs - because anyone who has been in IT for more than 12 seconds knows many people don't / won't read docs. Maybe a simple-to-understand video series would be super helpful to get people off the ground.
I really don't have all the answers, but I do know for sure that building a cloud is hard - so let's make it easier. -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" From gmann at ghanshyammann.com Mon Apr 6 20:23:11 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 15:23:11 -0500 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: <17151294685.e4731801213221.6452506247777620017@ghanshyammann.com> ---- On Mon, 06 Apr 2020 10:17:59 -0500 Artom Lifshitz wrote ---- > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann wrote: > > > > This topic is a very important and critical area to solve in the OpenStack community. > > I personally feel and keep raising this issue wherever I get the opportunity. > > > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > > Without enough developers (either open or closed source), none of the software can survive. > > > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > > project is facing the less contributor issue as compare to requirements and incoming requests. Few > > projects already dead or going to be if we do not solve the less contributors issue now. > > > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > > the way who can solve this. > > I'm not running for TC, but I figured I could chime in with some > thoughts, and maybe get TC candidates to react. > > > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > > raise this issue? or any new crazy Idea?
> > To my knowledge, the vast majority of contributors to OpenStack are > corporate contributors - meaning, they contribute to the community > because it's their job. As companies have dropped out, the contributor > count has diminished. Therefore, the obvious solution to the > contributor dearth would be to recruit new companies that use or sell > OpenStack. However, as far as I know, Red Hat is the only company > remaining that still makes money from selling OpenStack as a product. > So if we're looking for new contributor companies, we would have to > look to those that use OpenStack, and try to make the case that it > makes sense for them to get involved in the community. I'm not sure > what this kind of advocacy would look like, or towards which > companies, or what kind of companies, it would be directed. Perhaps > the TC candidates could have suggestions here. And if I've made any > wrong assumptions, by all means correct me. But there are other companies making money from it, whether as a product, as support, as a base product, etc. Here is the Users vs Contributors table; I hope all users are making money, more or less :) : - https://governance.openstack.org/tc/user_survey/analysis-12-2019.html#to-which-projects-does-your-organization-contribute-maintenance-resources-such-as-patches-for-bug-and-reviews-on-master-or-stable-branches -gmann > > > > > -gmann > > > From fungi at yuggoth.org Mon Apr 6 20:23:47 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 6 Apr 2020 20:23:47 +0000 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> <7f36997aa8cbdd0ac77fa4fdaae6cdb02beb4a6a.camel@redhat.com> Message-ID: <20200406202347.flv5yye7y2qkenty@yuggoth.org> On 2020-04-06 20:22:05 +0200 (+0200), Radosław Piliszek wrote: > Just to clarify - I'm actually in favor of dropping the plain > coinstallability rule.
> > I wrote this before: we have different ways to ensure separation: > virtual envs and OCI images. > People can and do use them. > > I don't think this rule brings any benefits nowadays, very diverse > projects can be used together even if they decide they need different > versions of some internal library - for whatever reason. > What counts are well-defined interfaces - this is the only way to > ensure cofunctionality, other measures are simply workarounds. ;-) [...] Just to be clear, we didn't add rules about coinstallability to make *our* upstream lives easier. We did it so that downstream distros don't have to provide and support multiple versions of various dependencies. What you're in effect recommending is that we stop supporting installation within traditional GNU/Linux distributions, since only package management solutions like Docker or Nix are going to allow us to punt on coinstallability of our software. You're *also* saying that our libraries now have to start supporting users on a broader variety of their own old versions, and also support/test working with a variety of different versions of their own dependencies, or that we also give up on having any abstracted Python libraries and go back to re-embedding slightly different copies of the same code in all OpenStack services. -- Jeremy Stanley From gmann at ghanshyammann.com Mon Apr 6 20:30:52 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 15:30:52 -0500 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue?
In-Reply-To: References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> Message-ID: <17151304eb4.b49c975e213342.1946273553865149529@ghanshyammann.com> ---- On Mon, 06 Apr 2020 14:45:30 -0500 Graham Hayes wrote ---- > On 05/04/2020 02:09, Ghanshyam Mann wrote: > > This topic is a very important and critical area to solve in the OpenStack community. > > I personally feel and keep raising this issue wherever I get the opportunity. > > > > To develop or maintain any software, the very first thing we need is to have enough developer resources. > > Without enough developers (either open or closed source), none of the software can survive. > > > > OpenStack current situation on contributors is not the same as it was few years back. Almost every > > project is facing the less contributor issue as compare to requirements and incoming requests. Few > > projects already dead or going to be if we do not solve the less contributors issue now. > > > > I know, TC is not directly responsible to solve this issue but we should do something or at least find > > the way who can solve this. > > > > What do you think about what role TC can play to solve this? What platform or entity can be used by TC to > > raise this issue? or any new crazy Idea? > > > > > > -gmann > > > > This has been my hobby horse for a while :) > > Honestly, we need to highlight how people *using* OpenStack can > contribute. As Artom correctly noted in a message above, we are pretty > reliant on paid contributors, and with some of the traditional vendors > pulling back we should be diversifying our reach to people who have > traditionally felt it was too difficult to contribute. > > Also - (and this is Graham with his personal hat on, not his TC hat) > I think that any company that is a high level sponsor of the foundation, > and derives value from OpenStack should be pushing resources back > upstream, and not just in sponsorship money. 
> > Some of the smaller foundation members donate not just CI resources > to opendev, but also developers, and upstream contributions. This > should be the done thing across all members. > > As part of this, we (the TC) should be able to say to these companies > where the most value would be for contribution, which is something I > think we have gotten a lot better at. Yeah, I think this is the answer (one that can practically solve this) I was looking for :). Making users contribute resources as one of the mandatory things could solve this for sure. I do not know what the side effects of that would be, and I also do not know why it could not be implemented. Along with sponsorship, we could ask companies using the OpenStack interop certification/logo to do the same. I am not sure the TC can reach out to companies' executive layer to ask for resources, but I think as the TC we can put this requirement to the BoD or the foundation as one of the things to consider seriously. -gmann > > From jeremyfreudberg at gmail.com Mon Apr 6 20:44:37 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 6 Apr 2020 16:44:37 -0400 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <17151275ae9.10c76aba7213164.7712086881512900173@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> <17151275ae9.10c76aba7213164.7712086881512900173@ghanshyammann.com> Message-ID: Thanks Ghanshyam - very good point! I guess that is the role of outreach, e.g. events/meetups to bring together users and devs. Otherwise we can only lead by example, promoting stories of users who are more successful due to their active upstream participation (vs passive use). On Mon, Apr 6, 2020 at 4:21 PM Ghanshyam Mann wrote: > [...] > I think all those points are valid but whats about existing users who are > using them. How we can convince them to contribute back? > Jay did a good job putting the clear pic on users vs contributors. > [...]
From cboylan at sapwetik.org Mon Apr 6 20:57:48 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 06 Apr 2020 13:57:48 -0700 Subject: New OpenDev Communication Channels In-Reply-To: <8b6fb08caad67c5a22f209703b2d751eda38798c.camel@redhat.com> References: <69a3980b-9e07-4b20-a45c-952c006c69c9@www.fastmail.com> <8b6fb08caad67c5a22f209703b2d751eda38798c.camel@redhat.com> Message-ID: <5fae6e04-ce47-4066-9fbf-d22dab050e7d@www.fastmail.com> On Mon, Apr 6, 2020, at 11:12 AM, Sean Mooney wrote: > On Mon, 2020-04-06 at 10:49 -0700, Clark Boylan wrote: > > Hello All, > > > > Recently, we've transitioned to using #opendev on Freenode for our synchronous IRC communications. Since then we've > > spun up a new service-discuss at lists.opendev.org mailing list [1] where we'll plan changes to services, notify of > > meetings, answer questions about services and usage, and otherwise communicate about OpenDev. > > service-announce at lists.opendev.org [2] will remain for important announcements. If you're interested in the developer > > infrastructure we run please join us on these mailing lists. > just so I understand, these are lists for the common infra services > provided by opendev and not for individual project > discussion. > > so gate failures/planned upgrades but not nova or kata containers topics, right? Correct. Discussion would be related to the opendev services themselves and project consumption of that. Gate failures, gerrit feature enhancements, service outages and upgrades, etc are all fair game. Project specific work like Nova's next set of features, or the release schedule, etc should remain in openstack-discuss as before. > > > > We encourage every developer to at least subscribe to the service-announce[2] list. It will be very low traffic, and > > used only for important announcements with wide impact. > > > > We are also going to start having our weekly team meeting in #opendev-meeting. The time will continue to be 19:00 UTC > > Tuesdays.
> > > > See you there, > > Clark > > > > [1] http://lists.opendev.org/cgi-bin/mailman/listinfo/service-discuss > > [2] http://lists.opendev.org/cgi-bin/mailman/listinfo/service-announce > > > > From fungi at yuggoth.org Mon Apr 6 21:02:55 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 6 Apr 2020 21:02:55 +0000 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <1715116e36a.12332ed93212825.320970824006575198@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> <1715116e36a.12332ed93212825.320970824006575198@ghanshyammann.com> Message-ID: <20200406210254.xbxjooyra762ppuf@yuggoth.org> On 2020-04-06 15:03:06 -0500 (-0500), Ghanshyam Mann wrote: [...] > As you might know, our documentation team is lacking active > contributors and moving towards SIG [...] For a moment I thought I was watching a rerun. The Documentation team was officially disbanded by https://review.opendev.org/691277 roughly 5 months ago (you even voted on that change) in favor of a Technical Writing SIG created a month before with https://review.opendev.org/691277 . This has already happened. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Mon Apr 6 21:06:43 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 6 Apr 2020 16:06:43 -0500 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: At the risk of getting defensive, I do want to make some points relating to Oslo specifically here. TLDR at the bottom if your eyes glaze over at the wall of text. 
On 4/6/20 3:10 AM, Dmitry Tantsur wrote: > With absolutely no disrespect meant to the awesome Oslo team, I think > the existence of Oslo libraries is a bad sign. I think as a strong FOSS > community we shouldn't invest into libraries that are either useful only > to us or at least are marketed this way. For example: I think it's relevant to keep in mind that Oslo didn't start out as a collection of libraries, it started out as a bunch of code forklifted from Nova for use by other OpenStack services. It was a significant effort just to get the Nova-isms out of it, much less trying to sanitize everything OpenStack-specific. I also don't think it's fair to characterize Oslo as only focused on OpenStack today. In reality, many of our new libraries since the initial incubator split have been general purpose as often as not. Where they haven't, it's things like oslo.limit that are explicitly dependent on an OpenStack service (Keystone in this case). We believe in being good OSS citizens as much as anyone else. There's also a boil the ocean problem with trying to generalize every solution to every problem. It's a question we ask every time a new library is proposed, but in some cases it just doesn't make sense to write a library for an audience that may or may not exist. And in some cases when such an audience appears, we have refactored libraries to make them more general purpose, while often keeping an OpenStack-specific layer to ease integration with OpenStack services. See oslo.concurrency and fasteners. In fact, that kind of split is a pretty common pattern, even in cases where the underlying library didn't originate in Oslo/OpenStack. Think sqlalchemy/oslo.db, dogpile/oslo.cache, kombu/oslo.messaging. A lot of Oslo is glue code to make it easier to use something in a common way across OpenStack services. > 1) oslo.config is a fantastic piece of software that the whole python > world could benefit from. Same for oslo.service, probably. 
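[Editorial aside: the "glue layer" pattern described above is easy to sketch. The following toy example uses only the standard library and hypothetical names — it is not actual oslo.db code — but shows the shape: services call one common entry point instead of importing the wrapped backend directly.]

```python
import sqlite3

# Toy analogue of the oslo.db / oslo.cache "glue" pattern (hypothetical
# names, not real oslo code): hide a third-party or stdlib backend behind
# the single entry point every service calls, so option names, defaults,
# and connection handling stay consistent across projects.

def create_engine(conf: dict) -> sqlite3.Connection:
    """Build a configured connection from a common options dict."""
    conn = sqlite3.connect(conf.get("connection", ":memory:"))
    # Cross-project defaults get applied in one place, not per service.
    conn.execute("PRAGMA foreign_keys = ON")
    return conn

def service_startup(conf: dict) -> sqlite3.Connection:
    """A service only ever sees the glue API, never sqlite3 directly."""
    engine = create_engine(conf)
    engine.execute("CREATE TABLE IF NOT EXISTS instances (id TEXT)")
    return engine
```

Swapping the underlying library (sqlalchemy, dogpile, kombu in the real cases) then only touches the glue module, not every consuming service.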
oslo.config is an interesting case because there is definitely interest outside OpenStack thanks to things like the Castellan config backend, but as Doug mentioned oslo.config is heavily opinionated (global config object, anyone?) and that's an issue for a lot of people. I will also point out that the only oslo.* dependency that oslo.config has is oslo.i18n, which itself has no other oslo dependencies, so there's not much barrier to entry to using it standalone today. I would not inflict oslo.service on anyone I liked. :-P Seriously though, I would advocate for using cotyledon if you're looking for a general purpose service library. It's not eventlet-based and provides a lot of the functionality of oslo.service, at least as I understand it. It was also written by an ex-Oslo contributor outside of OpenStack so maybe it's an interesting case study for this discussion. I don't know how much contribution it gets from other sources, but that would be good to look into. > 2) oslo.utils as a catch-all repository of utilities should IMO be > either moved to existing python projects or decomposed into small > generally reusable libraries (essentially, each sub-module could be a > library of its own). Same for oslo.concurrency. oslo.concurrency has already been decomposed into general purpose code (fasteners) and OpenStack-specific, at least for the most part. I'm sure the split isn't perfect, but it's not like we haven't recognized the need for better concurrency libraries in Python as a whole. Note that fasteners also lives outside Oslo in another ex-Osloers personal repo. Again, I'm not sure whether that was beneficial or not but it might be worth reaching out to Mehdi and Josh to see how they feel about it. I would probably -2 any attempt to split oslo.utils. You'd end up with a bunch of single file libraries and a metric ****-ton of administrative overhead. It's bad enough managing 40 some repos as it is. 
Also, that's another library with minimal cross-dependencies with the rest of Oslo (just oslo.i18n again), which was an intentional design decision to make it relatively painless to pull in. > 3) I'm genuinely not sure why oslo.log and oslo.i18n exist and which > parts of their functionality cannot be moved upstream. oslo.log basically provides convenience functions for OpenStack services, which is kind of what the oslo.* libraries should do. It provides built-in support for things like context objects, which are fairly domain-specific and would be difficult to generalize. It also provides a common set of configuration options so each project doesn't have to write their own. We don't need 20 different ways to enable debug logging. :-) oslo.i18n is mostly for lazy translation, which apparently is a thing people still use even though the company pushing for it in the first place no longer cares. We've also had calls to remove it completely because it's finicky and tends to break things if the consuming projects don't follow the best practices with translated strings. So it's in a weird limbo state. I did talk with JP in Shanghai about possibly making it optional because it pulls in a fair amount of translation files which can bloat minimal containers. I looked into it briefly and I think it would be possible, although I don't remember a lot of details because nobody was really pushing for it anymore so it's hard to justify spending a bunch of time on it. TLDR: I think Oslo provides value both in and out of OpenStack (shocking, I know!). I'm sure the separation isn't perfect, but what is? There are some projects that have been split out of it that might be interesting case studies for this discussion if anyone wants to follow up. Not it. 
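[Editorial aside: for a concrete feel of the kind of general-purpose primitive that was split out into fasteners, here is a minimal advisory interprocess file lock using only the standard library. This is hypothetical illustrative code in the spirit of fasteners' InterProcessLock, not its actual implementation, and it is Unix-only since it relies on fcntl.]

```python
import fcntl
import os
from contextlib import contextmanager

@contextmanager
def interprocess_lock(path):
    """Advisory file lock shared between processes on the same host.

    Minimal sketch only: no timeout, no reentrancy, Unix-only.
    """
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until the lock is free
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```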
;-) -Ben From fungi at yuggoth.org Mon Apr 6 21:18:28 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 6 Apr 2020 21:18:28 +0000 Subject: [all] Please remove all external resources from docs In-Reply-To: <86292630-29e6-3584-8649-970b0c71aa3b@debian.org> References: <86292630-29e6-3584-8649-970b0c71aa3b@debian.org> Message-ID: <20200406211828.auwmvebaykgbkfek@yuggoth.org> On 2020-04-06 17:12:30 +0200 (+0200), Thomas Goirand wrote: > I've wrote about this earlier I guess, but I believe I need to do it > once more. Often I see in the docs things like this: > > .. image:: https://governance.openstack.org/tc/badges/.svg > :target: https://governance.openstack.org/tc/reference/tags/index.html [...] > The solution is simple: have the resource being *LOCAL* (ie: stored in > the project's doc), not stored on an external site. [...] I'm not a fan of this "feature" myself, but feel compelled to point out that it's intentionally dynamic content dependent on metadata from another repository which is intended to change at different times than the document itself changes, so including a copy of that file would make little sense. It's entirely there so that projects can have the same feel-good "badges" which they see displayed in the README files of other projects on GitHub. It exists purely so that real-time RST-to-HTML renderers will display the current state of a number of bits of governance metadata, so stripping those out of packaged documentation builds is the right thing to do. Maybe we could make it easier to have our documentation builds strip that content automatically so that distro package maintainers don't need to? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From knikolla at bu.edu Mon Apr 6 21:50:01 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Mon, 6 Apr 2020 21:50:01 +0000 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <17151230331.117e031c3213071.2369283601600121385@ghanshyammann.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> <753836DB-A994-485A-8640-43FD52158CA8@bu.edu> <17151230331.117e031c3213071.2369283601600121385@ghanshyammann.com> Message-ID: <53618757-0AEB-40FB-9270-97400C832E74@bu.edu> > On Apr 6, 2020, at 4:16 PM, Ghanshyam Mann wrote: > > ---- On Mon, 06 Apr 2020 11:23:10 -0500 Nikolla, Kristi wrote ---- >> Hi Ghanshyam, >> >> Unfortunately, OpenStack is still for the most part corporate in terms of developer resources. It sort of makes sense, it's a cloud platform, and you need a certain scale to justify the costs for learning, adopting and operating. I probably wouldn't be contributing now if my first introduction to OpenStack wasn't as part of my job operating and developing a cloud. >> >> I don't see a clear path to solve that, but I see a few potential ways to help. >> >> 1. Advertising and marketing the viability of specific OpenStack projects as standalone tools. I can see value for someone needing a volume service, and if Cinder: a) fits the requirements b) is easy to deploy and learn (eg., well documented standalone use case and tested) c) brings a minimum set of cruft with it. This might encourage more people to use it and encourage wider adoption of the other OpenStack projects if their experience is a good one, with OpenStack becoming a trusted toolbox. > > +1 on this. If someone seeing OpenStack as all 52 projects together then they will just go way from it. ONAP was one of good example to learn from it in term of modularity on use cases side. > >> >> 2. 
Making sure we invest more time and effort on documentation. Especially with regards to information on getting started, best practices in terms of architecture, configuration and deployment, and of course contributors guides. We're already a very friendly and welcoming community. > > Contributors guide has been improved a lot and Upstream training, mentorship program, FC SIG doing the continuous effort for many years to get new people > onboard but that only not solving this issue. Do you think we still lack on helping new contributors onboard or something more interesting idea to attract them? With regards to non-corporate: You can't really contribute to a project effectively (or even feel the motivation to) if you're not using it or integrating with it in some form. All the documentation in the world is not going to help with that. That's why I think the other 2 points that I mentioned are important. With regards to corporate developer resources: I guess more outreach to sponsoring companies and being more persuasive. But, I'm not at all versed in the business-y side of things, so I defer to other folks on that one. > >> >> 3. Investigating and working on integrating OpenStack much more closely with other cloud tools. We're great for IaaS, but clouds today are not only IaaS and we need to evolve and play nice with everything else that someone might encounter in a datacenter. Mohammed brings a great point about integrating with Kubernetes. All these integrations need to be well documented, including best practices, and part of our testing infrastructure. >> >> To summarize, I would like to see OpenStack scale better. From homelabbers or small businesses who only need a few services, to large datacenters who may be running multiple OpenStacks, multiple Kuberneteses/OpenShifts, monitoring tools, billing systems. This may result in an increase in adoption, which in turn, should result in an increase in contributions. 
>> >> I can see the above becoming community goals and the TC doing outreach to document the process and help out. >> >> >>> On Apr 4, 2020, at 9:09 PM, Ghanshyam Mann wrote: >>> >>> This topic is a very important and critical area to solve in the OpenStack community. >>> I personally feel and keep raising this issue wherever I get the opportunity. >>> >>> To develop or maintain any software, the very first thing we need is to have enough developer resources. >>> Without enough developers (either open or closed source), none of the software can survive. >>> >>> OpenStack current situation on contributors is not the same as it was few years back. Almost every >>> project is facing the less contributor issue as compare to requirements and incoming requests. Few >>> projects already dead or going to be if we do not solve the less contributors issue now. >>> >>> I know, TC is not directly responsible to solve this issue but we should do something or at least find >>> the way who can solve this. >>> >>> What do you think about what role TC can play to solve this? What platform or entity can be used by TC to >>> raise this issue? or any new crazy Idea? >>> >>> >>> -gmann From johnsomor at gmail.com Mon Apr 6 22:17:37 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 6 Apr 2020 15:17:37 -0700 Subject: [doc][release][requirements] Issues with wsmeext.sphinxext In-Reply-To: References: Message-ID: This was fixed about a year ago in https://opendev.org/x/wsme/commit/2be89e587c057ee97d1b143de1a54ceeea22aa93 We are probably just missing a release for wsme. I'm not sure how we do an 'x' project release. I will look into it. Michael On Mon, Apr 6, 2020 at 1:22 PM Pierre Riteau wrote: > > Hello, > > A heads up for other projects using wsmeext.sphinxext: it seems to be > broken following the release of Sphinx 3.0.0 yesterday. 
> Our openstack-tox-docs job in blazar started to fail with: > > Extension error: > Could not import extension wsmeext.sphinxext (exception: cannot import > name 'l_' from 'sphinx.locale' > > Looks like it would affect aodh and cloudkitty too. > > Pierre Riteau (priteau) > From knikolla at bu.edu Mon Apr 6 22:25:50 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Mon, 6 Apr 2020 22:25:50 +0000 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> <8FB6369D-925B-4617-B009-E765948100E4@bu.edu> Message-ID: > On Apr 6, 2020, at 3:45 AM, Jean-Philippe Evrard wrote: > > On Sun, 2020-04-05 at 20:27 +0000, Nikolla, Kristi wrote: >> >> The first, is making sure that OpenStack is approachable to new >> contributors. This includes more documentation related community >> goals, which I consider especially important when a lot of people >> that have been here forever start to move on to new ventures, taking >> with them a lot of tribal and undocumented knowledge. We have to >> start being able to do more with less. > > I think this maps very well with Kendall's efforts :) > What do you particularily have in mind? I do not have an answer to this just yet and I need to think about it longer. The last time we did something radical on docs it broke all links. > >> >> The second, is investigating avenues for better integration with >> other open source communities and projects. It's very likely that >> OpenStack is only one of the many tools in an operators toolbox (eg., >> we're also running OpenShift, and our own bare metal provisioner.) >> Once we have acknowledged those integration points, we can start >> documenting, testing and developing with those in mind as well. The >> end result will be that deployers can pick OpenStack, or a specific >> set of OpenStack components and know that what they are trying to do >> is possible, is tested and is reproducible. 
> > Does that mean you want to introduce a sample configuration to deploy > OpenShift on top of OpenStack and do conformance testing, in our jobs? > Or did I get that wrong? Please note the CI topic maps with other > candidates' efforts, so I already see future collaboration happening. It doesn't have to be limited to OpenShift/k8s, but yes. That would ensure that things work correctly and that we have an actual stake in fixing them if they break. Furthermore, once the testing template is available, projects can start including it in their project gates and start developing features with the broader datacenter in mind as well. See Mohammed's response about actually making Kubernetes a base service. > > I am glad to have asked my questions :) > > Regards, > JP > > > From gmann at ghanshyammann.com Mon Apr 6 23:02:46 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 18:02:46 -0500 Subject: [qa] Proposing Martin Kopec to Tempest core In-Reply-To: References: <17123f4f761.126cbb10d108731.5702022165643836936@ghanshyammann.com> <262984f8419a4fe0b6e5babd6eba0e28@AUSX13MPC102.AMER.DELL.COM> Message-ID: <17151bb61b9.115a1a13e215291.2445438792299088136@ghanshyammann.com> With all in favour, I have added Martin to the core list. Welcome, Martin, to the Core team. -gmann ---- On Tue, 31 Mar 2020 08:43:00 -0500 Archit Modi wrote ---- > +1 thanks Martin for all the hard work!! > > On Mon, Mar 30, 2020 at 12:39 PM Paras Babbar wrote: > +1 > > On Mon, Mar 30, 2020 at 12:31 PM wrote: > > > > +1 > > > > -----Original Message----- > > From: Ghanshyam Mann > > Sent: Saturday, March 28, 2020 9:43 PM > > To: openstack-discuss > > Subject: [qa] Proposing Martin Kopec to Tempest core > > > > > > [EXTERNAL EMAIL] > > > > Hello Everyone, > > > > Martin Kopec (IRC: kopecmartin) has been doing great contribution in Tempest since a long time. > > He has been doing bugs Triage also in Ussuri cycle and has good understanding of Tempest.
> > > > I would like to propose him for Tempest Core. You can vote/feedback on this email. If no objection by the end of this week, I will add him to the list. > > > > -gmann > > > > > -- > > Paras Babbar > > Quality Engineer, OpenStack > > Red Hat | Westford, MA| M:+1 857-222-7309 | PBabbar at redhat.com > > > From fungi at yuggoth.org Mon Apr 6 23:09:34 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 6 Apr 2020 23:09:34 +0000 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <53618757-0AEB-40FB-9270-97400C832E74@bu.edu> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> <753836DB-A994-485A-8640-43FD52158CA8@bu.edu> <17151230331.117e031c3213071.2369283601600121385@ghanshyammann.com> <53618757-0AEB-40FB-9270-97400C832E74@bu.edu> Message-ID: <20200406230934.7mgmnbjn5demjbgw@yuggoth.org> On 2020-04-06 21:50:01 +0000 (+0000), Nikolla, Kristi wrote: [...] > With regards to non-corporate: You can't really contribute to a > project effectively (or even feel the motivation to) if you're not > using it or integrating with it in some form. All the > documentation in the world is not going to help with that. [...] Yes, I don't see this as all that different from other open source projects, actually. Some users of your software will contribute to it when they see things which need fixing, changing or implementing. The confusion which seems to arise is that in the case of software like OpenStack, its primary users are businesses and other medium-to-large organizations, not individuals and hobbyists. And in fact some of our ancillary subprojects which are small utilities used outside these environments see contributions from more diverse sets of users. So if we want contributions from different places, what we should be asking is how do we change who uses the software. 
> With regards to corporate developer resources: I guess more > outreach to sponsoring companies and being more persuasive. But, > I'm not at all versed in the business-y side of things, so I defer > to other folks on that one. [...] After getting feedback from the OSF BoD, hope was that the Upstream Investment Opportunities documents would help fill that gap: https://governance.openstack.org/tc/reference/upstream-investment-opportunities/ -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From moreira.belmiro.email.lists at gmail.com Mon Apr 6 23:28:01 2020 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Tue, 7 Apr 2020 01:28:01 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: Hi Jean-Philippe, all, happy to see that your questions already created a passionate discussion, but I also would like to give my contribution. My background and contributions have always been around deploying and operating a large cloud infrastructure. As I mentioned in my nomination I have been involved in the OpenStack community for a long time. I lived the crazy days of "inflated expectations" and I still stick around in the "plateau of productivity". To answer your questions I would like to focus in 2 points that in my opinion are fundamental for the future success of OpenStack. 1) Operators; 2) Projects Consolidation; 1) Operators are the OpenStack users. Ultimately, they define the success of any OpenStack project because they select what to deploy in their clouds. And what's deployed is based of course in the requirements but also in the simplicity and project health. 
In my opinion the TC and the community in general should focus on its users' (Operators') feedback, making sure that OpenStack is integrated and easy to deploy and maintain over time. Also, make sure that their requirements/pain points are the priorities during the development cycles. Of course, much has been done over the years (better docs, deployment tools, all the upgrades discussions, ...) but I still think this should be the focus of all the development/integration direction. And looking into the number of OpenStack projects, this brings us to my second point, Project Consolidation. 2) As an Operator, I need to evaluate, deploy and maintain the OpenStack projects to meet my organization's requirements. I think we all agree that the number of OpenStack projects is overwhelming! For any new organization that selects OpenStack to deploy their Cloud, navigating through all the projects is extremely challenging. What's worse is that some have very little activity and actually were never seriously used in a production environment. This can create a lot of confusion and wrong expectations. Of course I know about the project navigator and in the past the project tags/maturity. In fact I'm not advocating more of that. Over the years we insisted on splitting projects. For example, as an Operator I still don't understand the value of having "Placement" as a separate project. Of course we can argue the architecture pros/cons, that other projects may use it, but in the end it only adds friction for the users (Operators) deploying and maintaining their OpenStack Cloud. This is only one example. Also, we see more and more projects without a PTL volunteer. This doesn't create the required trust in those projects for anyone who is looking into OpenStack to deploy their Cloud. In my humble opinion the TC and the community in general need to re-evaluate the value of each OpenStack project and consolidate or "retire" what is needed.
If there's a strong dependency or the scope also matches a different project, maybe consolidate. If the user survey shows that no one is using a project and its health is questionable, we need to find another solution. The goal should be to have a clear set of projects whose scope our users (Operators) can understand and which they have the trust/confidence to deploy. cheers, Belmiro On Thu, Apr 2, 2020 at 12:33 PM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Hello, > > I read your nominations, and as usual I will ask what do you > _technically_ will do during your mandate, what do you _actively_ want > to change in OpenStack? > > This can be a change in governance, in the projects, in the current > structure... it can be really anything. I am just hoping to see > practical OpenStack-wide changes here. It doesn't need to be a fully > refined idea, but something that can be worked upon. > > Thanks for your time. > > Regards, > Jean-Philippe Evrard > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moreira.belmiro.email.lists at gmail.com Mon Apr 6 23:36:17 2020 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Tue, 7 Apr 2020 01:36:17 +0200 Subject: [tc][election] campaign discussion: how TC can solve the less contributor issue? In-Reply-To: <760bc17d-f6a3-285c-5040-d4b25020f7c2@gmail.com> References: <17147e25870.d2fe327b159195.7163543139561294972@ghanshyammann.com> <760bc17d-f6a3-285c-5040-d4b25020f7c2@gmail.com> Message-ID: +1 to Melanie's comments. I feel exactly the same. And this is especially hard for Operators. Belmiro On Mon, Apr 6, 2020 at 9:57 PM melanie witt wrote: > On 4/6/20 08:36, Donny Davis wrote: > > On Mon, Apr 6, 2020 at 11:22 AM Artom Lifshitz > > wrote: > > > > On Sat, Apr 4, 2020 at 9:12 PM Ghanshyam Mann > > > wrote: > > > > > > This topic is a very important and critical area to solve in the > > OpenStack community.
> > > I personally feel and keep raising this issue wherever I get the > > opportunity. > > > > > > To develop or maintain any software, the very first thing we need > > is to have enough developer resources. > > > Without enough developers (either open or closed source), none of > > the software can survive. > > > > > > OpenStack current situation on contributors is not the same as it > > was few years back. Almost every > > > project is facing the less contributor issue as compare to > > requirements and incoming requests. Few > > > projects already dead or going to be if we do not solve the less > > contributors issue now. > > > > > > I know, TC is not directly responsible to solve this issue but we > > should do something or at least find > > > the way who can solve this. > > > > I'm not running for TC, but I figured I could chime in with some > > thoughts, and maybe get TC candidates to react. > > > > > What do you think about what role TC can play to solve this? What > > platform or entity can be used by TC to > > > raise this issue? or any new crazy Idea? > > > > To my knowledge, the vast majority of contributors to OpenStack are > > corporate contributors - meaning, they contribute to the community > > because it's their job. As companies have dropped out, the > contributor > > count has diminished. Therefore, the obvious solution to the > > contributor dearth would be to recruit new companies that use or sell > > OpenStack. However, as far as I know, Red Hat is the only company > > remaining that still makes money from selling OpenStack as a product. > > So if we're looking for new contributor companies, we would have to > > look to those that use OpenStack, and try to make the case that it > > makes sense for them to get involved in the community. I'm not sure > > what this kind of advocacy would look like, or towards which > > companies, or what kind of companies, it would be directed. Perhaps > > the TC candidates could have suggestions here. 
And if I've made any > > wrong assumptions, by all means correct me. > > > > I don't think you are too far off. I used to work in a place where my > > job was to help sell Openstack (among other products) and > > enable the use of it with customers. > > > > Customers drive everything vendors do. Things that sell are easy to use. > > Customers don't buy the best products, they buy what they > > can understand fastest. If customers are asking for a product, it's > > because they understand its value. Vendors in turn contribute > > to projects because they make money from their investment. > > > > Now think about the perception and reality of Openstack as a whole. We > > have spent the last decade or so writing bleeding edge features. > > We have spent very little time on documenting what we do have in > > layman's terms. The intended audience of our docs would seem > > to me to be other developers. I hope people don't take that as a jab, > > it's just the truth. If someone cannot understand how to use > > this amazing technology, it won't sell. If it doesn't sell, vendors > > leave, if vendors leave the number of contributors goes down. > > > > If we don't start working at making Openstack easier to consume, then no > > amount of technical change will make an impactful difference. > > I'm not running for the TC either but wanted say Donny's reply here > resonates with me. When I first started working on OpenStack, I was at > Yahoo (now Verizon Media), a company who consumes OpenStack and depends > on it for a (now) large portion of their infrastructure. > > At the time I joined the OpenStack community in 2012, the docs about > contributing and the docs about each component were dead simple. I was > up and running in under a day and started my first contributions > upstream shortly after. > > Fast forward to now, I find the docs are hard to read and navigate. > There's not much layman's terms. 
And most of all, at least in Nova, is > that the docs are in dire need of being organized. They used to be > simple but when docs moved in-tree things were hastily cobbled together > because as you mentioned, we're always already stretched trying to > deliver bleeding edge features. > > And, there are also differences in opinion about how docs should be > organized and how verbose they are. I have seen docs evolve from simple > to complicated because for example: someone thought they were making an > improvement, whereas I might think they were making the docs less > usable. I'm not aware that there is any guideline or reference > documentation that is to be used as a design goal. Such as, "this is > what your landing page should look like", "here's how docs should be > organized", "you should have these sections", etc. > > Sometimes I have thought about proposing a bunch of changes to how our > docs are organized. But, full disclosure, I worry that if I do that and > if it gets accepted/merged, someone else will completely change all of > it later and then all the organization and work I did goes out the > window. And I think this worry highlights the fact that there is no > "right way" of doing the docs. It's just opinion and everyone has a > different opinion. > > I'm not sure whether that's solvable. I mentioned a guideline or design > goal to aspire to, but at the same time, we don't want to be so rigid > that projects can't do docs the way they want. So then what? Per project > design goals and guidelines I guess? Or is that too much process? I have > wondered how other communities have managed success in the docs department. > > So, back to the contributors point. I was lucky because by the time docs > got hard to consume, I already knew the ropes. I don't know how hard it > has been for newer contributors to join since then and how much of the > difficulty is related to docs. 
> > I'm not sure I've said anything useful, so apologies for derailing the > discussion if I've done that. > > -melanie > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Apr 7 00:01:19 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 7 Apr 2020 00:01:19 +0000 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: <20200407000118.em6udmjxaj3m3d4m@yuggoth.org> On 2020-04-07 01:28:01 +0200 (+0200), Belmiro Moreira wrote: [...] > Over the years we insisted to split projects. For example, as an > Operator I still don't understand the value of having "Placement" > as a separate project. Of course we can argue the architecture > pros/cons, that other projects may use it, but at the end it only > adds friction to the users (Operators) to deploy and maintain > their OpenStack Cloud. This is only one example. [...] I'm curious, do you then consider microservice architectures inherently flawed? Are you advocating for monolithic applications which can't be decomposed and scaled by moving different functions to different servers? Or do you see "consolidation" of the software as something else? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From melwittt at gmail.com Tue Apr 7 00:16:03 2020 From: melwittt at gmail.com (melanie witt) Date: Mon, 6 Apr 2020 17:16:03 -0700 Subject: [nova][gate] status of some gate bugs In-Reply-To: <6119b4f1-a235-b1b5-93e5-eb5d23e2e0fa@gmail.com> References: <6119b4f1-a235-b1b5-93e5-eb5d23e2e0fa@gmail.com> Message-ID: <4da09470-b423-9be5-34a8-c52c311e2904@gmail.com> Hey all, I have a few updates since last time. This morning we noticed http://status.openstack.org/elastic-recheck was behind on elastic search indexing by 90+ hours. 
I mentioned it in #openstack-infra and clarkb checked and saw that a worker was stuck on 226 MB of nova-api logs. I checked and found it was due to default policy deprecation messages being logged 10K+ times per run. I was able to get help from gmann on the best way to stop the bleeding and he proposed a patch that we've since merged: https://review.opendev.org/717802 After this merged, clarkb cleared the elastic search indexing backlog as it would never be able to catch up. gmann is working on some oslo.policy changes to better handle policy deprecation logging that we hope to be able to consume soon. Worst case, we'll leave the policy deprecation logging disabled for now, as the nova code is OR'ing new policy defaults with old policy defaults and will continue to do so for 1-2 cycles. So we have time to sort out the logging issue in the event that we can't get a solution via FFE in oslo.policy. On 3/18/20 22:50, melanie witt wrote: > * http://status.openstack.org/elastic-recheck/#1813789 > > This one is where the nova-live-migration job fails a server evacuate > test with: "Timeout waiting for [('network-vif-plugged', > 'e3d3db3f-bce4-4889-b161-4b73648f79be')] for instance with vm_state > error and task_state rebuild_spawning.: eventlet.timeout.Timeout: 300 > seconds" in the screen-n-cpu.txt log. > > lyarwood has a WIP patch here: > > https://review.opendev.org/713674 This patch has merged since last time and the fail rate is down to 1 fail in the last 10 days (note this is likely off a bit because of the indexing snafu from earlier). > and sean-k-mooney has a WIP patch here: > > https://review.opendev.org/713342 But, it is not completely gone yet (I just had a patch fail on it a few min ago), so we'll need to work further on this patch (relevant IRC convo is linked in a review comment). > * https://launchpad.net/bugs/1867380 > > This one is where the nova-live-migration or nova-grenade-multinode job > fails due to n-cpu restarting slowly after being reconfigured for ceph.
> The server will fail to build and it's because the test begins before > nova-compute has fully come up and we see this error: "Instance spawn > was interrupted before instance_claim, setting instance to ERROR state > {{(pid=3783) _error_out_instances_whose_build_was_interrupted" in the > screen-n-cpu.txt log. > > lyarwood has a patch approved here that we've been rechecking the heck > out of that has yet to merge: > > https://review.opendev.org/713035 This patch has merged and I noticed a big improvement in fail rate in the gate since then (we did not have a e-r fingerprint for this one -- I'll try to add one so we can see when/if it crops back up). > * https://launchpad.net/bugs/1844568 > > This one is where a job fails with: "Body: b'{"conflictingRequest": > {"code": 409, "message": "Multiple possible networks found, use a > Network ID to be more specific."}}'" > > gmann has a patch proposed to fix some of these here: > > https://review.opendev.org/711049 This patch also merged and its fail rate is down to 5 fails in the last 10 days. > There might be more test classes that need create_default_network = True. One of my patches hit another instance of this bug and I've proposed a patch for that one specifically: https://review.opendev.org/716809 > * http://status.openstack.org/elastic-recheck/#1844929 > > This one is where a job fails and the following error is seen one of the > logs, usually screen-n-sch.txt: "Timed out waiting for response from > cell 8acfb79b-2e40-4e1c-bc3d-d404dac6db90". > > The TL;DR on this one is there's no immediate clue why it's happening. > This bug used to hit more occasionally on "slow" nodes like nodes from > the OVH or INAP providers (and OVH restricts disk iops [1]). Now, it > seems like it's hitting much more often (still mostly on OVH nodes). 
> > I've been looking at it for about a week now and I've been using a DNM > patch to add debug logging, look at dstat --disk-wait output, try mysqld > my.cnf settings, etc: > > https://review.opendev.org/701478 > > So far, what I find is that when we get into the fail state, we get no > rows back from the database server when we query for nova 'services' and > 'compute_nodes' records, and we fail with the "Timed out waiting for > response" error. > > Haven't figured out why yet, so far. The disk wait doesn't look high > when this happens (or at any time during a run) so it's not seeming like > it's related to disk IO. I'm continuing to look into it. I think I finally got to the bottom of this one and found that during the grenade runs after we restart nova-scheduler, while it's coming back up, requests are flowing in and the parent process is holding an oslo.db lock when the child process workers are forked. So, the child processes inherit the held locks and database access through those oslo.db objects can never succeed. This results in the CellTimeout error -- it behaves as though the database never sends us results, but what really happened was our database requests were never made because they were stuck outside the inherited held locks [2]. Inheriting of held standard library locks at fork is a known issue in python [3] that is currently being worked on [4]. I think we can handle this in nova though in the meantime by clearing our cell cache that holds oslo.db database transaction context manager objects during service start(). This way, we get fresh oslo.db objects with unlocked locks when child processes start up. I have proposed a patch for this here: https://review.opendev.org/717662 and have been rechecking it a bunch of times to get more samples. So far, the grenade-py3 and nova-grenade-multinode jobs have both passed 7 runs in a row on the change. 
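The fork-inherits-held-locks behaviour described above is easy to demonstrate with the Python standard library alone. The following is a minimal sketch, not nova or oslo.db code (the helper name is mine), and it is POSIX-only because it uses os.fork():

```python
import os
import threading

def child_sees_held_lock():
    """Fork while a lock is held and report whether the child inherits it
    in the locked state -- the same failure mode as the oslo.db locks."""
    lock = threading.Lock()
    lock.acquire()          # parent holds the lock at fork time
    pid = os.fork()
    if pid == 0:
        # Child: the lock's state was copied at fork, so a blocking
        # acquire() here would hang forever. Report via the exit code.
        os._exit(1 if lock.locked() else 0)
    _, status = os.waitpid(pid, 0)
    lock.release()          # the parent can still release its copy
    return os.WEXITSTATUS(status) == 1

print(child_sees_held_lock())  # True: the child starts with the lock already held
```

Clearing and recreating such objects after fork (as the proposed nova patch does for the cell cache) gives the workers fresh locks in the unlocked state.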
Cheers, -melanie [2] https://bugs.launchpad.net/nova/+bug/1844929/comments/28 [3] https://bugs.python.org/issue40089 [4] https://github.com/python/cpython/pull/19195 > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010505.html > From gmann at ghanshyammann.com Tue Apr 7 00:46:53 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 19:46:53 -0500 Subject: [release][oslo] FFE request for Oslo Policy Message-ID: <171521ab0f6.aebddfad215946.4625836105559159173@ghanshyammann.com> Hi, This is regarding the FFE request for policy for - https://review.opendev.org/#/c/717879/ This is part of introducing scope_type and new defaults in policies. During Nova implementation[1], we came across two things to solve before we release the new policy changes: 1. A lot of warnings as policy defaults are changing, which end up filling the logs (226 MB n-api logs today). 2. Give operators an option to switch to the new system (scope and new defaults) at the same time without overwriting the policy file. We discussed solving those two things via oslo_policy[2], which needs FFE approval. Sorry for being late on this; it should have been planned earlier. Please let me know if it is ok to include these changes in the Ussuri release.
[1] https://review.opendev.org/#/q/topic:bp/policy-defaults-refresh+(status:open+OR+status:merged) [2] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-04-06.log.html#t2020-04-06T17:11:39 -gmann From pyh at virtbox.net Tue Apr 7 02:06:52 2020 From: pyh at virtbox.net (Wesley Peng) Date: Tue, 7 Apr 2020 10:06:52 +0800 Subject: choosing object storage In-Reply-To: <20200406140410.1ac82bcc@suzdal.zaitcev.lan> References: <215ab035-fc25-c7ff-dc74-657c0343b3ae@plum.ovh> <51cb6461-dfa6-3b2b-c053-0318bc280c19@catalyst.net.nz> <20200406140410.1ac82bcc@suzdal.zaitcev.lan> Message-ID: <7777da35-2501-c8fb-3736-75424f084730@virtbox.net> Pete Zaitcev wrote: > Last time I saw anyone publish their Swift data at all, it was Turkcell > who had a 36PB cluster and planned to grow it to 50PB by end of 2019. > They started that cluster in Icehouse release with 250GB drives! From my experience, for a pure object storage requirement, swift is much simpler than Ceph, in either deployment or operations. We have Ceph as block storage only, but have swift as object storage for S3 access etc. We have 500TB data stored in Swift. Regards. From pyh at virtbox.net Tue Apr 7 02:13:09 2020 From: pyh at virtbox.net (Wesley Peng) Date: Tue, 7 Apr 2020 10:13:09 +0800 Subject: choosing object storage In-Reply-To: <7777da35-2501-c8fb-3736-75424f084730@virtbox.net> References: <215ab035-fc25-c7ff-dc74-657c0343b3ae@plum.ovh> <51cb6461-dfa6-3b2b-c053-0318bc280c19@catalyst.net.nz> <20200406140410.1ac82bcc@suzdal.zaitcev.lan> <7777da35-2501-c8fb-3736-75424f084730@virtbox.net> Message-ID: Wesley Peng wrote: > Pete Zaitcev wrote: >> Last time I saw anyone publish their Swift data at all, it was Turkcell >> who had a 36PB cluster and planned to grow it to 50PB by end of 2019. >> They started that cluster in Icehouse release with 250GB drives! > > From my experience, for a pure object storage requirement, swift is much > simpler than Ceph, in either deployment or operations.
> > We have Ceph as block storage only, but have swift as object storage for > S3 access etc. We have 500TB data stored in Swift. > > Regards. > BTW, we have swift and ceph deployed separately, but we don't have openstack in our environment. We use a cloud architecture developed by ourselves. Ceph and Swift are used for our storage products; they are good enough. regards. From sorrison at gmail.com Tue Apr 7 02:00:35 2020 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 7 Apr 2020 12:00:35 +1000 Subject: [neutron][OVN] Issues trying to migrate to OVN using db sync util Message-ID: Hi Neutron/OVN devs/ops, We are in the process of migrating to OVN, and we are having issues getting all our neutron data into OVN using the neutron-ovn-db-sync-util tool. Our neutron has: ~250 networks/subnets ~11,000 ports ~53,000 security groups ~200,000 security group rules The issues we have: We get a lot of timeouts with ovsdb transactions, so we have made it commit transactions every 50 items. It takes ages to run; we haven’t managed a full sync yet, and it has run for over 1.5 hours before we get an error. The main issue is we are getting this error [1] about a dictionary changing, which is preventing us from completing a full sync. I’m wondering if there are any sites out there running with this size of secgroups/ports etc. We have dedicated hardware for the ovsdb servers etc., so I don’t think it’s something we can chuck more hardware at. We are running the Stein version and wondering if there has been any work in making this better in Train or newer? Thanks in advance for any help you can provide us. Cheers, Sam [1] https://bugs.launchpad.net/networking-ovn/+bug/1871272 From Arkady.Kanevsky at dell.com Tue Apr 7 02:03:46 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Tue, 7 Apr 2020 02:03:46 +0000 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project?
In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: After reading this lengthy and enlightening thread, I come back to Julia's original points. 1. What are the benefits of having Ironic as part of OpenStack vs. having Ironic as part of opendev? a. I do not buy that people will use Ironic more as a standalone. Bifrost was created several years ago and was available as standalone Ironic. And it is not widely used. b. Ironic is used as an underpinning for other projects, like Metal3 and Airship. And that is with Ironic being part of OpenStack. Can Ironic be improved for easier use outside OpenStack? Absolutely. Having better handling of dependencies, like the already mentioned Oslo and Python, will definitely help. c. Will Ironic be more widely adopted outside of OpenStack if it is part of OpenDev rather than OpenStack? It would be good to have some evidence for that. 2. Does being part of OpenStack slow or impede Ironic growth and adoption? I would argue that being part of OpenDev will provide more opportunities for growth. 3. If Ironic becomes an OpenDev project, what changes? a. It will still be under the same governance, unless the Ironic community wants to create its own governance, which would take away velocity from developing new features. So expect that the current OpenStack governance rules will stay. b. Will the gate and other processes change? I do not think so, since the current ones work reasonably well. c. Should Ironic be part of the "integrated" OpenStack release and tested together with the rest of OpenStack? Absolutely. It has a lot of benefit for the OpenStack community, including all derived distro products. d. Will Ironic change its release cadence if it is no longer part of OpenStack proper? Ironic is only as good as its underlying drivers. That means that all drivers that are currently outside of OpenStack governance will have to follow. e. Will Ironic change the development platform?
It currently uses devstack. f. All Ironic documentation follows OpenStack and is part of it. 4. If Ironic leaves OpenStack proper, what stops some of the other projects, like Cinder or Keystone, from leaving? That worries me, as it may lead to disintegration of the community and of the notion of what OpenStack is. Or it may transition OpenStack to a federated model of projects. So after saying all of that, I am leaning toward cleaning up and simplifying dependencies to make it easier to consume Ironic outside OpenStack, at least for the Victoria cycle, and revisiting it at that cycle's end. Sorry for the long stream of consciousness. Arkady From sorrison at gmail.com Tue Apr 7 02:11:06 2020 From: sorrison at gmail.com (Sam Morrison) Date: Tue, 7 Apr 2020 12:11:06 +1000 Subject: [neutron][OVN] Multiple mechanism drivers In-Reply-To: References: <8B3A471E-B855-4D1C-AE52-080D4B0D92A9@gmail.com> <20191125075144.vhppi2bnnnfyy57s@skaplons-mac> Message-ID: <047A484A-79BF-488A-BDC8-8AB820B46EED@gmail.com> Sorry, just picking this up again after a bit of a hiatus. We have compute nodes that support instances on midonet and on linuxbridge and everything works great; we don’t want or expect these networks to interact with each other. This works because they use different network_types. Our linux bridge networks are flat provider networks and midonet is used for tenant networks (tunnelled). Replacing midonet with OVN almost works too, except it shares the flat network type with linuxbridge. Ideally in our environment we’d want to say this is a linuxbridge network, use linuxbridge to bind; this is an OVN network, use the ovn driver to bind; etc. It kind of seems weird that it just iterates over all mechanism drivers and stops once one works. I’m sure there is a good reason for it but I can’t see it. Is anyone using neutron ml2 like this? What mech drivers can interact so that it doesn’t matter which driver binds the port?
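For readers unfamiliar with the binding behaviour being described: ML2 walks the configured mechanism drivers in order and the first successful binding wins. A rough illustrative sketch of that loop, with class and function names that are mine rather than neutron's actual code:

```python
class MechDriver:
    """Illustrative stand-in for an ML2 mechanism driver (not neutron code)."""

    def __init__(self, name, supported_types):
        self.name = name
        self.supported_types = supported_types

    def try_bind(self, network_type):
        # Real drivers inspect segments, agents, and vnic types; this
        # sketch only checks the segment's network_type.
        return network_type in self.supported_types


def bind_port(network_type, drivers):
    """First-wins iteration: drivers are tried in the order they appear
    in the mechanism_drivers config option."""
    for driver in drivers:
        if driver.try_bind(network_type):
            return driver.name
    return None


# With linuxbridge listed first, it claims every flat network it can
# bind, which is why ordering alone cannot express "this particular
# flat network belongs to OVN".
drivers = [MechDriver("linuxbridge", {"flat", "vlan"}),
           MechDriver("ovn", {"flat", "vlan", "geneve"})]
```

Under this model, a flat network always lands on whichever capable driver is listed first, which matches the behaviour described above.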
Sam > On 28 Nov 2019, at 11:27 pm, Sean Mooney wrote: > > On Thu, 2019-11-28 at 11:12 +0900, Takashi Yamamoto wrote: >> hi, >> >> On Mon, Nov 25, 2019 at 5:00 PM Slawek Kaplonski wrote: >>> >>> Hi, >>> >>> I think that this may be true that networking-ovn will not work properly >>> with other drivers. >>> I don't think it was tested at any time. > it should work with other drivers if you use vlan or flat networks. > it will not form mesh tunnel networks with other drivers even if you use geneve for the > other ml2 driver. >>> Also the problem may be that when You are using networking-ovn then the whole >>> neutron topology is different. There are different agents for example. >>> >>> Please open a bug for that for networking-ovn. I think that the networking-ovn team >>> will take a look into that. >> >> networking-midonet ignores networks without "midonet" type segments to >> avoid interfering with other mechanism drivers. >> maybe networking-ovn can have something similar. > that is actually the opposite of how that should work. > you are meant to be able to have multiple ml2 drivers share the same segmentation type > and you are not meant to have a segmentation type that is specific to a mech driver. > given we don't schedule based on segmentation type support today either (we should, by the way) > it would be very fragile to use a dedicated ovn segmentation type and i would not advise doing it for > midonet. > > ideally we would create placement aggregates or traits to track which segmentation types > are supported by which hosts. traits are probably better for the segmentation types but modelling network segments > themselves would be better as aggregates. > > if we really wanted to model the capacity of the segmentation types we would additionally create sharing resource providers > with inventories of network segmentation type resource classes per physnet with a single global rp for the tunneled > types.
then every time you allocated a network in neutron you would create an allocation for that network and tag ports > with the appropriate aggregate request. > > on the nova side we could combine the segment and segmentation type aggregate request from the port with any > other aggregates from nova and pass all of them as member_of requirements to placement to ensure we land on a > host that can provide the required network connectivity. today we literally just assume all nodes are connected > to all networks with all segmentation types and hope for the best. > > that's a bit of a tangent but just pointing out we should schedule on network connectivity and segmentation types > but we should not have backend specific segmentation types. > >> >> wrt agents, last time i checked there was no problem with running >> midonet agent and ovs agent on the same host, sharing the kernel >> datapath. >> so i guess there's no problem with ovn either. > you can run ml2/ovn and ml2/ovs on the same cloud. > just put the ml2/ovs first. it will fail to bind if a host does not have > the ovs neutron agent running and will then bind with ml2/ovn instead. > > it might work the other way too but i have not tested that. >> >> wrt l3, unfortunately neither midonet nor ovn have implemented the "l3 >> flavor" thing yet. so you have to choose a single l3 plugin. >> iirc, Sam's deployment doesn't use l3 for linuxbridge, right? > if you have dedicated network nodes that is not really a problem. > just make sure that they are all ovn or all ovs or whatever makes sense. > it's the same way that if you deploy with ml2/ovs and want to use ovs-dpdk > you only install ovs-dpdk on the compute nodes and use kernel ovs on the networking nodes > to avoid the terrible network performance when using network namespaces for nat and routing.
> > if you have tunneled networks it would be an issue but in that case you just need to ensure that at least 1 router > is created on each plugin so you would use ha routers by default and set the ha factor so that it creates routers on > nodes with both mechanism drivers. again however since the different ml2 drivers do not form a mesh you should > really only use different ml2 drivers if you are using vlan or flat networks. >> >>> >>> On Mon, Nov 25, 2019 at 04:32:50PM +1100, Sam Morrison wrote: >>>> We are looking at using OVN and are having some issues with it in our ML2 environment. >>>> >>>> We currently have 2 mechanism drivers in use: linuxbridge and midonet and these work well (midonet is the default >>>> tenant network driver for when users create a network) >>>> >>>> Adding OVN as a third mechanism driver causes the linuxbridge and midonet networks to stop working in terms of >>>> CRUD operations etc. > i would try adding ovn last so it is only used if the other two cannot bind the port. > the mech driver list is ordered for this reason so you can express preference. >>>> It looks as if the OVN driver thinks it’s the only player and is trying to do things on ports that are in >>>> linuxbridge or midonet networks. > that would be a bug if so. >>>> >>>> Am I missing something here?
(We’re using Stein version) >>>> >>>> >>>> Thanks, >>>> Sam >>>> >>>> >>>> >>> >>> -- >>> Slawek Kaplonski >>> Senior software engineer >>> Red Hat >>> >>> >> >> > From gmann at ghanshyammann.com Tue Apr 7 03:28:44 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 06 Apr 2020 22:28:44 -0500 Subject: [release][oslo] FFE request for Oslo Policy In-Reply-To: <171521ab0f6.aebddfad215946.4625836105559159173@ghanshyammann.com> References: <171521ab0f6.aebddfad215946.4625836105559159173@ghanshyammann.com> Message-ID: <17152aee162.e0742926216298.5916819155011760394@ghanshyammann.com> ---- On Mon, 06 Apr 2020 19:46:53 -0500 Ghanshyam Mann wrote ---- > Hi, > > This is regarding the FFE request for policy for > - https://review.opendev.org/#/c/717879/ > > This is part of introducing scope_type and new defaults in policies. During > Nova implementation[1], we came across two things to solve before we release > the new policy changes: > 1. A lot of warnings as policy defaults are changing, which end up filling the logs (226 MB n-api logs today). > 2. Give operators an option to switch to the new system (scope and new defaults) at the same time without overwriting the policy file. Basically two patches; I proposed them separately: 1. https://review.opendev.org/#/c/717879/ 2. https://review.opendev.org/#/c/717943/ -gmann > > We discussed solving those two things via oslo_policy[2], which needs FFE approval. Sorry for > being late on this; it should have been planned earlier. > > Please let me know if it is ok to include these changes in the Ussuri release.
> > [1] https://review.opendev.org/#/q/topic:bp/policy-defaults-refresh+(status:open+OR+status:merged) > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-04-06.log.html#t2020-04-06T17:11:39 > > -gmann > > From whayutin at redhat.com Tue Apr 7 04:47:30 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 6 Apr 2020 22:47:30 -0600 Subject: old tripleo bugs Message-ID: Greetings, As discussed, launchpad bugs that are in new or triaged status and were opened over 365 days ago have been moved to incomplete status. The list is here [1] and 286 bugs were moved to incomplete. The subscribed users or reporters can set the bug back to triaged or new; if you need help, ping me. Instead of leaving these bugs open forever, it's now up to the owner to flip the status back to something we'll track. We've been kicking hundreds of bugs down each release and that's just silly. Launchpad will expire and close the bugs in 60 days if they are not updated [2]. One note.. there were only 5 bugs in New and opened over 365 days. Thanks all! [1] http://paste.openstack.org/show/791712/ [2] https://help.launchpad.net/BugExpiry -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Apr 7 04:48:53 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 6 Apr 2020 22:48:53 -0600 Subject: [tripleo] old tripleo bugs Message-ID: Sorry, this time w/ the right filter.. Greetings, As discussed, launchpad bugs that are in new or triaged status and were opened over 365 days ago have been moved to incomplete status. The list is here [1] and 286 bugs were moved to incomplete. The subscribed users or reporters can set the bug back to triaged or new; if you need help, ping me. Instead of leaving these bugs open forever, it's now up to the owner to flip the status back to something we'll track. We've been kicking hundreds of bugs down each release and that's just silly.
Launchpad will expire and close the bugs in 60 days if they are not updated [2]. One note.. there were only 5 bugs in New and opened over 365 days. Thanks all! [1] http://paste.openstack.org/show/791712/ [2] https://help.launchpad.net/BugExpiry -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Apr 7 05:01:04 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 7 Apr 2020 05:01:04 +0000 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: <20200407050103.aruiiqbmtzh3vjfi@yuggoth.org> On 2020-04-07 02:03:46 +0000 (+0000), Arkady.Kanevsky at dell.com wrote: [...] > 1. What are the benefits of having Ironic as part of OpenStack vs. > having Ironic as part of opendev? [...] > c. Will Ironic be more adapted outside of OpenStack if it part > opendev rather than OpenStack? Will be good to have some evidence > for it. > > 2. Does being part of OpenStack slows or impede Ironic growth and > adoption? I would argue that being part of OpenDev will provide > more opportunities for growth. > > 3. If Ironic becomes OpenDev project what changes? [...] I regret that Dimitry's initial post used a subject which seems to be deepening confusion about OpenDev. Please understand that OpenDev is a project hosting platform and community of collaboration infrastructure sysadmins. What it means for a project to be "part of OpenDev" is that it's a piece of the collaboration infrastructure we're using to support and host open source software projects. Ironic would not be that, nor do I think that's what Dimitry meant. 
I think what he was trying to convey is that he still wants Ironic to be "hosted in OpenDev" (using OpenDev's Gerrit instance for code review, Zuul for project gating, Mailman for mailing lists, and so on, like OpenStack does), and also for Ironic to apply to become an official Open Infrastructure Project represented by the OSF (like OpenStack already is). Not all projects hosted in OpenDev are OSF Open Infrastructure Projects, and not all OSF Open Infrastructure Projects are hosted in OpenDev. Also the vast majority of projects hosted in OpenDev are not "part of OpenDev." We'll navigate this discussion best if we don't conflate these concepts needlessly. Thanks. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tkajinam at redhat.com Tue Apr 7 07:13:43 2020 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 7 Apr 2020 16:13:43 +0900 Subject: [openstack-discuss] [puppet] Nominating Takashi Kajinami for core of the Puppet OpenStack modules In-Reply-To: References: <2edf6519.7982.17129f619d5.Coremail.chdzsp@163.com> <1585552171666.54902@binero.com> Message-ID: Hi, It's my pleasure and great honor to have this nomination. I'll try my best to keep my contribution! Thank you, Takashi On Mon, Mar 30, 2020 at 11:06 PM Emilien Macchi wrote: > yes big +2 as well ! Thanks for your contributions :) > > On Mon, Mar 30, 2020 at 3:20 AM Tobias Urdin > wrote: > >> ​Big +1 >> >> >> ------------------------------ >> *From:* Shengping Zhong >> *Sent:* Monday, March 30, 2020 7:42 AM >> *To:* openstack-discuss at lists.openstack.org >> *Subject:* [openstack-discuss] [puppet] Nominating Takashi Kajinami for >> core of the Puppet OpenStack modules >> >> >> Hey Puppet Cores, >> >> >> I would like to nominate Takashi Kajinami as a Core reviewer for the >> >> Puppet OpenStack modules. 
He is an excellent contributor to our >> >> modules over the last several cycles. His stats for the last 90 days >> >> can be viewed here[0]. >> >> >> Please respond with your +1 or any objections. If there are no >> >> objections by April 6, I will add him to the core list. >> >> >> As there were no objections, I have added Takashi to the core list. >> >> Keep up the good work. >> >> >> Thanks, >> >> Shengping. >> >> >> [0] >> https://www.stackalytics.com/report/contribution/puppet%20openstack-group/90 >> >> >> >> > > > -- > Emilien Macchi > -- ---------- Takashi Kajinami Senior Software Maintenance Engineer Customer Experience and Engagement Red Hat e-mail: tkajinam at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Tue Apr 7 08:09:27 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Tue, 7 Apr 2020 10:09:27 +0200 Subject: [baremetal-sig][ironic] Baremetal whitepaper round 2: doodle Message-ID: Dear all, After the first very productive sessions on finishing the baremetal whitepaper (thanks to everyone contributing and reviewing afterwards!), we would like to keep the momentum and schedule a second round. I've set up a doodle here [0], and we will send out the call details once we have settled on the time slot(s). If you'd like to participate, please make sure you reply to the doodle before the end of this week. Thanks! Arne [0] https://doodle.com/poll/kyd5vvgmg68vxduv From mark at stackhpc.com Tue Apr 7 08:26:33 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 7 Apr 2020 09:26:33 +0100 Subject: [neutron][OVN] Multiple mechanism drivers In-Reply-To: <047A484A-79BF-488A-BDC8-8AB820B46EED@gmail.com> References: <8B3A471E-B855-4D1C-AE52-080D4B0D92A9@gmail.com> <20191125075144.vhppi2bnnnfyy57s@skaplons-mac> <047A484A-79BF-488A-BDC8-8AB820B46EED@gmail.com> Message-ID: On Tue, 7 Apr 2020 at 03:11, Sam Morrison wrote: > > Sorry just picking this up again after a bit of a hiatus.
> > We have compute nodes that support instances on midonet and on linuxbridge and everything works great, we don’t want or expect these networks to interact with each other. > This works because the use different network_types. Our linux bridge networks are flat provider networks and midonet is used for tenant networks (tunnelled). > > Replacing midonet with OVN almost works too except it shares flat network type with linuxbridge. > > Ideally in our environment we’d want to say this is a linuxbridge network, use linuxbridge to bind, this is an OVN network and use ovn driver to bind etc. > It kind of seems weird that it just iterates over all mechanism drivers and stops once one works, I’m sure there is a good reason for it but I can’t see it. > Is anyone using neutron ml2 like this? What mech drivers can interact and so it doesn’t matter which driver binds the port? When mixing mechanism drivers in the past I've separated them by physical network. Often drivers accept a list of physnets they manage. Not sure if this works for your case though. > > Sam > > > > On 28 Nov 2019, at 11:27 pm, Sean Mooney wrote: > > > > On Thu, 2019-11-28 at 11:12 +0900, Takashi Yamamoto wrote: > >> hi, > >> > >> On Mon, Nov 25, 2019 at 5:00 PM Slawek Kaplonski wrote: > >>> > >>> Hi, > >>> > >>> I think that this may be true that networking-ovn will not work properly > >>> with other drivers. > >>> I don't think it was tested at any time. > > it should work with other direver if you use vlan or flat networks. > > it will not form mesh tunnel networks with other drivers event if you use geneve for the > > other ml2 driver. > >>> Also the problem may be that when You are using networking-ovn than whole > >>> neutron topology is different. There are different agents for example. > >>> > >>> Please open a bug for that for networking-ovn. I think that networking-ovn team > >>> will take a look into that. 
> >> > >> networking-midonet ignores networks without "midonet" type segments to > >> avoid interfering with other mechanism drivers. > >> maybe networking-ovn can have something similar. > > that is actually the opposite of how that should work. > > you are meant to be able to have multiple ml2 drivers share the same segmentation type > > and you are not meant to have a segmentation type that is specific to a mech driver. > > given we don't schedule based on segmentation type support today either (we should, by the way) > > it would be very fragile to use a dedicated ovn segmentation type and i would not advise doing it for > > midonet. > > > > ideally we would create placement aggregates or traits to track which segmentation types > > are supported by which hosts. traits are probably better for the segmentation types, but modelling network segments > > themselves would be better as aggregates. > > > > if we really wanted to model the capacity of the segmentation types we would additionally create sharing resource providers > > with inventories of network segmentation type resource classes per physnet, with a single global rp for the tunneled > > types. then every time you allocated a network in neutron you would create an allocation for that network and tag ports > > with the appropriate aggregate request. > > > > on the nova side we could combine the segment and segmentation type aggregate requests from the port with any > > other aggregates from nova and pass all of them as member_of requirements to placement to ensure we land on a > > host that can provide the required network connectivity. today we literally just assume all nodes are connected > > to all networks with all segmentation types and hope for the best. > > > > that's a bit of a tangent, but just pointing out that we should schedule on network connectivity and segmentation types > > but we should not have backend-specific segmentation types.
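As a concrete illustration of the physnet separation Mark suggests earlier in this thread, together with the driver ordering discussed below, an ml2_conf.ini along these lines keeps the drivers out of each other's way. This is a sketch only: the driver and physnet names are hypothetical, so adjust them to your deployment.

```ini
[ml2]
# Order matters: drivers are tried left to right and the first one
# that successfully binds the port wins.
mechanism_drivers = linuxbridge,ovn
type_drivers = flat,vlan,geneve
tenant_network_types = geneve

[ml2_type_flat]
# Restrict flat networks to the physnet(s) the linuxbridge agents manage;
# by convention, the other driver is not wired to this physnet.
flat_networks = physnet-lb

[ml2_type_geneve]
vni_ranges = 1:65536
```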
> > > >> wrt agents, last time i checked there was no problem with running > >> midonet agent and ovs agent on the same host, sharing the kernel > >> datapath. > >> so i guess there's no problem with ovn either. > > you can run ml2/ovn and ml2/ovs on the same cloud. > > just put ml2/ovs first. it will fail to bind if a host does not have > > the ovs neutron agent running and will then bind with ml2/ovn instead. > > > > it might work the other way too but i have not tested that. > >> > >> wrt l3, unfortunately neither midonet nor ovn have implemented the "l3 > >> flavor" thing yet. so you have to choose a single l3 plugin. > >> iirc, Sam's deployment doesn't use l3 for linuxbridge, right? > > if you have dedicated network nodes that is not really a problem. > > just make sure that they are all ovn or all ovs or whatever makes sense. > > it's the same way that if you deploy with ml2/ovs and want to use ovs-dpdk, > > you only install ovs-dpdk on the compute nodes and use kernel ovs on the networking nodes > > to avoid the terrible network performance when using network namespaces for nat and routing. > > > > if you have tunneled networks it would be an issue, but in that case you just need to ensure that at least 1 router > > is created on each plugin, so you would use ha routers by default and set the ha factor so that it creates routers on > > nodes with both mechanism drivers. again however, since the different ml2 drivers do not form a mesh, you should > > really only use different ml2 drivers if you are using vlan or flat networks. > >> > >>> > >>> On Mon, Nov 25, 2019 at 04:32:50PM +1100, Sam Morrison wrote: > >>>> We are looking at using OVN and are having some issues with it in our ML2 environment.
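The first-match binding behaviour this thread keeps coming back to ("put ml2/ovs first, it falls through to ml2/ovn") can be sketched as follows. This is a deliberate simplification of what the ML2 plugin does, not the actual neutron code; the driver objects are stand-ins.

```python
# Simplified sketch of ML2's first-match port binding: drivers are tried
# in the configured order and iteration stops at the first success.
def bind_port(port, mechanism_drivers):
    for driver in mechanism_drivers:
        if driver.try_bind(port):
            return driver.name
    return None  # binding failed with every driver


class FakeDriver:
    """Stand-in for a mechanism driver; binds only hosts it has an agent on."""

    def __init__(self, name, hosts_with_agent):
        self.name = name
        self.hosts_with_agent = hosts_with_agent

    def try_bind(self, port):
        return port["host"] in self.hosts_with_agent


drivers = [
    FakeDriver("openvswitch", {"compute-1"}),  # listed first, so preferred
    FakeDriver("ovn", {"compute-1", "compute-2"}),
]

# compute-1 runs the OVS agent, so ml2/ovs binds it; compute-2 does not,
# so binding falls through to ml2/ovn.
print(bind_port({"host": "compute-1"}, drivers))  # openvswitch
print(bind_port({"host": "compute-2"}, drivers))  # ovn
```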
> >>>> > >>>> We currently have 2 mechanism drivers in use: linuxbridge and midonet and these work well (midonet is the default > >>>> tenant network driver for when users create a network) > >>>> > >>>> Adding OVN as a third mechanism driver causes the linuxbridge and midonet networks to stop working in terms of > >>>> CRUD operations etc. > > i would try adding ovn last so it is only used if the other two cannot bind the port. > > the mech driver list is ordered for this reason so you can express preference. > >>>> It looks as if the OVN driver thinks it’s the only player and is trying to do things on ports that are in > >>>> linuxbridge or midonet networks. > > that would be a bug if so. > >>>> > >>>> Am I missing something here? (We’re using the Stein version) > >>>> > >>>> > >>>> Thanks, > >>>> Sam > >>>> > >>>> > >>>> > >>> > >>> -- > >>> Slawek Kaplonski > >>> Senior software engineer > >>> Red Hat > >>> > >>> > >> > >> > > > > From thierry at openstack.org Tue Apr 7 09:27:22 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 7 Apr 2020 11:27:22 +0200 Subject: [tc][oslo] Looking for a Victoria Oslo PTL volunteer Message-ID: <3efe6802-1a75-8869-7da1-0c320d66d417@openstack.org> Hello everyone, Nobody has stepped up yet to continue the great work of Ben Nemec as PTL of the Oslo project team. Oslo handles common code between our various projects, and is key to reducing duplication of code and waste of effort. Oslo is very critical for OpenStack, as all other projects depend on its libraries, but can be relatively low-maintenance as most libraries are feature-complete. Being the Oslo PTL gives you a great horizontal perspective on OpenStack, so it is a great opportunity for smaller organizations that depend on OpenStack to keep track of what's happening and contribute back, without having to specialize in one of the larger projects. I suspect Ben can help answer any question on the workload it represents to be the Oslo PTL.
Getting a new volunteer for Oslo PTL is our preferred solution. In case the PTL position is just too intimidating and nobody volunteers, the TC is considering experimenting with a PTL-less form of governance that would involve only designating two liaisons (a release liaison(s) and a security liaison(s)). That is the minimal accountability that we require of an openstack project team (someone(s) signing off releases, and someone(s) to contact in case of embargoed vulnerabilities). I say "someone(s)" as ideally multiple people sign up to be a liaison. In that form of governance, if there is any conflict in the team, the TC can ultimately arbitrate the conflict, instead of the PTL. That would be our plan B. So please let us know if you'd be interested in the Oslo PTL position, or if you'd be OK just for filling the duties of a specific liaison. Thanks for considering it! -- Thierry Carrez (ttx) From thierry at openstack.org Tue Apr 7 09:40:44 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 7 Apr 2020 11:40:44 +0200 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet In-Reply-To: References: <4733205b-ae20-90af-490b-ce56434f22e4@gmx.com> Message-ID: <4d505553-2e0e-52c9-ef33-6f3abde77f13@openstack.org> Sean McGinnis wrote: >>>> Thanks for the detailed response Sean. I don't have an issue with the >>>> cycle model - Ironic is still tied to the cyclical release model. The >>>> part that I disagree with is the requirement to create an intermediary >>>> release. It shouldn't be a problem if bifrost doesn't make a feature >>>> release between Train and Ussuri, we'll just do a final Ussuri >>>> release. It's the intermediary I'd like to be optional, rather than >>>> the final cycle release. >>>> >>> I would suggest switching these to cycle-with-rc then. 
There is one >>> release candidate that has to happen just before the final release for >>> the cycle, but that's mainly to make sure everything is in good shape >>> before we declare it done. That sounds like it might fit better with >>> what the team wants to do here. >> But what if we want to create a feature release mid-cycle? Some cycles >> we do, some we don't. >> > With cycle-with-rc, that does allow *beta* releases to be done at any > point during the cycle. But those would be marked as b1, b2, etc. > releases. This allows those that want to try out upcoming features to > grab them if they want them, but would prevent anyone else from > accidentally picking up something before it is quite ready. > > I'm guessing this might not be what you are looking for though. > > We do have another release model called cycle-automatic. This was > introduced for tempest plugins to just do a release at the end of the > cycle to make sure there is a tag to denote the tempest version the > plugin was originally developed for. Since some plugins are being picked > up more often downstream, this model does allow for additional releases > to be proposed at any point during the development cycle. > > We will need to discuss this as a team to see if this makes sense for > non-tempest plugins. It was intended only for those types of > deliverables. I just mention it here as something that we do have in > place that might be adapted to fit what the team needs. But we also need > to consider what we are communicating to downstream consumers of our > releases, so I'm not entirely sure at this point if it makes sense, or > would be a good thing, to allow other types of deliverables to use this > model. Yeah the general idea was to drive toward best practices (if you do a single release per cycle, it's important that it's good, so it should use feature freeze, release candidates...). That said today it's rare that we break things... and nobody tests RC releases anyway. 
So there is definitely a possibility for just having one single cycle-based release model: release once or more in a cycle, do not use betas or RCs. And if there is no release, say, two weeks before final, we'd automatically cut one from HEAD. I'd actually prefer that to switching to cycle-independent, since deliverables under that model are not part of "the openstack release". That said, it might be a bit late to roll that out for this cycle, two days before we actually feature-freeze cycle-with-rc projects... -- Thierry Carrez (ttx) From hberaud at redhat.com Tue Apr 7 09:46:32 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 7 Apr 2020 11:46:32 +0200 Subject: [tc][oslo] Looking for a Victoria Oslo PTL volunteer In-Reply-To: <3efe6802-1a75-8869-7da1-0c320d66d417@openstack.org> References: <3efe6802-1a75-8869-7da1-0c320d66d417@openstack.org> Message-ID: Hello, I'm already the release liaison for oslo and I can continue in this role for a new cycle if that helps. Cheers On Tue, Apr 7, 2020 at 11:30 AM, Thierry Carrez wrote: > Hello everyone, > > Nobody has stepped up yet to continue the great work of Ben Nemec as PTL of > the Oslo project team. Oslo handles common code between our various > projects, and is key to reducing duplication of code and waste of effort. > Oslo is very critical for OpenStack, as all other projects depend on its > libraries, but can be relatively low-maintenance as most libraries are > feature-complete. > > Being the Oslo PTL gives you a great horizontal perspective on > OpenStack, so it is a great opportunity for smaller organizations > that depend on OpenStack to keep track of what's happening and > contribute back, without having to specialize in one of the larger > projects. > > I suspect Ben can help answer any question on the workload it represents > to be the Oslo PTL. > > Getting a new volunteer for Oslo PTL is our preferred solution.
In case > the PTL position is just too intimidating and nobody volunteers, the TC > is considering experimenting with a PTL-less form of governance that > would involve only designating two liaisons (a release liaison(s) and a > security liaison(s)). That is the minimal accountability that we require > of an openstack project team (someone(s) signing off releases, and > someone(s) to contact in case of embargoed vulnerabilities). I say > "someone(s)" as ideally multiple people sign up to be a liaison. In that > form of governance, if there is any conflict in the team, the TC can > ultimately arbitrate the conflict, instead of the PTL. > > That would be our plan B. So please let us know if you'd be interested > in the Oslo PTL position, or if you'd be OK just for filling the duties > of a specific liaison. > > Thanks for considering it! > > -- > Thierry Carrez (ttx) > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gr at ham.ie Tue Apr 7 09:47:41 2020 From: gr at ham.ie (Graham Hayes) Date: Tue, 7 Apr 2020 10:47:41 +0100 Subject: [all] Please remove all external resources from docs In-Reply-To: <20200406211828.auwmvebaykgbkfek@yuggoth.org> References: <86292630-29e6-3584-8649-970b0c71aa3b@debian.org> <20200406211828.auwmvebaykgbkfek@yuggoth.org> Message-ID: <13800783-0e27-b858-aef2-1c80785021dc@ham.ie> On 06/04/2020 22:18, Jeremy Stanley wrote: > On 2020-04-06 17:12:30 +0200 (+0200), Thomas Goirand wrote: >> I've written about this earlier, I guess, but I believe I need to do it >> once more. Often I see in the docs things like this: >> >> .. image:: https://governance.openstack.org/tc/badges/.svg >> :target: https://governance.openstack.org/tc/reference/tags/index.html > [...] >> The solution is simple: have the resource be *LOCAL* (ie: stored in >> the project's doc), not stored on an external site. > [...] > > I'm not a fan of this "feature" myself, but feel compelled to point > out that it's intentionally dynamic content dependent on metadata > from another repository which is intended to change at different > times than the document itself changes, so including a copy of that > file would make little sense. It *might* make sense - the info it shows is time-dependent, so if a project was asserting follows-stable-policy in pike, showing that is not necessarily _wrong_ ... Same as if a project was moved out of OpenStack in Havana, the docs could show the openstack project badge until then.[1] 1 - I am aware the badges didn't exist in Havana It's entirely there so that projects > can have the same feel-good "badges" which they see displayed in the > README files of other projects on GitHub. It exists purely so that > real-time RST-to-HTML renderers will display the current state of a > number of bits of governance metadata, so stripping those out of > packaged documentation builds is the right thing to do.
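A packaged-docs build could strip such remote badge directives with a small pre-build filter. The following is a sketch, not an existing tool, and it only handles the simple case of an `image` directive followed by indented option lines:

```python
import re

# Remove ".. image::" directives that point at remote URLs, together with
# their indented option lines (e.g. ":target:"), keeping local images.
REMOTE_IMAGE = re.compile(
    r"^\.\. image:: https?://\S+\n(?:[ \t]+\S.*\n)*", re.MULTILINE
)

def strip_remote_images(rst_text):
    return REMOTE_IMAGE.sub("", rst_text)


doc = (
    ".. image:: https://governance.openstack.org/tc/badges/x.svg\n"
    "   :target: https://governance.openstack.org/tc/reference/tags/index.html\n"
    "\n"
    "Some prose.\n"
    ".. image:: _static/local.png\n"
)
print(strip_remote_images(doc))
```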
Maybe we > could make it easier to have our documentation builds strip that > content automatically so that distro package maintainers don't need > to? > From rico.lin.guanyu at gmail.com Tue Apr 7 09:59:37 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 7 Apr 2020 17:59:37 +0800 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> Message-ID: On Thu, Apr 2, 2020 at 6:31 PM Jean-Philippe Evrard wrote: > > I read your nominations, and as usual I will ask what you > _technically_ will do during your mandate, and what you _actively_ want > to change in OpenStack? I aim to improve the bridge across Developers, Users, and Operators. From my last year in the TC role, I have tried to get involved with SIGs, and I think the Special Interest Group is the right format. IMO we didn't provide as many resources as we should have to promote this kind of group, especially considering that the success of a SIG requires involvement from all three parties (devs, users, and ops). Also, if possible, adding language- and timezone-friendly factors. On the other hand, I will also work on CI for scenarios across projects or communities. OpenStack can surely do a lot of things, but it really needs better test coverage to ensure they stay stable over time. We fall short on test jobs for multi-arch and automation scenarios (which is why I stay in the SIGs for these two directions). Last, the community-wide goal schedule: it's something we should keep pushing, and it is not yet on track (I mean the scheduling part). -- My friends, stay home please, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dtantsur at redhat.com Tue Apr 7 10:03:53 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 7 Apr 2020 12:03:53 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: Hi, On Mon, Apr 6, 2020 at 4:45 PM Mohammed Naser wrote: > Hi Dmitry, > > Thank you for raising this. I think what you're looking for makes > sense, I don't think splitting outside OpenStack is the right solution > for this. There are many logistical issues in doing this. > I do agree with this. > > First of all, it promotes even more bureaucracy within our community > which is something that we're trying to reduce. "Ironic" and > "OpenStack" becoming two separate pieces means that we've failed as a > community to be able to deliver what OpenStack is. If we do this, we > further promote the separation of our communities and that is not > sustainable. With a dwindling contributor base, we'll find power in > standing together in big groups, not by isolating ourselves on small > islands. > This sounds a bit like the "us vs them" mentality, which I think may be hurting OpenStack now. I think we should be more open to the things that happen outside of our control. This also goes back to my points about Oslo and the bigger Python world. > > Arguably, you would say that "well, Ironic is picking up outside > OpenStack and we want to capitalize on that". I agree with you on > that, I think we should absolutely do that. However, I think simply > just becoming a top-level project is not the way to go about this. It > will introduce a lot more work for our (already overwhelmed) OSF staff, > it means maintaining a new identity, it means applying to be a pilot > project and going through the whole process. It means that all > existing developers may have to revise the way they work > because they have signed the CCLA for OpenStack and not "Ironic". > I fully agree, this is unfortunate.
> We're adding a whole lot of bureaucracy when the problem is messaging. > > I've gone over your points below about what you think this will do and > strongly suggest those alternatives. > > Regards, > Mohammed > > On Wed, Apr 1, 2020 at 1:07 PM Dmitry Tantsur wrote: > > > > Hi everyone! > > > > This topic should not come as a huge surprise for many, since it has > been raised numerous times in the past years. I have a feeling that the end > of Ussuri, now that we’ve re-acquired our PTL and are on the verge of > selecting new TC members, may be a good time to propose it for a formal > discussion. > > > > TL;DR I’m proposing to make Ironic a top-level project under opendev.org > and the OpenStack Foundation, following the same model as Zuul. I don’t > propose severing current relationships with other OpenStack projects, nor > making substantial changes in how the project is operated. > > > > (And no, it’s not an April 1st joke) > > > > Background > > ========= > > > > Ironic was born as a Nova plugin, but has grown way beyond this single > case since then. The first commit in Bifrost dates to February 2015. During > these 5 years (hey, we forgot to celebrate!) it has developed into a > commonly used data center management tool - and still based on standalone > Ironic! The Metal3 project uses standalone Ironic as its hardware > management backend. We haven’t been “just” a component of OpenStack for a > while now, I think it’s time to officially recognize it. > > > > Okay, so why?
> > =========== > > > > The first and the main reason is the ambiguity in our positioning. We do > see prospective operators and users confused by the perception that Ironic > is a part of OpenStack, especially when it comes to the standalone use > case. “But what if I don’t need OpenStack” is a question that I hear in > most of these conversations. Changing from “a part of OpenStack” to “a FOSS > tool that can integrate with OpenStack” is critical for our project to keep > growing into new fields. To me personally it feels in line with how OpenDev > itself is reaching into new areas beyond just the traditional IaaS. The > next OpenDev event will apparently have a bare metal management track, so > why not a top-level project for it? > > > > Another reason is release cadence. We have repeatedly expressed the > desire to release Ironic and its sub-projects more often than we do now. > Granted, *technically* we can release often even now. We can even abandon > the current release model and switch to “independent”, but it doesn’t > entirely solve the issue at hand. First, we don’t want to lose the notion > of stable branches. One way or another, we need to support consumers with > bug fix releases. Second, to become truly “independent” we’ll need to > remove any tight coupling with any projects that do integrated releases. > Which is, essentially, what I’m proposing here. > > > > Finally, I believe that our independence (can I call it “Irexit” > please?) has already happened in reality, we just shy away from recognizing > it. Look: > > 1. All integration points with other OpenStack projects are optional. > > 2. We can work fully standalone and even provide a project for that. > > 3. Many new features (RAID, BIOS to name a few) are exposed to > standalone users much earlier than to those going through Nova. > > 4. We even have our own mini-scheduler (although its intention is not > and has not been to replace the Placement service). > > 5.
We make releases more often than the “core” OpenStack projects (but > see above). > > > > What we will do > > ============ > > > > This proposal involves in the short term: > > * Creating a new git namespace: opendev.org/ironic > > We could totally do this for all existing projects honestly. I think > the TC could probably be okay with this. > That's an interesting thought, actually. Maybe rather than one "openstack" namespace we can have a namespace per program? Like opendev.org/compute/nova, etc? This may also solve a part of the problem with oslo libraries adoption outside of OpenStack. opendev.org/libs/stevedore doesn't suggest that it belongs to openstack the same way as opendev.org/openstack/stevedore. And it will provide an obvious link between code names and purposes! Who can we talk to to make this happen? > > > * Creating a new website (name TBD, bare metal puns are welcome). > > * If we can have https://docs.opendev.org/ironic/, it may be just > fine though. > > Who's going to work on this website? It's important to not only have > a website but keep it maintained, add more content, update it. We do realize this, and we're already doing it, to some extent, with docs.o.o/ironic. It's unfortunate that we don't have dedicated designers and technical writers, but it's already our reality. Note that I don't suggest a highly dynamic web site. More of a consumer-oriented landing page like http://metal3.io/ and an umbrella-neutral hosting for documentation like http://tripleo.org/. I do agree with Thierry that it's doable without the actual split, but then it needs cooperation from the Foundation and the TC. > The > website will have absolutely zero traction initially and we'll miss > out on all the "traffic" that OpenStack.org gets. 
I think what we > should actually do is redesign OpenStack.org so that it's a focused > about the OpenStack projects and move all the foundation stuff to > osf.dev -- In there, we can nail down the messaging of "you don't need > all of OpenStack". > I don't think keeping ironic on openstack.org will help you nail it down. > > > * Keeping the same governance model, only adjusted to the necessary > extent. > > This is not easy, you'll have to come up with a whole governance, run > elections, manage people. We already have volunteers that help do > this inside OpenStack, why add all that extra layer? > > > * Keeping the same policies (reviews, CI, stable). > > That seems reasonable to me > > > * Defining a new release cadence and stable branch support schedule. > > If there is anything in the current model that doesn't suit you, > please bring it up, and let's revise it. I've heard this repeated a > lot as a complaint from the Ironic team and I've unfortunately not > seen any proposal about an ideal alternative. We need to hear things > to change things. > There is no ideal alternative, this is why most of these discussions get stuck. We may need to look at the Swift model since it seems closer to what we're aiming at. Which are: 1) More frequent releases with a possibility of bug-fix support for them (i.e. short-lived stable branches) and upgrades (time to ditch grenade?). 2) Stop release-to-release matching for interactions with other services (largely achieved by microversioning). 3) Support for non-coordinated releases in installation tools (okay, this one is fun, I'm not sure how to approach it). > > > In the long term we will consider (not necessary do): > > * Reshaping our CI to rely less on devstack and grenade (only use them > for jobs involving OpenStack). > > That seems reasonable to have more Ironic "standalone" jobs. It is > important that _the_ biggest consumers *are* the OpenStack ones, let's > not alienate them so we end up in a world of nothing new. 
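The microversioning mentioned in point 2 above follows the usual pattern: client and server advertise version ranges and negotiate the highest common one, so services don't need release-to-release matching. A minimal sketch of the idea, not ironic's actual implementation:

```python
def negotiate(client_min, client_max, server_min, server_max):
    """Pick the highest microversion both sides support, or None.

    Versions are (major, minor) tuples; tuple comparison orders them.
    """
    low = max(client_min, server_min)
    high = min(client_max, server_max)
    return high if low <= high else None


# An old client against a new server still gets a usable version...
print(negotiate((1, 1), (1, 38), (1, 1), (1, 72)))  # (1, 38)
# ...while disjoint ranges fail cleanly instead of guessing.
print(negotiate((2, 0), (2, 5), (1, 1), (1, 72)))   # None
```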
> I'm not sure I get your proposal. If you suggest that we keep most of our testing on devstack/grenade/tempest with the whole OpenStack, this is something we're moving away from, no matter if split or not. The current approach to CI testing simply doesn't scale well enough to cover our feature set. > > > * Reducing or removing reliance on oslo libraries. > > Why? > I've left some thoughts in my previous replies, but I don't quite like the way we're creating silos in OpenStack. Everyone is depending on oslo.utils, sometimes for one tiny helper (bool_from_string anyone?). Couldn't we move it to Python stdlib? I think the existence of our semi-private libraries discourages reaching out to the whole infrastructure. Everything depends on oslo.i18n, cannot we use the built-in gettext instead? Why is everything pulling in Babel even though nothing imports it? Sorry, this is turning into a rant, the essence of which is that we're taking a too easy approach to requirements. > > > * Stopping using rabbitmq for messaging (we’ve already made it optional). > > Please. Please. Whatever you replace it with, just update > oslo.messaging and make all of us happy to stop using it. It's hell. > I'd be happy to, but I think the oslo.messaging API is designed around the AMQP pattern too much. I'm getting back to my rant above: do all projects really need a messaging queue? When I asked this question in Ironic, we realized that we don't. We simply introduced JSON RPC support instead. Dmitry > > > * Integrating with non-OpenStack services (kubernetes?) and providing > lighter alternatives (think, built-in authentication). > > I support this, and I think there's nothing stopping you from doing > that today. If there is, let's bring it up. > > > What we will NOT do > > ================ > > > > At least this proposal does NOT involve: > > * Stopping maintaining the Ironic virt driver in Nova. > > * Stopping running voting CI jobs with OpenStack services. 
> > * Dropping optional integration with OpenStack services. > > * Leaving OpenDev completely. > > > > What do you think? > > =============== > > > > Please let us know what you think about this proposal. Any hints on how > to proceed with it, in case we reach a consensus, are also welcome. > > > > Cheers, > > Dmitry > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. https://vexxhost.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Tue Apr 7 10:15:34 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 7 Apr 2020 12:15:34 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: Hi, On Mon, Apr 6, 2020 at 11:06 PM Ben Nemec wrote: > At the risk of getting defensive, I do want to make some points relating > to Oslo specifically here. > I really do hope nothing I've said comes as offensive to the Oslo team. I'm not blaming you, we're all in this together :) > > TLDR at the bottom if your eyes glaze over at the wall of text. > > On 4/6/20 3:10 AM, Dmitry Tantsur wrote: > > With absolutely no disrespect meant to the awesome Oslo team, I think > > the existence of Oslo libraries is a bad sign. I think as a strong FOSS > > community we shouldn't invest into libraries that are either useful only > > to us or at least are marketed this way. For example: > > I think it's relevant to keep in mind that Oslo didn't start out as a > collection of libraries, it started out as a bunch of code forklifted > from Nova for use by other OpenStack services. It was a significant > effort just to get the Nova-isms out of it, much less trying to sanitize > everything OpenStack-specific. 
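The JSON RPC alternative Dmitry mentions above (replacing the message queue with plain RPC over JSON) can be illustrated with nothing but the standard library. This is a minimal sketch of the pattern only, not ironic's actual implementation: the method name is made up, error handling is elided, and a real service would sit behind an HTTP/TLS listener.

```python
import json

# Minimal JSON-RPC 2.0 style dispatch: a method registry plus a handler
# that turns a request payload into a response payload.
METHODS = {
    "add": lambda params: params["a"] + params["b"],
}

def handle(raw_request):
    req = json.loads(raw_request)
    result = METHODS[req["method"]](req.get("params", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})


request = json.dumps(
    {"jsonrpc": "2.0", "id": 1, "method": "add", "params": {"a": 2, "b": 3}}
)
print(handle(request))  # {"jsonrpc": "2.0", "id": 1, "result": 5}
```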
> > I also don't think it's fair to characterize Oslo as only focused on > OpenStack today. In reality, many of our new libraries since the initial > incubator split have been general purpose as often as not. Where they > haven't, it's things like oslo.limit that are explicitly dependent on an > OpenStack service (Keystone in this case). We believe in being good OSS > citizens as much as anyone else. > > There's also a boil-the-ocean problem with trying to generalize every > solution to every problem. It's a question we ask every time a new > library is proposed, but in some cases it just doesn't make sense to > write a library for an audience that may or may not exist. And in some > cases when such an audience appears, we have refactored libraries to > make them more general purpose, while often keeping an > OpenStack-specific layer to ease integration with OpenStack services. > See oslo.concurrency and fasteners. > > In fact, that kind of split is a pretty common pattern, even in cases > where the underlying library didn't originate in Oslo/OpenStack. Think > sqlalchemy/oslo.db, dogpile/oslo.cache, kombu/oslo.messaging. A lot of > Oslo is glue code to make it easier to use something in a common way > across OpenStack services. > > > 1) oslo.config is a fantastic piece of software that the whole python > > world could benefit from. Same for oslo.service, probably. > > oslo.config is an interesting case because there is definitely interest > outside OpenStack thanks to things like the Castellan config backend, > but as Doug mentioned oslo.config is heavily opinionated (global config > object, anyone?) and that's an issue for a lot of people. I will also > point out that the only oslo.* dependency that oslo.config has is > oslo.i18n, which itself has no other oslo dependencies, so there's not > much barrier to entry to using it standalone today. oslo.config currently pulls in 16 dependencies to a clean venv, including oslo.i18n, Babel (why?) and requests (WHY?).
> > I would not inflict oslo.service on anyone I liked. :-P > So, you don't like us? :-P > > Seriously though, I would advocate for using cotyledon if you're looking > for a general purpose service library. It's not eventlet-based and > provides a lot of the functionality of oslo.service, at least as I > understand it. It was also written by an ex-Oslo contributor outside of > OpenStack so maybe it's an interesting case study for this discussion. I > don't know how much contribution it gets from other sources, but that > would be good to look into. > Good to know, thanks! > > > 2) oslo.utils as a catch-all repository of utilities should IMO be > > either moved to existing python projects or decomposed into small > > generally reusable libraries (essentially, each sub-module could be a > > library of its own). Same for oslo.concurrency. > > oslo.concurrency has already been decomposed into general purpose code > (fasteners) and OpenStack-specific, at least for the most part. I'm sure > the split isn't perfect, but it's not like we haven't recognized the > need for better concurrency libraries in Python as a whole. Note that > fasteners also lives outside Oslo in another ex-Osloers personal repo. > Again, I'm not sure whether that was beneficial or not but it might be > worth reaching out to Mehdi and Josh to see how they feel about it. > I *think* we mostly use oslo.concurrency for its execute() wrapper, which seems like it could be a very useful library on its own (can I suggest better-execute as a name?). > > I would probably -2 any attempt to split oslo.utils. You'd end up with a > bunch of single file libraries and a metric ****-ton of administrative > overhead. It's bad enough managing 40 some repos as it is. Also, that's > another library with minimal cross-dependencies with the rest of Oslo > (just oslo.i18n again), which was an intentional design decision to make > it relatively painless to pull in. 
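For readers following along, the execute() wrapper mentioned above could be sketched as a standalone, stdlib-only library in a few lines. This is an illustration only, not oslo.concurrency's actual implementation; the names are made up for the sketch, and the real processutils.execute additionally handles things like retries, root helpers, delays and log masking:

```python
# Hypothetical stand-in for an execute()-style wrapper; stdlib only.
import shlex
import subprocess


class ProcessExecutionError(Exception):
    """Raised when a command exits with an unexpected status."""

    def __init__(self, cmd, exit_code, stdout, stderr):
        self.cmd, self.exit_code = cmd, exit_code
        self.stdout, self.stderr = stdout, stderr
        super().__init__("%r failed (exit %d): %s" % (cmd, exit_code, stderr))


def execute(*cmd, check_exit_code=(0,)):
    """Run cmd, return (stdout, stderr); raise if the exit code is unexpected."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode not in check_exit_code:
        raise ProcessExecutionError(shlex.join(cmd), proc.returncode,
                                    proc.stdout, proc.stderr)
    return proc.stdout, proc.stderr


out, _ = execute("echo", "hello")
print(out.strip())  # -> hello
```

A library this small is arguably left-pad territory on its own, which is exactly the trade-off being debated here.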
> I guess I have a personal dislike of libraries without a simple goal (I've also been fighting with various "utils" modules inside Ironic - with only limited luck). I do suspect that at least some of oslo.utils contents could find a new home, some maybe even in the stdlib. On the other hand, I also don't want us to end up in the left-pad situation. > > > 3) I'm genuinely not sure why oslo.log and oslo.i18n exist and which > > parts of their functionality cannot be moved upstream. > > oslo.log basically provides convenience functions for OpenStack > services, which is kind of what the oslo.* libraries should do. It > provides built-in support for things like context objects, which are > fairly domain-specific and would be difficult to generalize. It also > provides a common set of configuration options so each project doesn't > have to write their own. We don't need 20 different ways to enable debug > logging. :-) > > oslo.i18n is mostly for lazy translation, which apparently is a thing > people still use even though the company pushing for it in the first > place no longer cares. We've also had calls to remove it completely > because it's finicky and tends to break things if the consuming projects > don't follow the best practices with translated strings. So it's in a > weird limbo state. > May I join these calls please? :) Will I break Zanata (assuming anybody still translates ironic projects) if I switch to the upstream gettext? > > I did talk with JP in Shanghai about possibly making it optional because > it pulls in a fair amount of translation files which can bloat minimal > containers. I looked into it briefly and I think it would be possible, > although I don't remember a lot of details because nobody was really > pushing for it anymore so it's hard to justify spending a bunch of time > on it. > I'd be curious to hear more. Dmitry > > TLDR: I think Oslo provides value both in and out of OpenStack > (shocking, I know!).
I'm sure the separation isn't perfect, but what is? > > There are some projects that have been split out of it that might be > interesting case studies for this discussion if anyone wants to follow > up. Not it. ;-) > > -Ben > From dtantsur at redhat.com Tue Apr 7 10:26:44 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 7 Apr 2020 12:26:44 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <7e94b8efee1417f334ed60572cc3d41c847146e0.camel@evrard.me> <641ea673de0bd7beabfefb8afeb33e92858cbb54.camel@evrard.me> Message-ID: Hi, On Tue, Apr 7, 2020 at 4:03 AM wrote: > After reading this lengthy and enlightening thread, I come back to Julia's > original points. > > 1. What are the benefits of having Ironic as part of OpenStack vs. having > Ironic as part of opendev? > a. I do not buy that people will use Ironic more as standalone. Bifrost > was created several years back and was available as standalone Ironic. > And it is not widely used. > It is widely used, just not among big names. Julia and I constantly encounter people using it, as well as just installing ironic standalone themselves or with kolla. And now metal3 is another standalone ironic use case completely unrelated to openstack. > b. Ironic is used as underpinning to other projects, like Metal3, Airship. > And that is with Ironic being part of OpenStack. Can Ironic be improved for > easier use outside OpenStack? Absolutely. Having better handling of > dependencies, like the already mentioned Oslo and python, will definitely help. > c. Will Ironic be more adopted outside of OpenStack if it is part of opendev > rather than OpenStack? It would be good to have some evidence for it. > I wish I could see the future :) I can only suggest that people who currently have doubts about the OpenStack-specific nature of Ironic would use it. Maybe they wouldn't. > 2.
Does being part of OpenStack slow or impede Ironic growth and > adoption? I would argue that being part of OpenDev will provide more > opportunities for growth. > 3. If Ironic becomes an OpenDev project, what changes? > As Jeremy rightly noted, I'm using "OpenDev" in a sense that I wish existed, but doesn't exist currently. In the current reality it would be an OSF project outside of the OpenStack umbrella. And I do apologize for the confusion I caused. > a. It will still be under the same governance, unless the Ironic community > wants to create its own governance, which will cost velocity in developing > new features. So expect that the current OpenStack governance rules will > stay. > Please expand on your question. The TC would no longer be in charge, the PTL-core structure would probably stay in place. > b. Will the gate and other processes change? Do not think so since the > current ones work reasonably well. > Not much or not at all. > c. Should Ironic be part of the "integrated" openstack release and tested > together with the rest of openstack? - absolutely. It has a lot of benefit for > the openstack community, including all derived distro products. > Should libvirt be part of the integrated OpenStack release? I think Nova's integration with libvirt is more developed than the one with Ironic. Still, I keep hearing that the integration will fall apart if Ironic is no longer under OpenStack. Why is that? Haven't we invented microversions specifically to be able to further decouple service development? That being said, there would be a certain level of release coordination. There would be Ironic deliverables matching a coordinated OpenStack release and supported accordingly. > d. Will Ironic change its release cadence if it is no longer part of > OpenStack proper? Ironic is only as good as its underlying drivers. That > means that all drivers that are currently outside of OpenStack governance > will have to follow. > It's part of the idea, really.
I don't see a problem from the driver's perspective though. I thought the Dell team was actually interested in more frequent releases to be able to deliver driver features more often? > e. Will Ironic change the development platform? It currently uses devstack. > We use devstack and bifrost, and we'll keep using them. We may reduce our reliance on devstack in the CI, but, to be clear, that's part of our plans regardless of the outcome of this discussion. > f. All Ironic documentation follows OpenStack and is part of it. > And this may be one of the perception problems we're seeing. Our standalone documentation feels second-class, examples use Nova and OSC. > 4. If Ironic leaves OpenStack proper, what stops some of the other > projects, like Cinder or Keystone, from leaving? That worries me. As it may > lead to disintegration of the community and the notion of what OpenStack > is. Or it may transition OpenStack to a federated model of projects. > Maybe we may indeed? Maybe if we change OpenStack itself there won't be a need for Ironic (Cinder, Keystone) to leave? It's a great question, thank you for raising it! Dmitry > > So after saying all of it, I am leaning toward cleaning up and simplifying > dependencies to make it easier to consume Ironic outside OpenStack, at > least for the Victoria cycle, and revisiting it at that cycle's end. > > Sorry for the long stream of consciousness. > Arkady > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dtantsur at redhat.com Tue Apr 7 10:51:54 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 7 Apr 2020 12:51:54 +0200 Subject: [ironic][release] List cycle-with-intermediary deliverables that have not been released yet In-Reply-To: <4d505553-2e0e-52c9-ef33-6f3abde77f13@openstack.org> References: <4733205b-ae20-90af-490b-ce56434f22e4@gmx.com> <4d505553-2e0e-52c9-ef33-6f3abde77f13@openstack.org> Message-ID: On Tue, Apr 7, 2020 at 11:43 AM Thierry Carrez wrote: > Sean McGinnis wrote: > >>>> Thanks for the detailed response Sean. I don't have an issue with the > >>>> cycle model - Ironic is still tied to the cyclical release model. The > >>>> part that I disagree with is the requirement to create an intermediary > >>>> release. It shouldn't be a problem if bifrost doesn't make a feature > >>>> release between Train and Ussuri, we'll just do a final Ussuri > >>>> release. It's the intermediary I'd like to be optional, rather than > >>>> the final cycle release. > >>>> > >>> I would suggest switching these to cycle-with-rc then. There is one > >>> release candidate that has to happen just before the final release for > >>> the cycle, but that's mainly to make sure everything is in good shape > >>> before we declare it done. That sounds like it might fit better with > >>> what the team wants to do here. > >> But what if we want to create a feature release mid-cycle? Some cycles > >> we do, some we don't. > >> > > With cycle-with-rc, that does allow *beta* releases to be done at any > > point during the cycle. But those would be marked as b1, b2, etc. > > releases. This allows those that want to try out upcoming features to > > grab them if they want them, but would prevent anyone else from > > accidentally picking up something before it is quite ready. > > > > I'm guessing this might not be what you are looking for though. > > > > We do have another release model called cycle-automatic. 
This was > > introduced for tempest plugins to just do a release at the end of the > > cycle to make sure there is a tag to denote the tempest version the > > plugin was originally developed for. Since some plugins are being picked > > up more often downstream, this model does allow for additional releases > > to be proposed at any point during the development cycle. > > > > We will need to discuss this as a team to see if this makes sense for > > non-tempest plugins. It was intended only for those types of > > deliverables. I just mention it here as something that we do have in > > place that might be adapted to fit what the team needs. But we also need > > to consider what we are communicating to downstream consumers of our > > releases, so I'm not entirely sure at this point if it makes sense, or > > would be a good thing, to allow other types of deliverables to use this > > model. > > Yeah, the general idea was to drive toward best practices (if you do a > single release per cycle, it's important that it's good, so it should > use feature freeze, release candidates...). That said, today it's rare > that we break things... and nobody tests RC releases anyway. > Unfortunately yes :( > > So there is definitely a possibility for just having one single > cycle-based release model: release once or more in a cycle, do not use > betas or RCs. And if there is no release like two weeks before final, > we'd automatically cut one from HEAD. > I'd warmly welcome this. Dmitry > > I'd actually prefer that to switching to cycle-independent, since > deliverables under that model are not part of "the openstack release". > > That said, it might be a bit late to roll that out for this cycle, two > days before we actually feature-freeze cycle-with-rc projects... > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From licanwei_cn at 163.com Tue Apr 7 11:02:46 2020 From: licanwei_cn at 163.com (licanwei) Date: Tue, 7 Apr 2020 19:02:46 +0800 (GMT+08:00) Subject: [Watcher]about IRC meeting on April 8 Message-ID: <7805d977.58c8.171544e8d94.Coremail.licanwei_cn@163.com> Hi, We will have the team meeting tomorrow at 08:00 UTC on #openstack-meeting-alt. Please update the meeting agenda if you have something you want discussed. Thanks, licanwei From thierry at openstack.org Tue Apr 7 12:29:50 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 7 Apr 2020 14:29:50 +0200 Subject: [tc][oslo] Looking for a Victoria Oslo PTL volunteer In-Reply-To: References: <3efe6802-1a75-8869-7da1-0c320d66d417@openstack.org> Message-ID: <6fad0436-33bf-7af0-7afe-e64674755bae@openstack.org> Herve Beraud wrote: > I'm already the release liaison for oslo and I can continue with this > role for a new cycle if it can help us. Thanks Hervé, so the new PTL would totally have help! Any volunteer for taking over the PTL role? -- Thierry Carrez (ttx) From thierry at openstack.org Tue Apr 7 12:55:42 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 7 Apr 2020 14:55:42 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <725b0f10-6edc-e2c2-4b3c-bd00ad22a537@gmx.com> Message-ID: Mohammed Naser wrote: > On Mon, Apr 6, 2020 at 10:53 AM Sean McGinnis wrote: >> [...] >> Cinder has been useful stand alone for several years now, but I have >> also seen the reaction of "why would I use that, I don't need all of >> that OpenStack stuff". >> >> I wonder if we need to do something better to highlight and message that >> there are certain components of OpenStack that are useful as independent >> components that can exist on their own.
> > I think that Sean here hit on the most critical point we need to > drive. There's no amount of splitting that would resolve this. I think a large part of the problem lies in the way we communicate about OpenStack. In particular, it is difficult to find a webpage that talks about ironic as a software component you might want to use. Practical exercise: find ironic on openstack.org. The best path involves two clicks and you only land on a component page[1] without much explanations. Or you reach https://www.openstack.org/bare-metal/ which is great, but more about the use case than the software. We are collectively to blame for this. The data on that component page is maintained by a repo[2] that I issued multiple calls for help for, and yet there aren't many changes proposed to expand the information presented there. And having a mix of a Foundation and a product website coexist at openstack.org means the information is buried so deeply someone born in this century would likely never find it. I think we need to improve on that, but it takes time due to how search engines work. I may sound like a broken record, but the solution in my opinion is to have basic, component-specific websites for components that can be run standalone. Think ironic.io (except .io domains are horrible and it's already taken). A website that would solely be dedicated to presenting Ironic, and would only mention OpenStack in the fineprint, or as a possible integration. It would list Ironic releases outside of openstack cycle context, and display Ironic docs without the openstack branding. That would go further to solve the issue than any governance change IMHO. Thoughts? 
[1] https://www.openstack.org/software/releases/train/components/ironic [2] https://opendev.org/osf/openstack-map/ -- Thierry Carrez (ttx) From sean.mcginnis at gmx.com Tue Apr 7 13:02:59 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 7 Apr 2020 08:02:59 -0500 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <725b0f10-6edc-e2c2-4b3c-bd00ad22a537@gmx.com> Message-ID: <0518e10d-e521-c2cf-ea98-1e446d232434@gmx.com> > Practical exercise: find ironic on openstack.org. The best path > involves two clicks and you only land on a component page[1] without > much explanations. Or you reach https://www.openstack.org/bare-metal/ > which is great, but more about the use case than the software. We are > collectively to blame for this. The data on that component page is > maintained by a repo[2] that I issued multiple calls for help for, and > yet there aren't many changes proposed to expand the information > presented there. And having a mix of a Foundation and a product > website coexist at openstack.org means the information is buried so > deeply someone born in this century would likely never find it. > > I think we need to improve on that, but it takes time due to how > search engines work. I may sound like a broken record, but the > solution in my opinion is to have basic, component-specific websites > for components that can be run standalone. Think ironic.io (except .io > domains are horrible and it's already taken). A website that would > solely be dedicated to presenting Ironic, and would only mention > OpenStack in the fineprint, or as a possible integration. It would > list Ironic releases outside of openstack cycle context, and display > Ironic docs without the openstack branding. > > That would go further to solve the issue than any governance change > IMHO. Thoughts? 
> > [1] https://www.openstack.org/software/releases/train/components/ironic > [2] https://opendev.org/osf/openstack-map/ > I wonder if it could help if we have something like: used_independently: true in https://opendev.org/osf/openstack-map/src/branch/master/openstack_components.yaml From dtantsur at redhat.com Tue Apr 7 13:07:21 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 7 Apr 2020 15:07:21 +0200 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: <725b0f10-6edc-e2c2-4b3c-bd00ad22a537@gmx.com> Message-ID: Hi, On Tue, Apr 7, 2020 at 2:57 PM Thierry Carrez wrote: > Mohammed Naser wrote: > > On Mon, Apr 6, 2020 at 10:53 AM Sean McGinnis > wrote: > >> [...] > >> Cinder has been useful stand alone for several years now, but I have > >> also seen the reaction of "why would I use that, I don't need all of > >> that OpenStack stuff". > >> > >> I wonder if we need to do something better to highlight and message that > >> there are certain components of OpenStack that are useful as independent > >> components that can exist on their own. > > > > I think that Sean here hit on the most critical point we need to > > drive. There's no amount of splitting that would resolve this. > > I think a large part of the problem lies in the way we communicate about > OpenStack. In particular, it is difficult to find a webpage that talks > about ironic as a software component you might want to use. > > Practical exercise: find ironic on openstack.org. The best path involves > two clicks and you only land on a component page[1] without much > explanations. Or you reach https://www.openstack.org/bare-metal/ which > is great, but more about the use case than the software. We are > collectively to blame for this. The data on that component page is > maintained by a repo[2] that I issued multiple calls for help for, and > yet there aren't many changes proposed to expand the information > presented there. 
And having a mix of a Foundation and a product website > coexist at openstack.org means the information is buried so deeply > someone born in this century would likely never find it. > I didn't even know about osf/openstack-map.. Either I'm living under a rock (possible) or there may be some improvements in the internal communication as well (not blaming anyone, we're all in it together). > > I think we need to improve on that, but it takes time due to how search > engines work. I may sound like a broken record, but the solution in my > opinion is to have basic, component-specific websites for components > that can be run standalone. Think ironic.io (except .io domains are > horrible and it's already taken). A website that would solely be > dedicated to presenting Ironic, and would only mention OpenStack in the > fineprint, or as a possible integration. It would list Ironic releases > outside of openstack cycle context, and display Ironic docs without the > openstack branding. > > That would go further to solve the issue than any governance change > IMHO. Thoughts? > I think I've said it already, but it's a great idea and should be done no matter how this discussion ends up. Dmitry > > [1] https://www.openstack.org/software/releases/train/components/ironic > [2] https://opendev.org/osf/openstack-map/ > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Apr 7 13:24:08 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 7 Apr 2020 15:24:08 +0200 Subject: [tc][election] Simple question for the TC candidates In-Reply-To: References: <43b720ed1c1d0da72db344bde4d3da88129f7680.camel@evrard.me> <1c7f1f490722a992283539553d2e78c62fc866e7.camel@evrard.me> Message-ID: <5546b06f-66d6-c280-9de7-4d681f1752c0@openstack.org> Jeremy Freudberg wrote: > On Mon, Apr 6, 2020 at 3:19 AM Jean-Philippe Evrard > wrote: >> [...] >> I like this. 
Should this be self-asserted by the teams, or should we >> provide some kind of validation? For teams that are very close but have >> other openstack services projects dependencies, should the TC work on >> helping removing those dependencies? >> [...] > > Yes the TC should support some kind of initiative to encourage > standalone/reusability. I think self-assertion is fine at the start (I > think this is really important info to publicize) ... but eventually > there should be some kind of reference doc with clear criteria for > what "standalone/reusable" actually means. Of course I'm really just > thinking of services... libraries are another matter... That could easily be added to: https://opendev.org/osf/openstack-map/src/branch/master/openstack_components.yaml and displayed on component pages under: https://www.openstack.org/software/project-navigator (same way we already display dependencies) -- Thierry Carrez (ttx) From fungi at yuggoth.org Tue Apr 7 13:52:42 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 7 Apr 2020 13:52:42 +0000 Subject: [all] Please remove all external resources from docs In-Reply-To: <13800783-0e27-b858-aef2-1c80785021dc@ham.ie> References: <86292630-29e6-3584-8649-970b0c71aa3b@debian.org> <20200406211828.auwmvebaykgbkfek@yuggoth.org> <13800783-0e27-b858-aef2-1c80785021dc@ham.ie> Message-ID: <20200407135241.ayq7j2qtajrnmfgn@yuggoth.org> On 2020-04-07 10:47:41 +0100 (+0100), Graham Hayes wrote: [...] > It *might* make sense - the info it shows is time dependent, so if > a project was asserting follows-stable-policy in pike, showing > that is not necessarily _wrong_ ... Same as if a project was moved > out of OpenStack in Havana, the docs could show the openstack > project badge until then. [...] Fair, if folks prefer that, then the docs build jobs could require the governance repo to obtain the requisite metadata and incorporate the Sphinx extension which generates these SVG images. 
That still might be unworkably complicated for downstream packaging in distros however, since we don't version and release the contents of the governance repo, so would likely need to start doing that (at least for the projects.yaml and Sphinx extension) and declare it as a min-versioned docs build dependency. Seems like a lot of work compared to just stripping them out. My point was that these badges are mostly only relevant to folks browsing README files in a source code hosting platform, but when the README is embedded into a compiled Sphinx document they seem really out of place and are probably best omitted. -- Jeremy Stanley From alifshit at redhat.com Tue Apr 7 13:54:11 2020 From: alifshit at redhat.com (Artom Lifshitz) Date: Tue, 7 Apr 2020 09:54:11 -0400 Subject: [all][Virtual PTG] Notes from the 'Running Virtual Meetups' session Message-ID: Hey all, Last week, Red Hat hosted a 'Running Virtual Meetups' virtual session, with presenters from both Red Hat and others in the industry. I attended, hoping to glean potentially interesting insights into how we can run our virtual PTG. Here are my notes: At a real PTG, the physical and geographical aspects of it make it much easier to separate from our usual work. The usual calls, tickets, emails, whatever, take a back seat due to the constraints of a PTG setting. In a virtual PTG, with us sitting at our usual desks, this will become much harder. This leads into the next point - participant engagement. Having technical conversations for 8 hours a day is already hard enough when you're in the same room as the people you're conversing with. When those people are pixels on your screen, it becomes even harder.
The session I attended was geared more towards presentations than PTG-style roundtable discussion, so I'm not sure that the proposed mitigations would apply to us. For instance, live polls make no sense in a PTG context. One interesting suggestion was to do physical movement between topics/sessions. The online attention span of a human is estimated to be around 10-15 minutes, so having participants stand up and walk around their desk/kitchen/hallway 4 times an hour could be something worth trying. Another way to keep folks engaged is to actually do things besides talking. In our context, this could mean hacking on some POC code together, or collaboratively writing a spec. Again, the session I attended was geared more towards presentations, so the suggestion was to do live debugging. It's actually more engaging when things break and you fix them as a group, than when a presentation goes off without a hitch. That was about it. There was also a list of potentially useful tools, that I'll present here as a link dump because I've never used any of them: https://www.mentimeter.com/ https://pollev.com/ http://roll20.net/ https://obsproject.com/ https://www.sli.do/ The danger is that too much tech brings complexity that's difficult to manage, so sticking to simple well-known tools can also be an advantage. Hope this is useful, cheers! From dtantsur at redhat.com Tue Apr 7 13:56:31 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 7 Apr 2020 15:56:31 +0200 Subject: [doc][release][requirements] Issues with wsmeext.sphinxext In-Reply-To: References: Message-ID: On Tue, Apr 7, 2020 at 12:20 AM Michael Johnson wrote: > This was fixed about a year ago in > https://opendev.org/x/wsme/commit/2be89e587c057ee97d1b143de1a54ceeea22aa93 > > We are probably just missing a release for wsme. I'm not sure how we > do an 'x' project release. I will look into it. > Non-OpenStack projects are released by creating a signed tag and pushing it to gerrit. 
The automation will take it from there. Of course you can just tag it yourself and upload to pypi if you have the necessary rights. Dmitry > > Michael > > On Mon, Apr 6, 2020 at 1:22 PM Pierre Riteau wrote: > > > > Hello, > > > > A heads up for other projects using wsmeext.sphinxext: it seems to be > > broken following the release of Sphinx 3.0.0 yesterday. > > Our openstack-tox-docs job in blazar started to fail with: > > > > Extension error: > > Could not import extension wsmeext.sphinxext (exception: cannot import > > name 'l_' from 'sphinx.locale' > > > > Looks like it would affect aodh and cloudkitty too. > > > > Pierre Riteau (priteau) From amotoki at gmail.com Tue Apr 7 14:26:04 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 7 Apr 2020 23:26:04 +0900 Subject: [OpenStack-I18n] [I18n][PTL][election] Nominations were over - no PTL candidacy In-Reply-To: References: <6175f5cf-94e6-045d-6291-1effd140d6d8@gmail.com> Message-ID: Thanks Thierry for the detailed explanation and clarification. I know what happened in the i18n team as I have been involved in the effort. Transition to the SIG totally makes sense to me now. Akihiro On Fri, Apr 3, 2020 at 6:01 PM Thierry Carrez wrote: > > Akihiro Motoki wrote: > > [...] > > I wonder what will change if we move to SIG. > > That's a big question which hits me. We need to clarify what are the > > scope of the i18n team (or SIG). > > The main difference between a SIG and a Project Team is that SIGs have > less constraints. Project Teams are typically used to produce a part of > the OpenStack release, and so we require some level of accountability > (know who is empowered to sign off releases, know how to contact for > embargoed security issues). That is why we currently require that a PTL > is determined every 6 months. > > SIGs on the other hand are just a group of people sharing a common > interest.
There might be group leads, but no real need for a final call > to be made. It's just a way to pool resources toward a common goal. > > Historically we've considered translations as a "part" of the openstack > release, and so I18n is currently a project team. That said, I18n is > arguably a special interest, it does not really need PTLs to be > designated, and the release is OK even if some translations are not > complete. It's a 'best effort' work, so it does not require the heavy > type of accountability that we require from project teams. > > So in summary: making I18n a SIG would remove the need to designate a > PTL every 6 months, and just continue work as usual. > > -- > Thierry Carrez (ttx) > From fungi at yuggoth.org Tue Apr 7 14:34:43 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 7 Apr 2020 14:34:43 +0000 Subject: [tc] [ironic] Promoting ironic to a top-level opendev project? In-Reply-To: References: Message-ID: <20200407143442.63ymrt36r7bby2pk@yuggoth.org> On 2020-04-07 12:03:53 +0200 (+0200), Dmitry Tantsur wrote: > On Mon, Apr 6, 2020 at 4:45 PM Mohammed Naser wrote: [...] > > First of all, it promotes even more bureaucracy within our community > > which is something that we're trying to split. "Ironic" and > > "OpenStack" becoming two separate pieces means that we've failed as a > > community to be able to deliver what OpenStack is. If we do this, we > > further promote the separation of our communities and that is not > > sustainable. With a dwindling contributor base, we'll find power in > > standing together in big groups, not by isolating ourselves to small > > islands. > > This sounds a bit like the "us vs them" mentality, which I think may be > hurting OpenStack now. I think we should be more open to the things that > happen outside of our control. This also goes back to my points about Oslo > and the bigger Python world. [...] 
The assertion that you can't include Ironic in OpenShift while it's still tainted with the OpenStack name seems like even more of an us vs them mentality, if I'm being totally honest. Can't we all just get along? Why *can't* OpenShift include OpenStack projects? I haven't seen this adequately explained. > Maybe rather than one "openstack" namespace we can have a > namespace per program? Like opendev.org/compute/nova, etc? This > may also solve a part of the problem with oslo libraries adoption > outside of OpenStack. This was in fact one of the options presented to the TC when we reorganized for the move from git.openstack.org to opendev.org, but the TC chose to have a single namespace for all official OpenStack deliverables. It can be revisited, though reshuffling every OpenStack deliverable repo at mass scale is going to be a nontrivial effort so is not something we should engage in without careful planning and consideration. > opendev.org/libs/stevedore doesn't suggest that it belongs to > openstack the same way as opendev.org/openstack/stevedore. And it > will provide an obvious link between code names and purposes! Who > can we talk to to make this happen? [...] This specific example likely won't work. Remember that OpenDev is shared by multiple communities, and so a "libs" namespace in OpenDev could be misleading if all the repositories within it are the product of a single community. On the other hand, something less generic like opendev.org/plugh/stevedore would be fine. (Side question, why not just the obvious opendev.org/oslo/stevedore? Is the name "Oslo" considered as tainted as "Openstack" by product managers?) > Note that I don't suggest a highly dynamic web site. More of a > consumer-oriented landing page like http://metal3.io/ and an > umbrella-neutral hosting for documentation like > http://tripleo.org/. I do agree with Thierry that it's doable > without the actual split, but then it needs cooperation from the > Foundation and the TC. [...] 
Apparently it doesn't need cooperation from the Foundation and the TC, since the TripleO team already did it without actually asking anyone whether they should. > 1) More frequent releases with a possibility of bug-fix support for them > (i.e. short-lived stable branches) and upgrades (time to ditch grenade?). The existence of extended maintenance mode for stable branches is evidence that distros (in particular) want fewer longer-lived stable branches, not more shorter-lived ones. > 2) Stop release-to-release matching for interactions with other services > (largely achieved by microversioning). Deciding what integrations you test in upgrades will likely get very interesting here. The compatibility matrix between Ironic releases and Nova releases gets increasingly complex the more releases you're supporting at a different cadence. Elsewhere you draw comparisons to Libvirt, which is probably a similar situation. Does Libvirt actually test whether they'll break Nova? Or do they just make whatever changes they want (with deference to their API stability policies) and expect consumers like Nova to perform their own independent evaluation of new releases? I have a feeling that if Ironic's release cadence drifts significantly from OpenStack's, then the situation for Nova will be closer to what it is in their relationship with Libvirt today. > 3) Support for non-coordinated releases in installation tools (okay, this > one is fun, I'm not sure how to approach it). [...] This part might not be as hard as you think. Deployment projects already deal with lots of dependencies which are not released on the same schedule as OpenStack (Libvirt as already mentioned, but hundreds upon hundreds more beyond). -- Jeremy Stanley
From gmann at ghanshyammann.com Tue Apr 7 14:45:12 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 07 Apr 2020 09:45:12 -0500 Subject: [all] Gate status: tempest-full broken on stein Message-ID: <171551a32d4.f5c40d0c245176.1518110488935954922@ghanshyammann.com> Hello Everyone, tempest-full (the py2 job) started failing on the stable/stein gate because devstack tries to install master Tempest system-wide, which fails Tempest's minimum required Python version check. We have not set the INSTALL_TEMPEST flag to false for stable branches since stable/pike. I am going to add this as one of the things to do while cutting a devstack stable branch. More details in the bug: https://bugs.launchpad.net/tempest/+bug/1871327 Here are the fixes I am backporting from stable/train; please hold rechecks until they are merged. https://review.opendev.org/#/q/I60949fb735c82959fb2cfcb6aeef9e33fb0445b6 -gmann From smooney at redhat.com Tue Apr 7 15:00:37 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 07 Apr 2020 16:00:37 +0100 Subject: [all] Please remove all external resources from docs In-Reply-To: <20200407135241.ayq7j2qtajrnmfgn@yuggoth.org> References: <86292630-29e6-3584-8649-970b0c71aa3b@debian.org> <20200406211828.auwmvebaykgbkfek@yuggoth.org> <13800783-0e27-b858-aef2-1c80785021dc@ham.ie> <20200407135241.ayq7j2qtajrnmfgn@yuggoth.org> Message-ID: On Tue, 2020-04-07 at 13:52 +0000, Jeremy Stanley wrote: > On 2020-04-07 10:47:41 +0100 (+0100), Graham Hayes wrote: > [...] > > It *might* make sense - the info it shows is time dependent, so if > > a project was asserting follows-stable-policy in pike, showing > > that is not necessarily _wrong_ ... Same as if a project was moved > > out of OpenStack in Havana, the docs could show the openstack > > project badge until then. > > [...]
> > Fair, if folks prefer that, then the docs build jobs could require > the governance repo to obtain the requisite metadata and incorporate > the Sphinx extension which generates these SVG images. For what it's worth, I'm not sure why this thread was suddenly revived after 6 months (or maybe I was just ignoring it), but we discussed this as part of the PDF docs item back in Denver, and for the PDF docs we did want to convert the badges into SVGs and embed them. I don't know if that was ever done, because it took a lot of time to get the docs building correctly initially, so I think the task to embed rather than link to the badges may have fallen by the wayside. > That still > might be unworkably complicated for downstream packaging in distros > however, since we don't version and release the contents of the > governance repo, so would likely need to start doing that (at least > for the projects.yaml and Sphinx extension) and declare it as a > min-versioned docs build dependency. Seems like a lot of work > compared to just stripping them out. My point was that these badges > are mostly only relevant to folks browsing README files in a source > code hosting platform, but when the README is embedded into a > compiled Sphinx document they seem really out of place and are > probably best omitted. A simple answer would also be to not ship the README in the docs, or to ship it as plain text. I personally prefer to read the RST before it is rendered anyway, so if I were looking for the docs in a distro package I would hope to find the RST files, not the HTML docs.
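Jeremy's "just stripping them out" option is easy to prototype. The snippet below is a minimal sketch, not an existing OpenStack tool: the badge URL pattern and the function name are assumptions about what the governance Sphinx extension emits at the top of project READMEs. A docs build job could run something like this over README.rst before Sphinx embeds it:

```python
import re

# Assumed badge markup shape (hypothetical example, based on the
# governance badge URLs discussed in this thread):
#   .. image:: https://governance.openstack.org/tc/badges/<repo>.svg
#       :target: https://governance.openstack.org/...
BADGE_RE = re.compile(
    r"^\.\. image:: https://governance\.openstack\.org/\S+\n"
    r"(?:[ \t]+:\S+:.*\n?)*",   # swallow indented option lines (:target: etc.)
    re.MULTILINE,
)

def strip_badges(rst_text: str) -> str:
    """Return README text with remote governance badge directives removed."""
    return BADGE_RE.sub("", rst_text)

readme = """\
.. image:: https://governance.openstack.org/tc/badges/nova.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html

Nova
====
OpenStack Compute.
"""
print(strip_badges(readme))
```

This keeps the README intact on the code-hosting platform while the compiled Sphinx document (and any distro package built from it) carries no external image references.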
From marie.delavergne at inria.fr Tue Apr 7 15:08:03 2020 From: marie.delavergne at inria.fr (Marie Delavergne) Date: Tue, 7 Apr 2020 17:08:03 +0200 (CEST) Subject: [tc][all][SDK][API] Multi-region/edge OpenStack deployment Message-ID: <1047736409.24639028.1586272083357.JavaMail.zimbra@inria.fr> Hi, Following the previous discussions regarding approaches to make several instances of OpenStack collaborative[1], we finally decided to write a research report on the approach we initially proposed during the edge hackathon in Berlin (OSF summit 2018). The document is a short article (~10 pages long)[2] that gives a rather complete overview of the approach. If you are interested in how multiple distinct OpenStack instances can give the illusion of a single system, you might want to give it a look. A brief excerpt: “We propose a novel approach that delivers any collaboration between services of distinct OpenStacks once and for all. Our approach eliminates development efforts related to the brokering aspect by leveraging dynamic composition and a Domain Specific Language (DSL). Using these two mechanisms, Admins/DevOps can specify, on a per-request basis, which services from which OpenStacks are required for the collaboration. This information is then interpreted to dynamically recompose services between the different OpenStacks, enabling the execution of single-, cross- or multiple-site requests without code effort.” The main idea is, given several running OpenStacks, to use a DSL to specify on which instance of a service (i.e. on which OpenStack) the user wants their request to be executed. For instance, to provision a VM on a Nova in Paris using an image from a Glance in New York, you may contact the API either in New York or in Paris and perform the following request: "openstack server create --image debian --scope { compute: Paris, image: New York }" Along with the report comes a proof of concept available on GitLab[3]. Feel free to get back to us.
Cheers [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013562.html [2] https://hal.inria.fr/hal-02527366/ [3] https://gitlab.inria.fr/discovery/openstackoid ---- Marie Delavergne From zhangcf9988 at aliyun.com Mon Apr 6 02:11:15 2020 From: zhangcf9988 at aliyun.com (zhangcf9988 at aliyun.com) Date: Mon, 6 Apr 2020 10:11:15 +0800 Subject: [nova]-Pci device pass-through error. Message-ID: <2020040610111524325643@aliyun.com> Hello, contributors to the OpenStack community: I've got a problem and I'd love to get some advice. We encountered the following problem with the openstack-pike release. Deployment environment: the Pike version of OpenStack was installed on a CentOS 7.7.1908 system using yum, with nova version 9.1.3. Nova-compute node: a Dell R730 server with three graphics cards of the same model (Nvidia 1050 Ti) installed; the IOMMU was enabled and graphics card pass-through was configured. But we had a problem: two PCI expansion boards were installed in the Dell R730, with two Nvidia 1050 Ti graphics cards on the first PCI expansion board and one on the second.
Each Nvidia 1050 Ti graphics card exposes two PCI functions: a video chip (product_id: 1c82, vendor_id: 10de) and a sound chip (product_id: 0fb9, vendor_id: 10de). When we create a VM, we want it to be passed through both the video and the sound chip of one card at the same time. In our tests, however, the VM receives two video chips or two sound chips rather than the one video chip plus one sound chip that make up a complete physical card. As a consequence, with three cards installed none of them can be used normally: the first VM occupies the video chips of two different cards, and creating a second VM always fails because libvirt detects that the physical card's chips are already occupied by another VM. I also tested removing one of the two Nvidia cards from the first PCI expansion board, so that one physical card sits on each of the two expansion boards; in that setup the created VM obtained one video chip and one sound chip at the same time, i.e. a complete Nvidia physical card. We would like to be able to install several cards of the same model. We tried working around the problem by modifying the pci_devices table in the nova database; that corrects the wrong assignment for the first VM, but the second VM still fails at creation, and we then also have to clean up our manual database changes. At present we see two possible solutions: the first is to modify the source code, and the second is to use graphics cards of different models. We do not know whether this problem can be solved by modifying the source code. Thank you very much.
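One configuration detail worth double-checking before patching nova: in the [pci] sections below, both the video function (1c82) and the audio function (0fb9) are registered under the single alias name "nvidia1050ti", so a flavor request for 'nvidia1050ti:2' can legitimately be satisfied by any two matching devices, for example two video chips from two different cards. A possible alternative layout (a sketch only; the alias names here are made up, and this by itself still does not guarantee that both functions come from the same physical card) is to give each function its own alias:

```ini
[pci]
# One alias per PCI function, instead of one shared name for both chips:
alias = { "name": "1050ti-video", "product_id": "1c82", "vendor_id": "10de", "device_type": "type-PCI" }
alias = { "name": "1050ti-audio", "product_id": "0fb9", "vendor_id": "10de", "device_type": "type-PCI" }
```

The flavor would then request one device per alias, e.g. --property "pci_passthrough:alias"="1050ti-video:1,1050ti-audio:1" (to the best of my knowledge nova accepts a comma-separated alias list in this extra spec), so that each VM gets exactly one video and one audio function rather than any two matching devices.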
The following is the configuration of nova on the control node:

[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 172.16.160.206
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:mqpass123@baseserver
available_filters = nova.scheduler.filters.all_filters
available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
enabled_filters = RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:novapass123@baseserver/nova_api

[cinder]
os_region_name = RegionOne

[database]
connection = mysql+pymysql://nova:novapass123@baseserver/nova

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = baseserver:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = novapass123

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = netpass123
service_metadata_proxy = true
metadata_proxy_shared_secret = 20190909

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[pci]
alias = { "name": "nvidia1050ti", "product_id": "1c82", "vendor_id": "10de", "device_type": "type-PCI" }
alias = { "name": "nvidia1050ti", "product_id": "0fb9", "vendor_id": "10de", "device_type": "type-PCI" }

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = novapass123

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

The following is the configuration for nova-compute (/etc/nova/nova.conf):

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:mqpass123@baseserver
my_ip = 172.16.160.204
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
instance_name_template = instance-%(uuid)s
remove_unused_base_images = false

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = baseserver:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = novapass123

[vnc]
enabled = True
#vncserver_listen = 0.0.0.0
#vncserver_proxyclient_address = $my_ip
vncserver_listen = 172.16.160.204
vncserver_proxyclient_address = 172.16.160.204
novncproxy_base_url = http://172.16.160.206:6080/vnc_auto.html
keymap = en-us

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = novapass123

[libvirt]
virt_type = kvm
#virt_type = qemu
use_usb_tablet = true

[filter_scheduler]
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateCoreFilter,AggregateDiskFilter,DifferentHostFilter,SameHostFilter,PciPassthroughFilter
available_filters = nova.scheduler.filters.all_filters

[PCI]
passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1c82" }, { "vendor_id": "10de", "product_id": "0fb9" }]
alias = { "name": "nvidia1050ti", "product_id": "1c82", "vendor_id": "10de", "device_type": "type-PCI" }
alias = { "name": "nvidia1050ti", "product_id": "0fb9", "vendor_id": "10de", "device_type": "type-PCI" }
[neutron]
metadata_proxy_shared_secret = 20190909
service_metadata_proxy = true
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = netpass123

flavor:
openstack flavor create --public --ram 10240 --disk 50 --vcpus 4 nvidia1050ti --id 1 \
  --property os_type=windows \
  --property hw_firmware_type=uefi \
  --property hw_machine_type=q35 \
  --property img_hide_hypervisor_id=true \
  --property os_secure_boot=required \
  --property hw_cpu_cores=4 \
  --property hw_cpu_sockets=1 \
  --property hw_cpu_threads=2 \
  --property pci_passthrough:alias='nvidia1050ti:2'

image:
openstack image create win10base1903UEFI --file win10base1903Q35V1.qcow2 --container-format bare --disk-format qcow2 --public \
  --property os_type=windows \
  --property hw_firmware_type=uefi \
  --property hw_machine_type=q35 \
  --property img_hide_hypervisor_id=true \
  --property os_secure_boot=required \
  --property hw_cpu_cores=4 \
  --property hw_cpu_sockets=1 \
  --property hw_cpu_threads=2 \
  --property pci_passthrough:alias='nvidia1050ti:2'

The following is an error reported in openstack-nova-compute:

2020-03-31 10:57:04.555 2643 INFO os_vif [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:51:25:34,bridge_name='brq8d505aa1-51',has_traffic_filtering=True,id=1a673f99-8a71-41be-a87e-40f3308fcfc3,network=Network(8d505aa1-5146-49db-8814-7dcef91bd1c1),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap1a673f99-8a')
2020-03-31 10:57:04.723 2643 ERROR nova.virt.libvirt.guest [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] Error launching a defined domain with XML: instance-7e5d283f-5ed5-4ebe-b145-d348e94addca
[The libvirt domain XML that followed was stripped of its markup by the archive; only the text content survives. Recoverable values: uuid 7e5d283f-5ed5-4ebe-b145-d348e94addca, name test1, created 2020-03-31 02:57:04, flavor 10240 MB RAM / 50 GB disk / 4 vCPUs, project and user demo, memory 10485760 KiB, "RDO OpenStack Compute 16.1.7-1.el7", system serial 30535dfb-90e3-401e-bbf7-f2c43120fb65, hvm boot with UEFI loader /usr/share/OVMF/OVMF_CODE.fd and nvram /var/lib/libvirt/qemu/nvram/instance-7e5d283f-5ed5-4ebe-b145-d348e94addca_VARS.fd, lifecycle actions destroy/restart/destroy, emulator /usr/libexec/qemu-kvm.]